Compare commits


27 Commits

Author SHA1 Message Date
Stefan Melmuk
5d84f17600 fix hiding of signup link (#6113)
The registration link should be hidden if signup is not allowed and the
whitelist is empty, unless mail is disabled and invitations are allowed.
2025-07-29 12:13:02 +02:00
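As a hedged illustration of the rule described in this commit message (parameter and helper names are assumptions, not the actual Vaultwarden code; the real check is the `CONFIG.is_signup_disabled()` call visible in the diffs below), the link-hiding condition could be sketched as:

```rust
// Sketch only: the registration link is hidden when open signup is off and
// no whitelist exists, except in the invite-only case (mail disabled but
// invitations allowed).
fn signup_link_hidden(
    signups_allowed: bool,
    whitelist_empty: bool,
    mail_enabled: bool,
    invitations_allowed: bool,
) -> bool {
    let signup_disabled = !signups_allowed && whitelist_empty;
    let invite_only = !mail_enabled && invitations_allowed;
    signup_disabled && !invite_only
}
```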
Mathijs van Veluw
0db4b00007 Update crates to trigger rebuild for mysql issue (#6111)
Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-28 21:31:02 +02:00
Stefan Melmuk
a0198d8d7c fix account key rotation (#6105) 2025-07-27 12:18:54 +02:00
Mathijs van Veluw
dfad931dca Update crates (#6100)
Updated crates and made adjustments where needed.
Also removed an unused struct that the nightly compiler complained about.

Used pinact to update GitHub Actions.
Validated GitHub Actions with zizmor.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-26 14:58:39 +02:00
Richy
25865efd79 fix: resolve group permission conflicts with multiple groups (#6017)
* fix: resolve group permission conflicts with multiple groups

When a user belonged to multiple groups with different permissions for the
same collection, only the permissions from one group were applied instead
of combining them properly. This caused users to see incorrect access levels
when initially viewing collection items.

The fix combines permissions from all user groups by taking the most
permissive settings:
- read_only: false if ANY group allows write access
- hide_passwords: false if ANY group allows password viewing
- manage: true if ANY group allows management

This ensures users immediately see the correct permissions when opening
collection entries, matching the behavior after editing and saving.
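A minimal, hedged sketch of this merge, using a simplified `CollectionGroup` (the real logic is the `CipherSyncData` change in the diff further down):

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct CollectionGroup {
    collections_uuid: String,
    read_only: bool,
    hide_passwords: bool,
    manage: bool,
}

// Fold all of a user's group entries per collection, keeping the most
// permissive combination as described above.
fn merge_group_permissions(groups: Vec<CollectionGroup>) -> HashMap<String, CollectionGroup> {
    groups.into_iter().fold(HashMap::new(), |mut combined, cg| {
        combined
            .entry(cg.collections_uuid.clone())
            .and_modify(|existing| {
                existing.read_only &= cg.read_only;           // false if ANY group allows write
                existing.hide_passwords &= cg.hide_passwords; // false if ANY group allows viewing
                existing.manage |= cg.manage;                 // true if ANY group allows manage
            })
            .or_insert(cg);
        combined
    })
}
```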

* Update src/api/core/ciphers.rs

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>

* fix: format

* fix: restrict collection manage permissions to managers only

Prevent users from getting logged out when they have manage permissions by only allowing manage permissions for MembershipType::Manager and higher roles.

* refactor: cipher permission logic to prioritize user access

Updated permission checks to return user collection permissions if available, otherwise fall back to group permissions. Clarified comments to indicate that user permissions overrule group permissions and corrected the logic for the 'manage' flag to use boolean OR instead of AND.
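Sketched with assumed names, the priority rule reads:

```rust
#[derive(Clone, Copy)]
struct Perm {
    read_only: bool,
    hide_passwords: bool,
    manage: bool,
}

// User-level collection permissions overrule group permissions; groups are
// only consulted when no direct user permission exists (names assumed).
fn effective_permission(user: Option<Perm>, group: Option<Perm>) -> Option<Perm> {
    user.or(group)
}
```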

---------

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2025-07-25 20:58:41 +02:00
Mathijs van Veluw
bcf627930e Adjust issue template to better highlight searching for closed and open issues (#6096) 2025-07-25 18:17:55 +02:00
Timshel
ce70cd2cf4 Hide login form custom fields (#6054)
Co-authored-by: Timshel <timshel@480s>
2025-07-14 22:01:20 +02:00
Daniel
2ac589d4b4 Fix digest SHA extraction step (#6059) 2025-07-13 12:20:16 +02:00
Stefan Melmuk
b2e2aef7de fix hash reference in release.yml (#6058) 2025-07-13 10:22:33 +02:00
Daniel García
0755bb19c0 Update release.yml (#6057)
Seems like docker can't use the hash references: https://github.com/dani-garcia/vaultwarden/actions/runs/16242780267/job/45861396226
2025-07-13 01:01:08 +02:00
Mathijs van Veluw
fee0c1c711 Update crates, workflow and issue template (#6056)
- Updated all the crates, which probably fixes #5959
- Updated all the workflows and tested them with zizmor
  Also added zizmor as a workflow itself.
- Updated the issue template to better emphasize searching first.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-13 00:48:56 +02:00
Stefan Melmuk
f58539f0b4 close unmatched left parenthesis in the README (#6046) 2025-07-10 13:52:52 +02:00
Stefan Melmuk
e718afb441 improve the usage section of the README (#6041) 2025-07-09 23:44:20 +02:00
Mathijs van Veluw
55945ad793 Update web-vault and admin resources (#6044)
- Updated web-vault to v2025.7.0
- Updated admin JS and CSS files

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-09 23:26:12 +02:00
Stefan Melmuk
4fd22d8e3b fix hiding email as 2fa provider (#6026) 2025-07-09 23:25:11 +02:00
mountdisk
d6a8fb8e48 chore: fix some minor issues in the comments (#5998)
Signed-off-by: mountdisk <mountdisk@icloud.com>
2025-07-09 23:24:29 +02:00
Mathijs van Veluw
3b48e6e903 Fix v2025.6.x clients and newer to delete items (#6004) 2025-07-01 10:33:22 +02:00
Chase Douglas
6b9333b33e Use existing reqwest client for AWS S3 requests (#5917)
This removes a lot of duplicate client dependency bloat for roughly
equivalent functionality.

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2025-06-30 22:57:00 +02:00
Daniel García
a545636ee5 Update flags version and enable manual error reporting (#5994) 2025-06-27 21:39:38 +02:00
Mathijs van Veluw
f125d5f1a1 Misc Updates and favicon fixes (#5993)
- Updated crates
- Switched to rustls instead of native-tls
  Some dependencies were already using rustls by default or without an alternative option.
  By removing native-tls we also have just one way of working here.

Updated favicon fetching, which is now able to fetch more icons.
- Use rustls instead of native-tls
  This seems to work better, probably because of TLS sniffing
- Use different user-agent and added several other headers
- Added SVG support. SVG images will be sanitized before being stored or presented.
  Also, a special CSP for images will be sent to prevent scripts etc. in SVG images from executing.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-27 21:20:36 +02:00
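A hedged sketch of the sanitization step, mirroring the `svg_hush` calls added in the favicon diff below (error handling simplified; the wrapper function name is an assumption):

```rust
use svg_hush::{data_url_filter, Filter};

// Strip scripts, event handlers and external references from a fetched SVG
// before it is stored or served; on any parse error the icon is discarded.
fn sanitize_svg(raw: &[u8]) -> Option<Vec<u8>> {
    let mut filter = Filter::new();
    filter.set_data_url_filter(data_url_filter::allow_standard_images);
    let mut sanitized = Vec::new();
    filter.filter(raw, &mut sanitized).ok()?;
    Some(sanitized)
}
```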
Mathijs van Veluw
ad75ce281e Fix an issue with yubico keys not validating (#5991)
* Fix an issue with yubico keys not validating

When adding or updating Yubico OTP keys there were some issues with the validation.
Looks like the web-vault sends all keys, not only filled-in keys, which triggered a check on empty keys.
Also, we should only return filled-in keys, not the empty ones.

Fixes #5986
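As a rough sketch of the fix (the function name is an assumption; the actual change is in the `activate_yubikey` diff below):

```rust
// Ignore the empty slots the web-vault submits, and keep only the
// 12-character YubiKey ID prefix of each filled-in OTP.
fn extract_yubikey_ids(yubikeys: Vec<String>) -> Vec<String> {
    yubikeys
        .into_iter()
        .filter(|key| !key.is_empty())
        .filter_map(|key| key.get(..12).map(str::to_owned))
        .collect()
}
```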

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code - take 2

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-26 21:46:56 +02:00
Stefan Melmuk
9059437c35 fix account recovery withdrawal (#5968)
since `web-v2025.4.0` the client sends `""` instead of `null`, so we
also have to check whether the `reset_password_key` is empty or not.
2025-06-17 18:55:11 +02:00
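A minimal sketch of the normalization this describes (function name assumed; compare the `put_reset_password_enrollment` diff below):

```rust
// Treat an empty reset_password_key the same as a missing one, so withdrawal
// also works for clients that send "" instead of null.
fn normalize_reset_password_key(key: Option<String>) -> Option<String> {
    match key {
        Some(k) if k.is_empty() => None,
        other => other,
    }
}
```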
Stefan Melmuk
c84db0daca allow signup for invited users (#5967)
invited users (e.g. via /admin panel or org invite) should be able to
register if email is disabled.
2025-06-17 11:15:36 +02:00
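A hedged sketch of the registration gate described above (parameter names are assumptions; compare the `register_verification_email` diff below):

```rust
// Invited users may register even when signups are off, but only while mail
// is disabled (otherwise they must go through the emailed invitation flow).
fn registration_allowed(signup_allowed: bool, mail_enabled: bool, has_invitation: bool) -> bool {
    signup_allowed || (!mail_enabled && has_invitation)
}
```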
Mathijs van Veluw
72adc239f5 Update crates and web-vault (#5955)
- Updated crates
- Updated web-vault to v2025.6.0

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-15 01:19:53 +02:00
Nick Grimshaw
34ebeeca76 Minor fixes to copy in .env.template (#5928) 2025-06-15 01:19:08 +02:00
Stefan Melmuk
0469d9ba4c make css for login-page position independent (#5906)
* make css for login-page position independent

starting with v2025.5.1 the login page will have custom classes so the
fields to be disabled can be targeted specifically without risking
side-effects

* hide buttons after cancelling login
2025-06-14 19:31:51 +02:00
Daniel
eaa6ad06ed Update Alpine to version 3.22 (#5938) 2025-06-14 19:30:19 +02:00
36 changed files with 984 additions and 815 deletions

View File

@@ -130,7 +130,7 @@
## and are always in terms of UTC time (regardless of your local time zone settings).
##
## The schedule format is a bit different from crontab as crontab does not contains seconds.
## You can test the the format here: https://crontab.guru, but remove the first digit!
## You can test the format here: https://crontab.guru, but remove the first digit!
## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK
## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri"
## "0 30 * * * * "
@@ -273,7 +273,7 @@
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Invitations org admins to invite users, even when signups are disabled
## Allows org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
@@ -341,16 +341,16 @@
## Icon download timeout
## Configure the timeout value when downloading the favicons.
## The default is 10 seconds, but this could be to low on slower network connections
## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10
## Block HTTP domains/IPs by Regex
## Any domains or IPs that match this regex won't be fetched by the internal HTTP client.
## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex withing single quotes!
## NOTE: Always enclose this regex within single quotes!
# HTTP_REQUEST_BLOCK_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Enabling this will cause the internal HTTP client to refuse to connect to any non global IP address.
## Enabling this will cause the internal HTTP client to refuse to connect to any non-global IP address.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# HTTP_REQUEST_BLOCK_NON_GLOBAL_IPS=true

1
.github/CODEOWNERS vendored
View File

@@ -1,5 +1,6 @@
/.github @dani-garcia @BlackDex
/.github/** @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/ISSUE_TEMPLATE/** @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex
/SECURITY.md @dani-garcia @BlackDex

View File

@@ -8,15 +8,30 @@ body:
value: |
Thanks for taking the time to fill out this bug report!
Please *do not* submit feature requests or ask for help on how to configure Vaultwarden here.
Please **do not** submit feature requests or ask for help on how to configure Vaultwarden here!
The [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions/) has sections for Questions and Ideas.
Our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki/) has topics on how to configure Vaultwarden.
Also, make sure you are running [![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest) of Vaultwarden!
And search for existing open or closed issues or discussions regarding your topic before posting.
Be sure to check and validate the Vaultwarden Admin Diagnostics (`/admin/diagnostics`) page for any errors!
See here [how to enable the admin page](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page).
> [!IMPORTANT]
> ## :bangbang: Search for existing **Closed _AND_ Open** [Issues](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue%20) **_AND_** [Discussions](https://github.com/dani-garcia/vaultwarden/discussions?discussions_q=) regarding your topic before posting! :bangbang:
#
- type: checkboxes
id: checklist
attributes:
label: Prerequisites
description: Please confirm you have completed the following before submitting an issue!
options:
- label: I have searched the existing **Closed _AND_ Open** [Issues](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue%20) **_AND_** [Discussions](https://github.com/dani-garcia/vaultwarden/discussions?discussions_q=)
required: true
- label: I have searched and read the [documentation](https://github.com/dani-garcia/vaultwarden/wiki/)
required: true
#
- id: support-string
type: textarea
@@ -36,7 +51,7 @@ body:
attributes:
label: Vaultwarden Build Version
description: What version of Vaultwarden are you running?
placeholder: ex. v1.31.0 or v1.32.0-3466a804
placeholder: ex. v1.34.0 or v1.34.1-53f58b14
validations:
required: true
#
@@ -67,7 +82,7 @@ body:
attributes:
label: Reverse Proxy
description: Are you using a reverse proxy, if so which and what version?
placeholder: ex. nginx 1.26.2, caddy 2.8.4, traefik 3.1.2, haproxy 3.0
placeholder: ex. nginx 1.29.0, caddy 2.10.0, traefik 3.4.4, haproxy 3.2
validations:
required: true
#
@@ -115,7 +130,7 @@ body:
attributes:
label: Client Version
description: What version(s) of the client(s) are you seeing the problem on?
placeholder: ex. CLI v2024.7.2, Firefox 130 - v2024.7.0
placeholder: ex. CLI v2025.7.0, Firefox 140 - v2025.6.1
#
- id: reproduce
type: textarea

View File

@@ -66,13 +66,15 @@ jobs:
- name: Init Variables
id: toolchain
shell: bash
env:
CHANNEL: ${{ matrix.channel }}
run: |
if [[ "${{ matrix.channel }}" == 'rust-toolchain' ]]; then
if [[ "${CHANNEL}" == 'rust-toolchain' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'channel.*"(\K.*?)(?=")' rust-toolchain.toml)"
elif [[ "${{ matrix.channel }}" == 'msrv' ]]; then
elif [[ "${CHANNEL}" == 'msrv' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'rust-version.*"(\K.*?)(?=")' Cargo.toml)"
else
RUST_TOOLCHAIN="${{ matrix.channel }}"
RUST_TOOLCHAIN="${CHANNEL}"
fi
echo "RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" | tee -a "${GITHUB_OUTPUT}"
# End Determine rust-toolchain version
@@ -80,7 +82,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master @ Apr 29, 2025, 9:22 PM GMT+2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -90,7 +92,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master @ Apr 29, 2025, 9:22 PM GMT+2
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -115,7 +117,7 @@ jobs:
# Enable Rust Caching
- name: Rust Caching
uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.

View File

@@ -4,7 +4,8 @@ permissions: {}
on: [ push, pull_request ]
jobs:
docker-templates:
docker-templates:
name: Validate docker templates
permissions:
contents: read
runs-on: ubuntu-24.04
@@ -20,7 +21,7 @@ jobs:
- name: Run make to rebuild templates
working-directory: docker
run: make
run: make
- name: Check for unstaged changes
working-directory: docker

View File

@@ -14,7 +14,7 @@ jobs:
steps:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:

View File

@@ -47,7 +47,7 @@ jobs:
# Start a local docker registry to extract the compiled binaries to upload as artifacts and attest them
services:
registry:
image: registry:2
image: registry@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 # v3.0.0
ports:
- 5000:5000
env:
@@ -76,7 +76,7 @@ jobs:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
@@ -192,7 +192,7 @@ jobs:
- name: Bake ${{ matrix.base_image }} containers
id: bake_vw
uses: docker/bake-action@4ba453fbc2db7735392b93edf935aaf9b1e8f747 # v6.5.0
uses: docker/bake-action@37816e747588cb137173af99ab33873600c46ea8 # v6.8.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -213,14 +213,15 @@ jobs:
shell: bash
env:
BAKE_METADATA: ${{ steps.bake_vw.outputs.metadata }}
BASE_IMAGE: ${{ matrix.base_image }}
run: |
GET_DIGEST_SHA="$(jq -r '.["${{ matrix.base_image }}-multi"]."containerimage.digest"' <<< "${BAKE_METADATA}")"
GET_DIGEST_SHA="$(jq -r --arg base "$BASE_IMAGE" '.[$base + "-multi"]."containerimage.digest"' <<< "${BAKE_METADATA}")"
echo "DIGEST_SHA=${GET_DIGEST_SHA}" | tee -a "${GITHUB_ENV}"
# Attest container images
- name: Attest - docker.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.DOCKERHUB_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -228,7 +229,7 @@ jobs:
- name: Attest - ghcr.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.GHCR_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -236,7 +237,7 @@ jobs:
- name: Attest - quay.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.QUAY_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -248,6 +249,7 @@ jobs:
shell: bash
env:
REF_TYPE: ${{ github.ref_type }}
BASE_IMAGE: ${{ matrix.base_image }}
run: |
# Check which main tag we are going to build determined by ref_type
if [[ "${REF_TYPE}" == "tag" ]]; then
@@ -257,7 +259,7 @@ jobs:
fi
# Check which base_image was used and append -alpine if needed
if [[ "${{ matrix.base_image }}" == "alpine" ]]; then
if [[ "${BASE_IMAGE}" == "alpine" ]]; then
EXTRACT_TAG="${EXTRACT_TAG}-alpine"
fi
@@ -266,25 +268,25 @@ jobs:
# Extract amd64 binary
docker create --name amd64 --platform=linux/amd64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp amd64:/vaultwarden vaultwarden-amd64-${{ matrix.base_image }}
docker cp amd64:/vaultwarden vaultwarden-amd64-${BASE_IMAGE}
docker rm --force amd64
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract arm64 binary
docker create --name arm64 --platform=linux/arm64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp arm64:/vaultwarden vaultwarden-arm64-${{ matrix.base_image }}
docker cp arm64:/vaultwarden vaultwarden-arm64-${BASE_IMAGE}
docker rm --force arm64
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv7 binary
docker create --name armv7 --platform=linux/arm/v7 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv7:/vaultwarden vaultwarden-armv7-${{ matrix.base_image }}
docker cp armv7:/vaultwarden vaultwarden-armv7-${BASE_IMAGE}
docker rm --force armv7
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv6 binary
docker create --name armv6 --platform=linux/arm/v6 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv6:/vaultwarden vaultwarden-armv6-${{ matrix.base_image }}
docker cp armv6:/vaultwarden vaultwarden-armv6-${BASE_IMAGE}
docker rm --force armv6
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
@@ -314,7 +316,7 @@ jobs:
path: vaultwarden-armv6-${{ matrix.base_image }}
- name: "Attest artifacts ${{ matrix.base_image }}"
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-path: vaultwarden-*
# End Upload artifacts to Github Actions

View File

@@ -36,7 +36,7 @@ jobs:
persist-credentials: false
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@6c175e9c4083a92bbca2f9724c8a5e33bc2d97a5 # v0.30.0
uses: aquasecurity/trivy-action@dc5a429b52fcf669ce959baa2c2dd26090d2a6c4 # v0.32.0
env:
TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
@@ -48,6 +48,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@86b04fb0e47484f7282357688f21d5d0e32175fe # v3.27.5
uses: github/codeql-action/upload-sarif@4e828ff8d448a8a6e532957b1811f387a63867e8 # v3.29.4
with:
sarif_file: 'trivy-results.sarif'

28
.github/workflows/zizmor.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: Security Analysis with zizmor
on:
push:
branches: ["main"]
pull_request:
branches: ["**"]
permissions: {}
jobs:
zizmor:
name: Run zizmor
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@f52a838cfabf134edcbaa7c8b3677dde20045018 # v0.1.1
with:
# intentionally not scanning the entire repository,
# since it contains integration tests.
inputs: ./.github/

1009
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -6,7 +6,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.85.0"
rust-version = "1.86.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -32,7 +32,7 @@ enable_mimalloc = ["dep:mimalloc"]
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run.
query_logger = ["dep:diesel_logger"]
s3 = ["opendal/services-s3", "dep:aws-config", "dep:aws-credential-types", "dep:anyhow", "dep:reqsign"]
s3 = ["opendal/services-s3", "dep:aws-config", "dep:aws-credential-types", "dep:aws-smithy-runtime-api", "dep:anyhow", "dep:http", "dep:reqsign"]
# Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support
@@ -73,15 +73,15 @@ dashmap = "6.1.0"
# Async futures
futures = "0.3.31"
tokio = { version = "1.45.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio = { version = "1.47.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio-util = { version = "0.7.15", features = ["compat"]}
# A generic serialization/deserialization framework
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
serde_json = "1.0.141"
# A safe, extensible ORM and Query builder
diesel = { version = "2.2.10", features = ["chrono", "r2d2", "numeric"] }
diesel = { version = "2.2.12", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0"
diesel_logger = { version = "0.4.0", optional = true }
@@ -89,10 +89,10 @@ derive_more = { version = "2.0.1", features = ["from", "into", "as_ref", "deref"
diesel-derive-newtype = "2.1.2"
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.33.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.35.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.9.1"
rand = "0.9.2"
ring = "0.17.14"
subtle = "2.6.1"
@@ -101,7 +101,7 @@ uuid = { version = "1.17.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.41", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.3"
chrono-tz = "0.10.4"
time = "0.3.41"
# Job scheduler
@@ -126,7 +126,7 @@ webauthn-rs = "0.3.2"
url = "2.5.4"
# Email libraries
lettre = { version = "0.11.16", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
lettre = { version = "0.11.18", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "hostname", "tracing", "tokio1-rustls", "ring", "rustls-native-certs"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.9"
@@ -134,7 +134,7 @@ email_address = "0.2.9"
handlebars = { version = "6.3.2", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.18", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
reqwest = { version = "0.12.22", features = ["rustls-tls", "rustls-tls-native-roots", "stream", "json", "deflate", "gzip", "brotli", "zstd", "socks", "cookies", "charset", "http2", "system-proxy"], default-features = false}
hickory-resolver = "0.25.2"
# Favicon extraction libraries
@@ -142,9 +142,10 @@ html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.10.1"
svg-hush = "0.9.5"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.55.1", features = ["async"] }
cached = { version = "0.56.0", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.18.1"
@@ -165,9 +166,9 @@ semver = "1.0.26"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.46", features = ["secure"], default-features = false, optional = true }
mimalloc = { version = "0.1.47", features = ["secure"], default-features = false, optional = true }
which = "7.0.3"
which = "8.0.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.3"
@@ -179,13 +180,15 @@ rpassword = "7.4.0"
grass_compiler = { version = "0.13.4", default-features = false }
# File are accessed through Apache OpenDAL
opendal = { version = "0.53.3", features = ["services-fs"] }
opendal = { version = "0.54.0", features = ["services-fs"], default-features = false }
# For retrieving AWS credentials, including temporary SSO credentials
anyhow = { version = "1.0.98", optional = true }
aws-config = { version = "1.6.3", features = ["behavior-version-latest"], optional = true }
aws-credential-types = { version = "1.2.3", optional = true }
reqsign = { version = "0.16.3", optional = true }
aws-config = { version = "1.8.3", features = ["behavior-version-latest", "rt-tokio", "credentials-process", "sso"], default-features = false, optional = true }
aws-credential-types = { version = "1.2.4", optional = true }
aws-smithy-runtime-api = { version = "1.8.5", optional = true }
http = { version = "1.3.1", optional = true }
reqsign = { version = "0.16.5", optional = true }
# Strip debuginfo from the release builds
# The debug symbols are to provide better panic traces
@@ -276,7 +279,6 @@ macro_use_imports = "deny"
manual_assert = "deny"
manual_instant_elapsed = "deny"
manual_string_new = "deny"
match_on_vec_items = "deny"
match_wildcard_for_single_variants = "deny"
mem_forget = "deny"
needless_continue = "deny"

View File

@@ -59,19 +59,21 @@ A nearly complete implementation of the Bitwarden Client API is provided, includ
## Usage
> [!IMPORTANT]
> Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
>
>This can be configured in [Vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
>
>If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy or Traefik (see examples linked above).
> The web-vault requires the use a secure context for the [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API).
> That means it will only work via `http://localhost:8000` (using the port from the example below) or if you [enable HTTPS](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS).
The recommended way to install and use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
See [which container image to use](https://github.com/dani-garcia/vaultwarden/wiki/Which-container-image-to-use) for an explanation of the provided tags.
There are also [community driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might be lagging behind the latest version or might deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
Alternatively, you can also [build Vaultwarden](https://github.com/dani-garcia/vaultwarden/wiki/Building-binary) yourself.
While Vaultwarden is based upon the [Rocket web framework](https://rocket.rs) which has built-in support for TLS our recommendation would be that you setup a reverse proxy (see [proxy examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
> [!TIP]
>**For more detailed examples on how to install, use and configure Vaultwarden you can check our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).**
The main way to use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
There are also [community driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might be lagging behind the latest version or might deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
### Docker/Podman CLI
Pull the container image and mount a volume from the host for persistent storage.<br>
@@ -83,7 +85,7 @@ docker run --detach --name vaultwarden \
--env DOMAIN="https://vw.domain.tld" \
--volume /vw-data/:/data/ \
--restart unless-stopped \
--publish 80:80 \
--publish 127.0.0.1:8000:80 \
vaultwarden/server:latest
```
@@ -104,7 +106,7 @@ services:
volumes:
- ./vw-data/:/data/
ports:
- 80:80
- 127.0.0.1:8000:80
```
<br>

View File

@@ -1,13 +1,13 @@
---
vault_version: "v2025.5.0"
vault_image_digest: "sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e"
vault_version: "v2025.7.0"
vault_image_digest: "sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e"
# Cross Compile Docker Helper Scripts v1.6.1
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
rust_version: 1.87.0 # Rust version to be used
rust_version: 1.88.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: "3.21" # Alpine version to be used
alpine_version: "3.22" # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch

View File

@@ -19,23 +19,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.7.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.7.0
# [docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
# [docker.io/vaultwarden/web-vault:v2025.5.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e
# [docker.io/vaultwarden/web-vault:v2025.7.0]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e AS vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.87.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.87.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.87.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.87.0 AS build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.88.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.88.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.88.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.88.0 AS build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@@ -127,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.21
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.22
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \

View File

@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.7.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.7.0
# [docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
# [docker.io/vaultwarden/web-vault:v2025.5.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e
# [docker.io/vaultwarden/web-vault:v2025.7.0]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e AS vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bd
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.87.0-slim-bookworm AS build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.88.0-slim-bookworm AS build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT

View File

@@ -10,7 +10,7 @@ proc-macro = true
[dependencies]
quote = "1.0.40"
syn = "2.0.101"
syn = "2.0.104"
[lints]
workspace = true

View File

@@ -1,4 +1,4 @@
[toolchain]
channel = "1.87.0"
channel = "1.88.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"

View File

@@ -613,6 +613,7 @@ use cached::proc_macro::cached;
/// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already
/// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit
/// Any cache will be lost if Vaultwarden is restarted
use std::time::Duration; // Needed for cached
#[cached(time = 600, sync_writes = "default")]
async fn get_release_info(has_http_access: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.

View File

@@ -556,14 +556,45 @@ use super::sends::{update_send_from_data, SendData};
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct KeyData {
account_unlock_data: RotateAccountUnlockData,
account_keys: RotateAccountKeys,
account_data: RotateAccountData,
old_master_key_authentication_hash: String,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RotateAccountUnlockData {
emergency_access_unlock_data: Vec<UpdateEmergencyAccessData>,
master_password_unlock_data: MasterPasswordUnlockData,
organization_account_recovery_unlock_data: Vec<UpdateResetPasswordData>,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct MasterPasswordUnlockData {
kdf_type: i32,
kdf_iterations: i32,
kdf_parallelism: Option<i32>,
kdf_memory: Option<i32>,
email: String,
master_key_authentication_hash: String,
master_key_encrypted_user_key: String,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RotateAccountKeys {
user_key_encrypted_account_private_key: String,
account_public_key: String,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RotateAccountData {
ciphers: Vec<CipherData>,
folders: Vec<UpdateFolderData>,
sends: Vec<SendData>,
emergency_access_keys: Vec<UpdateEmergencyAccessData>,
reset_password_keys: Vec<UpdateResetPasswordData>,
key: String,
master_password_hash: String,
private_key: String,
}
fn validate_keydata(
@@ -573,10 +604,24 @@ fn validate_keydata(
existing_emergency_access: &[EmergencyAccess],
existing_memberships: &[Membership],
existing_sends: &[Send],
user: &User,
) -> EmptyResult {
if user.client_kdf_type != data.account_unlock_data.master_password_unlock_data.kdf_type
|| user.client_kdf_iter != data.account_unlock_data.master_password_unlock_data.kdf_iterations
|| user.client_kdf_memory != data.account_unlock_data.master_password_unlock_data.kdf_memory
|| user.client_kdf_parallelism != data.account_unlock_data.master_password_unlock_data.kdf_parallelism
|| user.email != data.account_unlock_data.master_password_unlock_data.email
{
err!("Changing the kdf variant or email is not supported during key rotation");
}
if user.public_key.as_ref() != Some(&data.account_keys.account_public_key) {
err!("Changing the asymmetric keypair is not possible during key rotation")
}
// Check that we're correctly rotating all the user's ciphers
let existing_cipher_ids = existing_ciphers.iter().map(|c| &c.uuid).collect::<HashSet<&CipherId>>();
let provided_cipher_ids = data
.account_data
.ciphers
.iter()
.filter(|c| c.organization_id.is_none())
@@ -588,7 +633,8 @@ fn validate_keydata(
// Check that we're correctly rotating all the user's folders
let existing_folder_ids = existing_folders.iter().map(|f| &f.uuid).collect::<HashSet<&FolderId>>();
let provided_folder_ids = data.folders.iter().filter_map(|f| f.id.as_ref()).collect::<HashSet<&FolderId>>();
let provided_folder_ids =
data.account_data.folders.iter().filter_map(|f| f.id.as_ref()).collect::<HashSet<&FolderId>>();
if !provided_folder_ids.is_superset(&existing_folder_ids) {
err!("All existing folders must be included in the rotation")
}
@@ -596,8 +642,12 @@ fn validate_keydata(
// Check that we're correctly rotating all the user's emergency access keys
let existing_emergency_access_ids =
existing_emergency_access.iter().map(|ea| &ea.uuid).collect::<HashSet<&EmergencyAccessId>>();
let provided_emergency_access_ids =
data.emergency_access_keys.iter().map(|ea| &ea.id).collect::<HashSet<&EmergencyAccessId>>();
let provided_emergency_access_ids = data
.account_unlock_data
.emergency_access_unlock_data
.iter()
.map(|ea| &ea.id)
.collect::<HashSet<&EmergencyAccessId>>();
if !provided_emergency_access_ids.is_superset(&existing_emergency_access_ids) {
err!("All existing emergency access keys must be included in the rotation")
}
@@ -605,15 +655,19 @@ fn validate_keydata(
// Check that we're correctly rotating all the user's reset password keys
let existing_reset_password_ids =
existing_memberships.iter().map(|m| &m.org_uuid).collect::<HashSet<&OrganizationId>>();
let provided_reset_password_ids =
data.reset_password_keys.iter().map(|rp| &rp.organization_id).collect::<HashSet<&OrganizationId>>();
let provided_reset_password_ids = data
.account_unlock_data
.organization_account_recovery_unlock_data
.iter()
.map(|rp| &rp.organization_id)
.collect::<HashSet<&OrganizationId>>();
if !provided_reset_password_ids.is_superset(&existing_reset_password_ids) {
err!("All existing reset password keys must be included in the rotation")
}
// Check that we're correctly rotating all the user's sends
let existing_send_ids = existing_sends.iter().map(|s| &s.uuid).collect::<HashSet<&SendId>>();
let provided_send_ids = data.sends.iter().filter_map(|s| s.id.as_ref()).collect::<HashSet<&SendId>>();
let provided_send_ids = data.account_data.sends.iter().filter_map(|s| s.id.as_ref()).collect::<HashSet<&SendId>>();
if !provided_send_ids.is_superset(&existing_send_ids) {
err!("All existing sends must be included in the rotation")
}
@@ -621,12 +675,12 @@ fn validate_keydata(
Ok(())
}
#[post("/accounts/key", data = "<data>")]
#[post("/accounts/key-management/rotate-user-account-keys", data = "<data>")]
async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
// TODO: See if we can wrap everything within a SQL Transaction. If something fails it should revert everything.
let data: KeyData = data.into_inner();
if !headers.user.check_valid_password(&data.master_password_hash) {
if !headers.user.check_valid_password(&data.old_master_key_authentication_hash) {
err!("Invalid password")
}
@@ -634,7 +688,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Bitwarden does not process the import if there is one item invalid.
// Since we check for the size of the encrypted note length, we need to do that here to pre-validate it.
// TODO: See if we can optimize the whole cipher adding/importing and prevent duplicate code and checks.
Cipher::validate_cipher_data(&data.ciphers)?;
Cipher::validate_cipher_data(&data.account_data.ciphers)?;
let user_id = &headers.user.uuid;
@@ -655,10 +709,11 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
&existing_emergency_access,
&existing_memberships,
&existing_sends,
&headers.user,
)?;
// Update folder data
for folder_data in data.folders {
for folder_data in data.account_data.folders {
// Skip `null` folder id entries.
// See: https://github.com/bitwarden/clients/issues/8453
if let Some(folder_id) = folder_data.id {
@@ -672,7 +727,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
}
// Update emergency access data
for emergency_access_data in data.emergency_access_keys {
for emergency_access_data in data.account_unlock_data.emergency_access_unlock_data {
let Some(saved_emergency_access) =
existing_emergency_access.iter_mut().find(|ea| ea.uuid == emergency_access_data.id)
else {
@@ -684,7 +739,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
}
// Update reset password data
for reset_password_data in data.reset_password_keys {
for reset_password_data in data.account_unlock_data.organization_account_recovery_unlock_data {
let Some(membership) =
existing_memberships.iter_mut().find(|m| m.org_uuid == reset_password_data.organization_id)
else {
@@ -696,7 +751,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
}
// Update send data
for send_data in data.sends {
for send_data in data.account_data.sends {
let Some(send) = existing_sends.iter_mut().find(|s| &s.uuid == send_data.id.as_ref().unwrap()) else {
err!("Send doesn't exist")
};
@@ -707,7 +762,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Update cipher data
use super::ciphers::update_cipher_from_data;
for cipher_data in data.ciphers {
for cipher_data in data.account_data.ciphers {
if cipher_data.organization_id.is_none() {
let Some(saved_cipher) = existing_ciphers.iter_mut().find(|c| &c.uuid == cipher_data.id.as_ref().unwrap())
else {
@@ -724,9 +779,13 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Update user data
let mut user = headers.user;
user.akey = data.key;
user.private_key = Some(data.private_key);
user.reset_security_stamp();
user.private_key = Some(data.account_keys.user_key_encrypted_account_private_key);
user.set_password(
&data.account_unlock_data.master_password_unlock_data.master_key_authentication_hash,
Some(data.account_unlock_data.master_password_unlock_data.master_key_encrypted_user_key),
true,
None,
);
let save_result = user.save(&mut conn).await;

View File

@@ -1924,11 +1924,21 @@ impl CipherSyncData {
// Generate a HashMap with the collections_uuid as key and the CollectionGroup record
let user_collections_groups: HashMap<CollectionId, CollectionGroup> = if CONFIG.org_groups_enabled() {
CollectionGroup::find_by_user(user_id, conn)
.await
.into_iter()
.map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
.collect()
CollectionGroup::find_by_user(user_id, conn).await.into_iter().fold(
HashMap::new(),
|mut combined_permissions, cg| {
combined_permissions
.entry(cg.collections_uuid.clone())
.and_modify(|existing| {
// Combine permissions: take the most permissive settings.
existing.read_only &= cg.read_only; // false if ANY group allows write
existing.hide_passwords &= cg.hide_passwords; // false if ANY group allows password view
existing.manage |= cg.manage; // true if ANY group allows manage
})
.or_insert(cg);
combined_permissions
},
)
} else {
HashMap::new()
};

View File

@@ -200,15 +200,17 @@ fn get_api_webauthn(_headers: Headers) -> Json<Value> {
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
// Official available feature flags can be found here:
// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
// Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
// Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
// iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
let mut feature_states =
parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
feature_states.insert("duo-redirect".to_string(), true);
feature_states.insert("email-verification".to_string(), true);
feature_states.insert("unauth-ui-refresh".to_string(), true);
feature_states.insert("enable-pm-flight-recorder".to_string(), true);
feature_states.insert("mobile-error-reporting".to_string(), true);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
@@ -216,14 +218,14 @@ fn config() -> Json<Value> {
// We should make sure that we keep this updated when we support the new server features
// Version history:
// - Individual cipher key encryption: 2024.2.0
"version": "2025.4.0",
"version": "2025.6.0",
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden"
},
"settings": {
"disableUserRegistration": !crate::CONFIG.signups_allowed() && crate::CONFIG.signups_domains_whitelist().is_empty(),
"disableUserRegistration": crate::CONFIG.is_signup_disabled()
},
"environment": {
"vault": domain,

View File

@@ -726,15 +726,6 @@ async fn delete_organization_collection(
_delete_organization_collection(&org_id, &col_id, &headers, &mut conn).await
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
struct DeleteCollectionData {
#[allow(dead_code)]
id: String,
#[allow(dead_code)]
org_id: OrganizationId,
}
#[post("/organizations/<org_id>/collections/<col_id>/delete")]
async fn post_organization_collection_delete(
org_id: OrganizationId,
@@ -3334,13 +3325,17 @@ async fn put_reset_password_enrollment(
let reset_request = data.into_inner();
if reset_request.reset_password_key.is_none()
&& OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await
{
let reset_password_key = match reset_request.reset_password_key {
None => None,
Some(ref key) if key.is_empty() => None,
Some(key) => Some(key),
};
if reset_password_key.is_none() && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await {
err!("Reset password can't be withdrawn due to an enterprise policy");
}
if reset_request.reset_password_key.is_some() {
if reset_password_key.is_some() {
PasswordOrOtpData {
master_password_hash: reset_request.master_password_hash,
otp: reset_request.otp,
@@ -3349,7 +3344,7 @@ async fn put_reset_password_enrollment(
.await?;
}
member.reset_password_key = reset_request.reset_password_key;
member.reset_password_key = reset_password_key;
member.save(&mut conn).await?;
let log_id = if member.reset_password_key.is_some() {

View File

@@ -145,15 +145,14 @@ async fn activate_yubikey(data: Json<EnableYubikeyData>, headers: Headers, mut c
// Ensure they are valid OTPs
for yubikey in &yubikeys {
if yubikey.len() == 12 {
// YubiKey ID
if yubikey.is_empty() || yubikey.len() == 12 {
continue;
}
verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?;
}
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect();
let yubikey_ids: Vec<String> = yubikeys.into_iter().filter_map(|x| x.get(..12).map(str::to_owned)).collect();
let yubikey_metadata = YubikeyMetadata {
keys: yubikey_ids,

View File

@@ -14,6 +14,7 @@ use reqwest::{
Client, Response,
};
use rocket::{http::ContentType, response::Redirect, Route};
use svg_hush::{data_url_filter, Filter};
use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer};
@@ -35,11 +36,29 @@ pub fn routes() -> Vec<Route> {
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = HeaderMap::new();
default_headers.insert(header::USER_AGENT, HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(
header::USER_AGENT,
HeaderValue::from_static(
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
),
);
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en-US,en;q=0.9"));
default_headers.insert(header::CACHE_CONTROL, HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, HeaderValue::from_static("no-cache"));
default_headers.insert(header::UPGRADE_INSECURE_REQUESTS, HeaderValue::from_static("1"));
default_headers.insert("Sec-Ch-Ua-Mobile", HeaderValue::from_static("?0"));
default_headers.insert("Sec-Ch-Ua-Platform", HeaderValue::from_static("Linux"));
default_headers.insert(
"Sec-Ch-Ua",
HeaderValue::from_static("\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"138\", \"Google Chrome\";v=\"138\""),
);
default_headers.insert("Sec-Fetch-Site", HeaderValue::from_static("none"));
default_headers.insert("Sec-Fetch-Mode", HeaderValue::from_static("navigate"));
default_headers.insert("Sec-Fetch-User", HeaderValue::from_static("?1"));
default_headers.insert("Sec-Fetch-Dest", HeaderValue::from_static("document"));
// Generate the cookie store
let cookie_store = Arc::new(Jar::default());
@@ -53,6 +72,7 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.default_headers(default_headers.clone())
.http1_title_case_headers()
.build()
.expect("Failed to build client")
});
@@ -318,7 +338,7 @@ struct IconUrlResult {
/// Returns a IconUrlResult which holds a Vector IconList and a string which holds the referer.
/// There will always two items within the iconlist which holds http(s)://domain.tld/favicon.ico.
/// This does not mean that that location does exists, but it is the default location browser use.
/// This does not mean that location exists, but (it) is the default location the browser uses.
///
/// # Argument
/// * `domain` - A string which holds the domain with extension.
@@ -561,6 +581,16 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
if buffer.is_empty() {
err_silent!("Empty response or unable find a valid icon", domain);
} else if icon_type == Some("svg+xml") {
let mut svg_filter = Filter::new();
svg_filter.set_data_url_filter(data_url_filter::allow_standard_images);
let mut sanitized_svg = Vec::new();
if svg_filter.filter(&*buffer, &mut sanitized_svg).is_err() {
icon_type = None;
buffer.clear();
} else {
buffer = sanitized_svg.into();
}
}
Ok((buffer, icon_type))
@@ -581,6 +611,16 @@ async fn save_icon(path: &str, icon: Vec<u8>) {
}
fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
fn check_svg_after_xml_declaration(bytes: &[u8]) -> Option<&'static str> {
// Look for SVG tag within the first 1KB
if let Ok(content) = std::str::from_utf8(&bytes[..bytes.len().min(1024)]) {
if content.contains("<svg") || content.contains("<SVG") {
return Some("svg+xml");
}
}
None
}
match bytes {
[137, 80, 78, 71, ..] => Some("png"),
[0, 0, 1, 0, ..] => Some("x-icon"),
@@ -588,6 +628,8 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
[255, 216, 255, ..] => Some("jpeg"),
[71, 73, 70, 56, ..] => Some("gif"),
[66, 77, ..] => Some("bmp"),
[60, 115, 118, 103, ..] => Some("svg+xml"), // Normal svg
[60, 63, 120, 109, 108, ..] => check_svg_after_xml_declaration(bytes), // An svg starting with <?xml
_ => None,
}
}
@@ -599,6 +641,12 @@ async fn stream_to_bytes_limit(res: Response, max_size: usize) -> Result<Bytes,
let mut buf = BytesMut::new();
let mut size = 0;
while let Some(chunk) = stream.next().await {
// It is possible that UnexpectedEof or other errors occur here.
// Most of the time this is not an issue, and if there is no (more) chunked data, parsing the HTML will not happen anyway.
// Therefore, if a chunk is an Err, just break and continue with the data we have received.
if chunk.is_err() {
break;
}
let chunk = &chunk?;
size += chunk.len();
buf.extend(chunk);
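In other words, a failed chunk aborts the read but keeps whatever was already buffered. A self-contained sketch of that pattern (hypothetical helper name; assumes reqwest with the `stream` feature plus the bytes and futures crates):

    use bytes::{Bytes, BytesMut};
    use futures::StreamExt;

    async fn read_up_to(res: reqwest::Response, max_size: usize) -> Bytes {
        let mut stream = res.bytes_stream();
        let mut buf = BytesMut::new();
        while let Some(chunk) = stream.next().await {
            // A decode/EOF error here usually just means the body ended early;
            // keep what is already buffered instead of failing the icon lookup.
            let Ok(chunk) = chunk else { break };
            buf.extend_from_slice(&chunk);
            if buf.len() >= max_size {
                break;
            }
        }
        buf.freeze()
    }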


@@ -718,7 +718,10 @@ async fn register_verification_email(
) -> ApiResult<RegisterVerificationResponse> {
let data = data.into_inner();
if !CONFIG.is_signup_allowed(&data.email) {
// the registration can only continue if signup is allowed or there exists an invitation
if !(CONFIG.is_signup_allowed(&data.email)
|| (!CONFIG.mail_enabled() && Invitation::find_by_mail(&data.email, &mut conn).await.is_some()))
{
err!("Registration not allowed or user already exists")
}
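A tiny sketch (hypothetical helper, not part of the diff) of the condition that now gates registration:

    // may_register mirrors the check above: signup must be allowed for the
    // address, or mail must be disabled while an invitation already exists.
    fn may_register(signup_allowed: bool, mail_enabled: bool, has_invitation: bool) -> bool {
        signup_allowed || (!mail_enabled && has_invitation)
    }

    assert!(may_register(false, false, true));  // invited user on a mail-less instance
    assert!(!may_register(false, true, true));  // with mail enabled this shortcut does not apply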


@@ -55,8 +55,9 @@ fn not_found() -> ApiResult<Html<String>> {
#[get("/css/vaultwarden.css")]
fn vaultwarden_css() -> Cached<Css<String>> {
let css_options = json!({
"signup_disabled": !CONFIG.signups_allowed() && CONFIG.signups_domains_whitelist().is_empty(),
"signup_disabled": CONFIG.is_signup_disabled(),
"mail_enabled": CONFIG.mail_enabled(),
"mail_2fa_enabled": CONFIG._enable_email_2fa(),
"yubico_enabled": CONFIG._enable_yubico() && CONFIG.yubico_client_id().is_some() && CONFIG.yubico_secret_key().is_some(),
"emergency_access_allowed": CONFIG.emergency_access_allowed(),
"sends_allowed": CONFIG.sends_allowed(),


@@ -856,10 +856,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
}
// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
// Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
// Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
// iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
//
// NOTE: Move deprecated flags to the utils::parse_experimental_client_feature_flags() DEPRECATED_FLAGS const!
const KNOWN_FLAGS: &[&str] = &[
@@ -1188,6 +1188,9 @@ fn opendal_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
#[cfg(s3)]
fn opendal_s3_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
use crate::http_client::aws::AwsReqwestConnector;
use aws_config::{default_provider::credentials::DefaultCredentialsChain, provider_config::ProviderConfig};
// This is a custom AWS credential loader that uses the official AWS Rust
// SDK config crate to load credentials. This ensures maximum compatibility
// with AWS credential configurations. For example, OpenDAL doesn't support
@@ -1200,12 +1203,19 @@ fn opendal_s3_operator_for_path(path: &str) -> Result<opendal::Operator, Error>
use aws_credential_types::provider::ProvideCredentials as _;
use tokio::sync::OnceCell;
static DEFAULT_CREDENTIAL_CHAIN: OnceCell<
aws_config::default_provider::credentials::DefaultCredentialsChain,
> = OnceCell::const_new();
static DEFAULT_CREDENTIAL_CHAIN: OnceCell<DefaultCredentialsChain> = OnceCell::const_new();
let chain = DEFAULT_CREDENTIAL_CHAIN
.get_or_init(|| aws_config::default_provider::credentials::DefaultCredentialsChain::builder().build())
.get_or_init(|| {
let reqwest_client = reqwest::Client::builder().build().unwrap();
let connector = AwsReqwestConnector {
client: reqwest_client,
};
let conf = ProviderConfig::default().with_http_client(connector);
DefaultCredentialsChain::builder().configure(conf).build()
})
.await;
let creds = chain.provide_credentials().await?;
@@ -1344,6 +1354,14 @@ impl Config {
}
}
// The registration link should be hidden if signup is not allowed and whitelist is empty
// unless mail is disabled and invitations are allowed
pub fn is_signup_disabled(&self) -> bool {
!self.signups_allowed()
&& self.signups_domains_whitelist().is_empty()
&& (self.mail_enabled() || !self.invitations_allowed())
}
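A short illustration (not part of the diff; hypothetical standalone helper) of when the signup link is hidden:

    // Mirrors is_signup_disabled() with the four relevant settings as parameters.
    fn signup_link_hidden(signups_allowed: bool, whitelist_empty: bool, mail_enabled: bool, invitations_allowed: bool) -> bool {
        !signups_allowed && whitelist_empty && (mail_enabled || !invitations_allowed)
    }

    // Signups off, empty whitelist, mail disabled, invitations allowed:
    // keep the link visible so invited users can still register.
    assert!(!signup_link_hidden(false, true, false, true));
    // Same, but with mail enabled: hide the link.
    assert!(signup_link_hidden(false, true, true, true));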
/// Tests whether the specified user is allowed to create an organization.
pub fn is_org_creation_allowed(&self, email: &str) -> bool {
let users = self.org_creation_users();


@@ -382,6 +382,11 @@ impl Cipher {
// the "Read Only" or "Hide Passwords" restrictions for the user.
json_object["edit"] = json!(!read_only);
json_object["viewPassword"] = json!(!hide_passwords);
// The new key used by clients since v2025.6.0
json_object["permissions"] = json!({
"delete": !read_only,
"restore": !read_only,
});
}
let key = match self.atype {
@@ -604,22 +609,23 @@ impl Cipher {
let mut rows: Vec<(bool, bool, bool)> = Vec::new();
if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
for collection in collections {
//User permissions
// User permissions
if let Some(cu) = cipher_sync_data.user_collections.get(collection) {
rows.push((cu.read_only, cu.hide_passwords, cu.manage));
}
//Group permissions
if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
// Group permissions
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
rows.push((cg.read_only, cg.hide_passwords, cg.manage));
}
}
}
rows
} else {
let mut access_flags = self.get_user_collections_access_flags(user_uuid, conn).await;
access_flags.append(&mut self.get_group_collections_access_flags(user_uuid, conn).await);
access_flags
let user_permissions = self.get_user_collections_access_flags(user_uuid, conn).await;
if !user_permissions.is_empty() {
user_permissions
} else {
self.get_group_collections_access_flags(user_uuid, conn).await
}
};
if rows.is_empty() {
@@ -628,6 +634,9 @@ impl Cipher {
}
// A cipher can be in multiple collections with inconsistent access flags.
// Also, user permissions overrule group permissions
// and only user permissions are returned by the code above.
//
// For example, a cipher could be in one collection where the user has
// read-only access, but also in another collection where the user has
// read/write access. For a flag to be in effect for a cipher, upstream
@@ -636,13 +645,15 @@ impl Cipher {
// and `hide_passwords` columns. This could ideally be done as part of the
// query, but Diesel doesn't support a min() or bool_and() function on
// booleans and this behavior isn't portable anyway.
//
// The only exception is for the `manage` flag, that needs a boolean OR!
let mut read_only = true;
let mut hide_passwords = true;
let mut manage = false;
for (ro, hp, mn) in rows.iter() {
read_only &= ro;
hide_passwords &= hp;
manage &= mn;
manage |= mn;
}
Some((read_only, hide_passwords, manage))
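A small worked example (sketch, not part of the diff) of how the fold combines rows from two collections:

    // One collection grants read-only access with visible passwords and no
    // manage right; another grants write access, hides passwords, and manage.
    let rows = [(true, false, false), (false, true, true)];
    let (mut read_only, mut hide_passwords, mut manage) = (true, true, false);
    for (ro, hp, mn) in rows.iter() {
        read_only &= ro;      // restrictive flags only stick if every row sets them
        hide_passwords &= hp;
        manage |= mn;         // manage is granted if any row grants it
    }
    assert_eq!((read_only, hide_passwords, manage), (false, false, true));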


@@ -97,13 +97,13 @@ impl Collection {
(
cu.read_only,
cu.hide_passwords,
cu.manage || (is_manager && !cu.read_only && !cu.hide_passwords),
is_manager && (cu.manage || (!cu.read_only && !cu.hide_passwords)),
)
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(&self.uuid) {
(
cg.read_only,
cg.hide_passwords,
cg.manage || (is_manager && !cg.read_only && !cg.hide_passwords),
is_manager && (cg.manage || (!cg.read_only && !cg.hide_passwords)),
)
} else {
(false, false, false)
@@ -114,7 +114,9 @@ impl Collection {
} else {
match Membership::find_confirmed_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(_) if self.is_manageable_by_user(user_uuid, conn).await => (false, false, true),
Some(m) if m.atype == MembershipType::Manager && self.is_manageable_by_user(user_uuid, conn).await => {
(false, false, true)
}
Some(m) => {
let is_manager = m.atype == MembershipType::Manager;
let read_only = !self.is_writable_by_user(user_uuid, conn).await;


@@ -244,3 +244,61 @@ impl Resolve for CustomDnsResolver {
})
}
}
#[cfg(s3)]
pub(crate) mod aws {
use aws_smithy_runtime_api::client::{
http::{HttpClient, HttpConnector, HttpConnectorFuture, HttpConnectorSettings, SharedHttpConnector},
orchestrator::HttpResponse,
result::ConnectorError,
runtime_components::RuntimeComponents,
};
use reqwest::Client;
// Adapter that wraps reqwest to be compatible with the AWS SDK
#[derive(Debug)]
pub(crate) struct AwsReqwestConnector {
pub(crate) client: Client,
}
impl HttpConnector for AwsReqwestConnector {
fn call(&self, request: aws_smithy_runtime_api::client::orchestrator::HttpRequest) -> HttpConnectorFuture {
// Convert the AWS-style request to a reqwest request
let client = self.client.clone();
let future = async move {
let method = reqwest::Method::from_bytes(request.method().as_bytes())
.map_err(|e| ConnectorError::user(Box::new(e)))?;
let mut req_builder = client.request(method, request.uri().to_string());
for (name, value) in request.headers() {
req_builder = req_builder.header(name, value);
}
if let Some(body_bytes) = request.body().bytes() {
req_builder = req_builder.body(body_bytes.to_vec());
}
let response = req_builder.send().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
let status = response.status().into();
let bytes = response.bytes().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
Ok(HttpResponse::new(status, bytes.into()))
};
HttpConnectorFuture::new(Box::pin(future))
}
}
impl HttpClient for AwsReqwestConnector {
fn http_connector(
&self,
_settings: &HttpConnectorSettings,
_components: &RuntimeComponents,
) -> SharedHttpConnector {
SharedHttpConnector::new(AwsReqwestConnector {
client: self.client.clone(),
})
}
}
}
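For context, the adapter above is consumed from the S3 code path shown earlier in config.rs; a condensed sketch of that wiring (same types, abbreviated error handling):

    // Sketch: hand the reqwest-backed connector to the AWS default credentials
    // chain so credential lookups that need HTTP also go through reqwest.
    let connector = AwsReqwestConnector {
        client: reqwest::Client::builder().build()?,
    };
    let conf = aws_config::provider_config::ProviderConfig::default().with_http_client(connector);
    let chain = aws_config::default_provider::credentials::DefaultCredentialsChain::builder()
        .configure(conf)
        .build()
        .await;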


@@ -1,5 +1,5 @@
/*!
* Bootstrap v5.3.6 (https://getbootstrap.com/)
* Bootstrap v5.3.7 (https://getbootstrap.com/)
* Copyright 2011-2025 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
@@ -647,7 +647,7 @@
* Constants
*/
const VERSION = '5.3.6';
const VERSION = '5.3.7';
/**
* Class definition
@@ -4805,7 +4805,6 @@
*
* Shout-out to Angular https://github.com/angular/angular/blob/15.2.8/packages/core/src/sanitization/url_sanitizer.ts#L38
*/
// eslint-disable-next-line unicorn/better-regex
const SAFE_URL_PATTERN = /^(?!javascript:)(?:[a-z0-9+.-]+:|[^&:/?#]*(?:[/?#]|$))/i;
const allowedAttribute = (attribute, allowedAttributeList) => {
const attributeName = attribute.nodeName.toLowerCase();
@@ -5349,6 +5348,7 @@
if (trigger === 'click') {
EventHandler.on(this._element, this.constructor.eventName(EVENT_CLICK$1), this._config.selector, event => {
const context = this._initializeOnDelegatedTarget(event);
context._activeTrigger[TRIGGER_CLICK] = !(context._isShown() && context._activeTrigger[TRIGGER_CLICK]);
context.toggle();
});
} else if (trigger !== TRIGGER_MANUAL) {


@@ -1,6 +1,6 @@
@charset "UTF-8";
/*!
* Bootstrap v5.3.6 (https://getbootstrap.com/)
* Bootstrap v5.3.7 (https://getbootstrap.com/)
* Copyright 2011-2025 The Bootstrap Authors
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/


@@ -4,10 +4,10 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-2.3.1
* https://datatables.net/download/#bs5/dt-2.3.2
*
* Included libraries:
* DataTables 2.3.1
* DataTables 2.3.2
*/
:root {
@@ -17,17 +17,18 @@
--dt-row-stripe: 0, 0, 0;
--dt-row-hover: 0, 0, 0;
--dt-column-ordering: 0, 0, 0;
--dt-header-align-items: center;
--dt-html-background: white;
}
:root.dark {
--dt-html-background: rgb(33, 37, 41);
}
table.dataTable td.dt-control {
table.dataTable tbody td.dt-control {
text-align: center;
cursor: pointer;
}
table.dataTable td.dt-control:before {
table.dataTable tbody td.dt-control:before {
display: inline-block;
box-sizing: border-box;
content: "";
@@ -36,7 +37,7 @@ table.dataTable td.dt-control:before {
border-bottom: 5px solid transparent;
border-right: 0px solid transparent;
}
table.dataTable tr.dt-hasChild td.dt-control:before {
table.dataTable tbody tr.dt-hasChild td.dt-control:before {
border-top: 10px solid rgba(0, 0, 0, 0.5);
border-left: 5px solid transparent;
border-bottom: 0px solid transparent;
@@ -163,7 +164,7 @@ table.dataTable tfoot > tr > td div.dt-column-header,
table.dataTable tfoot > tr > td div.dt-column-footer {
display: flex;
justify-content: space-between;
align-items: center;
align-items: var(--dt-header-align-items);
gap: 4px;
}
table.dataTable thead > tr > th div.dt-column-header span.dt-column-title,
@@ -421,6 +422,10 @@ table.dataTable tbody td.dt-body-nowrap {
white-space: nowrap;
}
:root {
--dt-header-align-items: flex-end;
}
/*! Bootstrap 5 integration for DataTables
*
* ©2020 SpryMedia Ltd, all rights reserved.


@@ -4,13 +4,13 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-2.3.1
* https://datatables.net/download/#bs5/dt-2.3.2
*
* Included libraries:
* DataTables 2.3.1
* DataTables 2.3.2
*/
/*! DataTables 2.3.1
/*! DataTables 2.3.2
* © SpryMedia Ltd - datatables.net/license
*/
@@ -124,7 +124,7 @@
_fnCamelToHungarian( defaults.column, defaults.column, true );
/* Setting up the initialisation object */
_fnCamelToHungarian( defaults, $.extend( oInit, $this.data() ), true );
_fnCamelToHungarian( defaults, $.extend( oInit, _fnEscapeObject($this.data()) ), true );
@@ -513,7 +513,7 @@
*
* @type string
*/
builder: "bs5/dt-2.3.1",
builder: "bs5/dt-2.3.2",
/**
* Buttons. For use with the Buttons extension for DataTables. This is
@@ -554,6 +554,11 @@
*/
errMode: "alert",
/** HTML entity escaping */
escape: {
/** When reading data-* attributes for initialisation options */
attributes: false
},
/**
* Legacy so v1 plug-ins don't throw js errors on load
@@ -4025,7 +4030,7 @@
if ( write ) {
if (unique) {
// Allow column options to be set from HTML attributes
_fnColumnOptions( settings, shifted, jqCell.data() );
_fnColumnOptions( settings, shifted, _fnEscapeObject(jqCell.data()) );
// Get the width for the column. This can be defined from the
// width attribute, style attribute or `columns.width` option
@@ -4271,7 +4276,7 @@
// to the object for the callback.
var empty = {};
DataTable.util.set(ajax.dataSrc)(empty, []);
_fnAjaxDataSrc(oSettings, empty, []);
callback(empty);
}
else {
@@ -5799,9 +5804,11 @@
var run = false;
var columns = column === undefined
? _fnColumnsFromHeader( e.target )
: Array.isArray(column)
? column
: [column];
: typeof column === 'function'
? column()
: Array.isArray(column)
? column
: [column];
if ( columns.length ) {
for ( var i=0, ien=columns.length ; i<ien ; i++ ) {
@@ -6866,6 +6873,19 @@
}
}
/**
* Escape HTML entities in strings, in an object
*/
function _fnEscapeObject(obj) {
if (DataTable.ext.escape.attributes) {
$.each(obj, function (key, val) {
obj[key] = _escapeHtml(val);
})
}
return obj;
}
/**
@@ -10211,7 +10231,7 @@
* @type string
* @default Version number
*/
DataTable.version = "2.3.1";
DataTable.version = "2.3.2";
/**
* Private data store, containing all of the settings objects that are


@@ -20,16 +20,50 @@ a[href$="/settings/sponsored-families"] {
@extend %vw-hide;
}
/* Hide the sso `Email` input field */
.vw-email-sso {
@extend %vw-hide;
}
/* Hide the `Enterprise Single Sign-On` button on the login page */
app-root form.ng-untouched button.\!tw-text-primary-600:nth-child(4) {
{{#if (webver ">=2025.5.1")}}
.vw-sso-login {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(4) {
@extend %vw-hide;
}
{{/if}}
/* Hide the `Log in with passkey` settings */
app-change-password app-webauthn-login-settings {
@extend %vw-hide;
}
/* Hide Log in with passkey on the login page */
app-root form.ng-untouched > div > div > button.\!tw-text-primary-600:nth-child(3) {
{{#if (webver ">=2025.5.1")}}
.vw-passkey-login {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(3) {
@extend %vw-hide;
}
{{/if}}
/* Hide the "or" text followed by the two buttons hidden above */
app-root form.ng-untouched > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
{{#if (webver ">=2025.5.1")}}
.vw-or-text {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
@extend %vw-hide;
}
{{/if}}
/* Hide the `Other` button on the login page */
.vw-other-login {
@extend %vw-hide;
}
@@ -98,7 +132,7 @@ app-root a[routerlink="/signup"] {
{{/if}}
{{/if}}
{{#unless mail_enabled}}
{{#unless mail_2fa_enabled}}
/* Hide `Email` 2FA if mail is not enabled */
.providers-2fa-1 {
@extend %vw-hide;


@@ -61,9 +61,11 @@ impl Fairing for AppHeaders {
// The `Cross-Origin-Resource-Policy` header should not be set on images or on the `icon_external` route.
// Otherwise some clients, like the Bitwarden Desktop, will fail to download the icons
let mut is_image = true;
if !(res.headers().get_one("Content-Type").is_some_and(|v| v.starts_with("image/"))
|| req.route().is_some_and(|v| v.name.as_deref() == Some("icon_external")))
{
is_image = false;
res.set_raw_header("Cross-Origin-Resource-Policy", "same-origin");
}
@@ -71,49 +73,56 @@ impl Fairing for AppHeaders {
// This can cause issues when some MFA requests need to open a popup or page within the clients, like WebAuthn or Duo.
// This is the same behavior as upstream Bitwarden.
if !req_uri_path.ends_with("connector.html") {
// # Frame Ancestors:
// Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
// Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
// Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
// # img/child/frame src:
// Have I Been Pwned to allow those calls to work.
// # Connect src:
// Leaked Passwords check: api.pwnedpasswords.com
// 2FA/MFA Site check: api.2fa.directory
// # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
// app.simplelogin.io, app.addy.io, api.fastmail.com, quack.duckduckgo.com
let csp = format!(
"default-src 'none'; \
font-src 'self'; \
manifest-src 'self'; \
base-uri 'self'; \
form-action 'self'; \
object-src 'self' blob:; \
script-src 'self' 'wasm-unsafe-eval'; \
style-src 'self' 'unsafe-inline'; \
child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-ancestors 'self' \
chrome-extension://nngceckbapebfimnlniiiahkandclblb \
chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
moz-extension://* \
{allowed_iframe_ancestors}; \
img-src 'self' data: \
https://haveibeenpwned.com \
{icon_service_csp}; \
connect-src 'self' \
https://api.pwnedpasswords.com \
https://api.2fa.directory \
https://app.simplelogin.io/api/ \
https://app.addy.io/api/ \
https://api.fastmail.com/ \
https://api.forwardemail.net \
{allowed_connect_src};\
",
icon_service_csp = CONFIG._icon_service_csp(),
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
allowed_connect_src = CONFIG.allowed_connect_src(),
);
let csp = if is_image {
// Prevent scripts, frames, objects, etc., from loading with images, mainly for SVG images, since these could contain JavaScript and other unsafe items.
// Even though we sanitize SVG images before storing and viewing them, it's better to prevent allowing these elements.
String::from("default-src 'none'; img-src 'self' data:; style-src 'unsafe-inline'; script-src 'none'; frame-src 'none'; object-src 'none")
} else {
// # Frame Ancestors:
// Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
// Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
// Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
// # img/child/frame src:
// Have I Been Pwned to allow those calls to work.
// # Connect src:
// Leaked Passwords check: api.pwnedpasswords.com
// 2FA/MFA Site check: api.2fa.directory
// # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
// app.simplelogin.io, app.addy.io, api.fastmail.com, api.forwardemail.net
format!(
"default-src 'none'; \
font-src 'self'; \
manifest-src 'self'; \
base-uri 'self'; \
form-action 'self'; \
object-src 'self' blob:; \
script-src 'self' 'wasm-unsafe-eval'; \
style-src 'self' 'unsafe-inline'; \
child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-ancestors 'self' \
chrome-extension://nngceckbapebfimnlniiiahkandclblb \
chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
moz-extension://* \
{allowed_iframe_ancestors}; \
img-src 'self' data: \
https://haveibeenpwned.com \
{icon_service_csp}; \
connect-src 'self' \
https://api.pwnedpasswords.com \
https://api.2fa.directory \
https://app.simplelogin.io/api/ \
https://app.addy.io/api/ \
https://api.fastmail.com/ \
https://api.forwardemail.net \
{allowed_connect_src};\
",
icon_service_csp = CONFIG._icon_service_csp(),
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
allowed_connect_src = CONFIG.allowed_connect_src(),
)
};
res.set_raw_header("Content-Security-Policy", csp);
res.set_raw_header("X-Frame-Options", "SAMEORIGIN");
} else {