Compare commits

...

29 Commits

Author SHA1 Message Date
Richy
25865efd79 fix: resolve group permission conflicts with multiple groups (#6017)
* fix: resolve group permission conflicts with multiple groups

When a user belonged to multiple groups with different permissions for the
same collection, only the permissions from one group were applied instead
of combining them properly. This caused users to see incorrect access levels
when initially viewing collection items.

The fix combines permissions from all user groups by taking the most
permissive settings:
- read_only: false if ANY group allows write access
- hide_passwords: false if ANY group allows password viewing
- manage: true if ANY group allows management

This ensures users immediately see the correct permissions when opening
collection entries, matching the behavior after editing and saving.
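
A minimal sketch of that combining rule (simplified stand-in types; the real change folds `CollectionGroup` rows keyed by collection id, as shown in the `CipherSyncData` hunk further down this compare):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real CollectionGroup/CollectionId types.
struct CollectionGroup {
    collections_uuid: String,
    read_only: bool,
    hide_passwords: bool,
    manage: bool,
}

// Fold every group row for the same collection into the most permissive set:
// an access-granting flag wins as soon as any one group grants it.
fn combine(groups: Vec<CollectionGroup>) -> HashMap<String, CollectionGroup> {
    groups.into_iter().fold(HashMap::new(), |mut acc, cg| {
        acc.entry(cg.collections_uuid.clone())
            .and_modify(|existing| {
                existing.read_only &= cg.read_only; // false if ANY group allows write
                existing.hide_passwords &= cg.hide_passwords; // false if ANY group allows viewing
                existing.manage |= cg.manage; // true if ANY group allows manage
            })
            .or_insert(cg);
        acc
    })
}

fn main() {
    let combined = combine(vec![
        CollectionGroup { collections_uuid: "c1".into(), read_only: true, hide_passwords: true, manage: false },
        CollectionGroup { collections_uuid: "c1".into(), read_only: false, hide_passwords: true, manage: false },
    ]);
    // The write-allowing group wins: read_only ends up false for c1.
    assert!(!combined["c1"].read_only);
}
```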

* Update src/api/core/ciphers.rs

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>

* fix: format

* fix: restrict collection manage permissions to managers only

Prevent users from being logged out when they have manage permissions by granting the manage flag only to MembershipType::Manager and higher roles.
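
A hedged one-liner of that guard, assuming a simplified role ordering (the real `MembershipType` defines its own comparison):

```rust
// Simplified role ordering; illustrative only.
#[derive(PartialEq, PartialOrd)]
enum MembershipType {
    User,
    Manager,
    Admin,
    Owner,
}

// Only Manager and higher roles may keep the manage flag.
fn effective_manage(requested: bool, role: MembershipType) -> bool {
    requested && role >= MembershipType::Manager
}

fn main() {
    assert!(!effective_manage(true, MembershipType::User));
    assert!(effective_manage(true, MembershipType::Manager));
}
```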

* refactor: cipher permission logic to prioritize user access

Updated permission checks to return user collection permissions if available, otherwise fall back to group permissions. Clarified comments to indicate that user permissions overrule group permissions and corrected the logic for the 'manage' flag to use boolean OR instead of AND.
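
As a rough sketch of the precedence rule (types are illustrative):

```rust
struct Perms {
    read_only: bool,
    hide_passwords: bool,
    manage: bool,
}

// Prefer the user's own collection permissions; fall back to group-derived
// permissions only when the user has none.
fn effective(user: Option<Perms>, group: Option<Perms>) -> Option<Perms> {
    user.or(group)
}

fn main() {
    let group = Perms { read_only: true, hide_passwords: true, manage: false };
    assert!(effective(None, Some(group)).is_some());
}
```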

---------

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2025-07-25 20:58:41 +02:00
Mathijs van Veluw
bcf627930e Adjust issue template to better encourage searching closed and open issues (#6096) 2025-07-25 18:17:55 +02:00
Timshel
ce70cd2cf4 Hide login form custom fields (#6054)
Co-authored-by: Timshel <timshel@480s>
2025-07-14 22:01:20 +02:00
Daniel
2ac589d4b4 Fix digest SHA extraction step (#6059) 2025-07-13 12:20:16 +02:00
Stefan Melmuk
b2e2aef7de fix hash reference in release.yml (#6058) 2025-07-13 10:22:33 +02:00
Daniel García
0755bb19c0 Update release.yml (#6057)
Seems like docker can't use the hash references: https://github.com/dani-garcia/vaultwarden/actions/runs/16242780267/job/45861396226
2025-07-13 01:01:08 +02:00
Mathijs van Veluw
fee0c1c711 Update crates, workflow and issue template (#6056)
- Updated all the crates, which probably fixes #5959
- Updated all the workflows and tested them with zizmor
  Also added zizmor as a workflow itself.
- Updated the issue template to better emphasize searching first.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-13 00:48:56 +02:00
Stefan Melmuk
f58539f0b4 close unmatched left parenthesis in the README (#6046) 2025-07-10 13:52:52 +02:00
Stefan Melmuk
e718afb441 improve the usage section of the README (#6041) 2025-07-09 23:44:20 +02:00
Mathijs van Veluw
55945ad793 Update web-vault and admin resources (#6044)
- Updated web-vault to v2025.7.0
- Updated admin JS and CSS files

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-07-09 23:26:12 +02:00
Stefan Melmuk
4fd22d8e3b fix hiding email as 2fa provider (#6026) 2025-07-09 23:25:11 +02:00
mountdisk
d6a8fb8e48 chore: fix some minor issues in the comments (#5998)
Signed-off-by: mountdisk <mountdisk@icloud.com>
2025-07-09 23:24:29 +02:00
Mathijs van Veluw
3b48e6e903 Fix v2025.6.x clients and newer to delete items (#6004) 2025-07-01 10:33:22 +02:00
Chase Douglas
6b9333b33e Use existing reqwest client for AWS S3 requests (#5917)
This removes a lot of duplicate client dependency bloat for roughly
equivalent functionality.

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2025-06-30 22:57:00 +02:00
Daniel García
a545636ee5 Update flags version and enable manual error reporting (#5994) 2025-06-27 21:39:38 +02:00
Mathijs van Veluw
f125d5f1a1 Misc Updates and favicon fixes (#5993)
- Updated crates
- Switched to rustls instead of native-tls
  Some dependencies were already using rustls by default or without an alternative option.
  By removing native-tls we also have just one way of working here.

Updated favicon fetching which now is able to fetch more icons.
- Use rustls instead of native-tls
  This seems to work better, probably because of TLS sniffing
- Use different user-agent and added several other headers
- Added SVG support. SVG images are sanitized before being stored or presented.
  Also, a special CSP for images is sent to prevent scripts etc. in SVG images from executing.
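
Purely as an illustration (this header value is an assumption, not the exact one Vaultwarden ships), such a locked-down image CSP could look like:

```rust
// Illustrative only: a restrictive CSP for serving sanitized SVG icons.
// `default-src 'none'` blocks scripts and external loads; `sandbox` keeps
// anything that slips through inert.
const ICON_CSP: &str = "default-src 'none'; style-src 'unsafe-inline'; sandbox";

fn main() {
    println!("Content-Security-Policy: {ICON_CSP}");
}
```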

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-27 21:20:36 +02:00
Mathijs van Veluw
ad75ce281e Fix an issue with yubico keys not validating (#5991)
* Fix an issue with yubico keys not validating

When adding or updating Yubico OTP keys there were some issues with the validation.
It looks like the web-vault sends all keys, not only the filled-in ones, which triggered a check on empty keys.
Also, we should only return filled-in keys, not the empty ones.
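
A minimal sketch of that filtering, with a hypothetical helper (the request shape is assumed):

```rust
// Hypothetical helper: keep only filled-in key slots, assuming the client
// posts every slot as an optional string.
fn filled_keys(submitted: Vec<Option<String>>) -> Vec<String> {
    submitted
        .into_iter()
        .flatten() // drop the `None` slots
        .filter(|k| !k.trim().is_empty()) // drop the empty strings the web-vault sends
        .collect()
}

fn main() {
    let keys = filled_keys(vec![Some("ccccccdeadbeef".into()), Some(String::new()), None]);
    assert_eq!(keys.len(), 1);
}
```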

Fixes #5986

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code - take 2

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-26 21:46:56 +02:00
Stefan Melmuk
9059437c35 fix account recovery withdrawal (#5968)
since `web-v2025.4.0` the client sends `""` instead of `null`, so we
also have to check whether the `reset_password_key` is empty or not.
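
A minimal sketch of that normalization (the actual change uses a `match`, visible in the corresponding hunk further down):

```rust
// Sketch: normalize the empty string newer clients send into `None`,
// so the withdrawal check treats both the same way.
fn normalize(reset_password_key: Option<String>) -> Option<String> {
    reset_password_key.filter(|k| !k.is_empty())
}

fn main() {
    assert_eq!(normalize(Some(String::new())), None);
    assert_eq!(normalize(None), None);
}
```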
2025-06-17 18:55:11 +02:00
Stefan Melmuk
c84db0daca allow signup for invited users (#5967)
invited users (e.g. via /admin panel or org invite) should be able to
register if email is disabled.
2025-06-17 11:15:36 +02:00
Mathijs van Veluw
72adc239f5 Update crates and web-vault (#5955)
- Updated crates
- Updated web-vault to v2025.6.0

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-15 01:19:53 +02:00
Nick Grimshaw
34ebeeca76 Minor fixes to copy in .env.template (#5928) 2025-06-15 01:19:08 +02:00
Stefan Melmuk
0469d9ba4c make css for login-page position independent (#5906)
* make css for login-page position independent

starting with v2025.5.1 the login page will have custom classes so the
fields to be disabled can be targeted specifically without risking
side-effects

* hide buttons after cancelling login
2025-06-14 19:31:51 +02:00
Daniel
eaa6ad06ed Update Alpine to version 3.22 (#5938) 2025-06-14 19:30:19 +02:00
Timshel
0d3f283c37 Fix and improvements to policies (#5923) 2025-06-02 21:47:12 +02:00
Mathijs van Veluw
51a1d641c5 Some small admin updates (#5909)
- Some tweaks on the diagnostics layout
- Always show the latest web-vault version, also when running in a container
  Users can override the web-vault folder and forget about it
- Also updated to the latest crates.

Kinda fixes #5908

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-30 16:56:29 +02:00
Chase Douglas
90f7e5ff80 Abstract persistent files through Apache OpenDAL (#5626)
* Abstract file access through Apache OpenDAL

* Add AWS S3 support via OpenDAL for data files

* PR improvements

* Additional PR improvements

* Config setting comments for local/remote data locations
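
A minimal sketch of the resulting access pattern, assuming opendal's consuming-builder API at the version pinned in this compare (v0.53):

```rust
use opendal::{services, Operator};

#[tokio::main]
async fn main() -> opendal::Result<()> {
    // Filesystem backend; with the `s3` feature an S3 builder can be swapped
    // in behind the same Operator interface.
    let op = Operator::new(services::Fs::default().root("/tmp/vw-data"))?.finish();

    op.write("attachments/example.bin", b"hello".to_vec()).await?;
    let contents = op.read("attachments/example.bin").await?;
    assert_eq!(contents.to_vec(), b"hello");
    Ok(())
}
```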
2025-05-29 21:40:58 +02:00
Stefan Melmuk
200999c94e fix css for locked screen (#5905)
by making the selector more specific to the login page,
the logout button on the locked screen should be visible again
2025-05-29 08:08:32 +02:00
Stefan Melmuk
d363e647e9 fix css to hide login with passkey (#5890) 2025-05-27 06:31:48 +02:00
Mathijs van Veluw
53f58b14d5 Fix admin diagnostics crash (#5886)
Better handle semver issues.
Fixes #5882
Fixes #5883
Fixes #5885
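
The corresponding hunk appears further down; distilled, the defensive pattern is:

```rust
// Sketch of the defensive pattern: an unparsable version string yields
// `false` instead of a panic that would crash the diagnostics page.
fn is_pre_release(current: &str, latest: &str) -> bool {
    let Ok(req) = semver::VersionReq::parse(&format!(">{latest}")) else {
        return false;
    };
    semver::Version::parse(current).map(|v| req.matches(&v)).unwrap_or(false)
}

fn main() {
    assert!(is_pre_release("2025.7.1", "2025.7.0"));
    assert!(!is_pre_release("garbage", "2025.7.0"));
}
```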

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-26 23:14:17 +02:00
50 changed files with 2216 additions and 820 deletions

View File

@@ -15,6 +15,14 @@
####################
## Main data folder
## This can be a path to a local folder or a path to an external location
## depending on features enabled at build time. Possible external locations:
##
## - AWS S3 Bucket (via `s3` feature): s3://bucket-name/path/to/folder
##
## When using an external location, make sure to set TMP_FOLDER,
## TEMPLATES_FOLDER, and DATABASE_URL to local paths and/or a remote database
## location.
# DATA_FOLDER=data
## Individual folders, these override %DATA_FOLDER%
@@ -22,10 +30,13 @@
# ICON_CACHE_FOLDER=data/icon_cache
# ATTACHMENTS_FOLDER=data/attachments
# SENDS_FOLDER=data/sends
## Temporary folder used for storing temporary file uploads
## Must be a local path.
# TMP_FOLDER=data/tmp
## Templates data folder, by default uses embedded templates
## Check source code to see the format
## HTML template overrides data folder
## Must be a local path.
# TEMPLATES_FOLDER=data/templates
## Automatically reload the templates for every request, slow, use only for development
# RELOAD_TEMPLATES=false
@@ -39,7 +50,9 @@
#########################
## Database URL
## When using SQLite, this is the path to the DB file, default to %DATA_FOLDER%/db.sqlite3
## When using SQLite, this is the path to the DB file, and it defaults to
## %DATA_FOLDER%/db.sqlite3. If DATA_FOLDER is set to an external location, this
## must be set to a local sqlite3 file path.
# DATABASE_URL=data/db.sqlite3
## When using MySQL, specify an appropriate connection URI.
## Details: https://docs.diesel.rs/2.1.x/diesel/mysql/struct.MysqlConnection.html
@@ -117,7 +130,7 @@
## and are always in terms of UTC time (regardless of your local time zone settings).
##
## The schedule format is a bit different from crontab as crontab does not contain seconds.
## You can test the the format here: https://crontab.guru, but remove the first digit!
## You can test the format here: https://crontab.guru, but remove the first digit!
## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK
## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri"
## "0 30 * * * * "
@@ -260,7 +273,7 @@
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Invitations org admins to invite users, even when signups are disabled
## Allows org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
@@ -328,16 +341,16 @@
## Icon download timeout
## Configure the timeout value when downloading the favicons.
## The default is 10 seconds, but this could be to low on slower network connections
## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10
## Block HTTP domains/IPs by Regex
## Any domains or IPs that match this regex won't be fetched by the internal HTTP client.
## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex withing single quotes!
## NOTE: Always enclose this regex within single quotes!
# HTTP_REQUEST_BLOCK_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Enabling this will cause the internal HTTP client to refuse to connect to any non global IP address.
## Enabling this will cause the internal HTTP client to refuse to connect to any non-global IP address.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# HTTP_REQUEST_BLOCK_NON_GLOBAL_IPS=true

.github/CODEOWNERS vendored
View File

@@ -1,5 +1,6 @@
/.github @dani-garcia @BlackDex
/.github/** @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/ISSUE_TEMPLATE/** @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex
/SECURITY.md @dani-garcia @BlackDex

View File

@@ -8,15 +8,30 @@ body:
value: |
Thanks for taking the time to fill out this bug report!
Please *do not* submit feature requests or ask for help on how to configure Vaultwarden here.
Please **do not** submit feature requests or ask for help on how to configure Vaultwarden here!
The [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions/) has sections for Questions and Ideas.
Our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki/) has topics on how to configure Vaultwarden.
Also, make sure you are running [![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest) of Vaultwarden!
And search for existing open or closed issues or discussions regarding your topic before posting.
Be sure to check and validate the Vaultwarden Admin Diagnostics (`/admin/diagnostics`) page for any errors!
See here [how to enable the admin page](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page).
> [!IMPORTANT]
> ## :bangbang: Search for existing **Closed _AND_ Open** [Issues](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue%20) **_AND_** [Discussions](https://github.com/dani-garcia/vaultwarden/discussions?discussions_q=) regarding your topic before posting! :bangbang:
#
- type: checkboxes
id: checklist
attributes:
label: Prerequisites
description: Please confirm you have completed the following before submitting an issue!
options:
- label: I have searched the existing **Closed _AND_ Open** [Issues](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue%20) **_AND_** [Discussions](https://github.com/dani-garcia/vaultwarden/discussions?discussions_q=)
required: true
- label: I have searched and read the [documentation](https://github.com/dani-garcia/vaultwarden/wiki/)
required: true
#
- id: support-string
type: textarea
@@ -36,7 +51,7 @@ body:
attributes:
label: Vaultwarden Build Version
description: What version of Vaultwarden are you running?
placeholder: ex. v1.31.0 or v1.32.0-3466a804
placeholder: ex. v1.34.0 or v1.34.1-53f58b14
validations:
required: true
#
@@ -67,7 +82,7 @@ body:
attributes:
label: Reverse Proxy
description: Are you using a reverse proxy, if so which and what version?
placeholder: ex. nginx 1.26.2, caddy 2.8.4, traefik 3.1.2, haproxy 3.0
placeholder: ex. nginx 1.29.0, caddy 2.10.0, traefik 3.4.4, haproxy 3.2
validations:
required: true
#
@@ -115,7 +130,7 @@ body:
attributes:
label: Client Version
description: What version(s) of the client(s) are you seeing the problem on?
placeholder: ex. CLI v2024.7.2, Firefox 130 - v2024.7.0
placeholder: ex. CLI v2025.7.0, Firefox 140 - v2025.6.1
#
- id: reproduce
type: textarea

View File

@@ -66,13 +66,15 @@ jobs:
- name: Init Variables
id: toolchain
shell: bash
env:
CHANNEL: ${{ matrix.channel }}
run: |
if [[ "${{ matrix.channel }}" == 'rust-toolchain' ]]; then
if [[ "${CHANNEL}" == 'rust-toolchain' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'channel.*"(\K.*?)(?=")' rust-toolchain.toml)"
elif [[ "${{ matrix.channel }}" == 'msrv' ]]; then
elif [[ "${CHANNEL}" == 'msrv' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'rust-version.*"(\K.*?)(?=")' Cargo.toml)"
else
RUST_TOOLCHAIN="${{ matrix.channel }}"
RUST_TOOLCHAIN="${CHANNEL}"
fi
echo "RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" | tee -a "${GITHUB_OUTPUT}"
# End Determine rust-toolchain version
@@ -80,7 +82,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master @ Apr 29, 2025, 9:22 PM GMT+2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -90,7 +92,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master @ Apr 29, 2025, 9:22 PM GMT+2
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -115,7 +117,7 @@ jobs:
# Enable Rust Caching
- name: Rust Caching
uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.

View File

@@ -4,7 +4,8 @@ permissions: {}
on: [ push, pull_request ]
jobs:
docker-templates:
docker-templates:
name: Validate docker templates
permissions:
contents: read
runs-on: ubuntu-24.04
@@ -20,7 +21,7 @@ jobs:
- name: Run make to rebuild templates
working-directory: docker
run: make
run: make
- name: Check for unstaged changes
working-directory: docker

View File

@@ -14,7 +14,7 @@ jobs:
steps:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:

View File

@@ -47,7 +47,7 @@ jobs:
# Start a local docker registry to extract the compiled binaries to upload as artifacts and attest them
services:
registry:
image: registry:2
image: registry@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 # v3.0.0
ports:
- 5000:5000
env:
@@ -76,7 +76,7 @@ jobs:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
@@ -192,7 +192,7 @@ jobs:
- name: Bake ${{ matrix.base_image }} containers
id: bake_vw
uses: docker/bake-action@4ba453fbc2db7735392b93edf935aaf9b1e8f747 # v6.5.0
uses: docker/bake-action@37816e747588cb137173af99ab33873600c46ea8 # v6.8.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -213,14 +213,15 @@ jobs:
shell: bash
env:
BAKE_METADATA: ${{ steps.bake_vw.outputs.metadata }}
BASE_IMAGE: ${{ matrix.base_image }}
run: |
GET_DIGEST_SHA="$(jq -r '.["${{ matrix.base_image }}-multi"]."containerimage.digest"' <<< "${BAKE_METADATA}")"
GET_DIGEST_SHA="$(jq -r --arg base "$BASE_IMAGE" '.[$base + "-multi"]."containerimage.digest"' <<< "${BAKE_METADATA}")"
echo "DIGEST_SHA=${GET_DIGEST_SHA}" | tee -a "${GITHUB_ENV}"
# Attest container images
- name: Attest - docker.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.DOCKERHUB_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -228,7 +229,7 @@ jobs:
- name: Attest - ghcr.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.GHCR_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -236,7 +237,7 @@ jobs:
- name: Attest - quay.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-name: ${{ vars.QUAY_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -248,6 +249,7 @@ jobs:
shell: bash
env:
REF_TYPE: ${{ github.ref_type }}
BASE_IMAGE: ${{ matrix.base_image }}
run: |
# Check which main tag we are going to build determined by ref_type
if [[ "${REF_TYPE}" == "tag" ]]; then
@@ -257,7 +259,7 @@ jobs:
fi
# Check which base_image was used and append -alpine if needed
if [[ "${{ matrix.base_image }}" == "alpine" ]]; then
if [[ "${BASE_IMAGE}" == "alpine" ]]; then
EXTRACT_TAG="${EXTRACT_TAG}-alpine"
fi
@@ -266,25 +268,25 @@ jobs:
# Extract amd64 binary
docker create --name amd64 --platform=linux/amd64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp amd64:/vaultwarden vaultwarden-amd64-${{ matrix.base_image }}
docker cp amd64:/vaultwarden vaultwarden-amd64-${BASE_IMAGE}
docker rm --force amd64
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract arm64 binary
docker create --name arm64 --platform=linux/arm64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp arm64:/vaultwarden vaultwarden-arm64-${{ matrix.base_image }}
docker cp arm64:/vaultwarden vaultwarden-arm64-${BASE_IMAGE}
docker rm --force arm64
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv7 binary
docker create --name armv7 --platform=linux/arm/v7 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv7:/vaultwarden vaultwarden-armv7-${{ matrix.base_image }}
docker cp armv7:/vaultwarden vaultwarden-armv7-${BASE_IMAGE}
docker rm --force armv7
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv6 binary
docker create --name armv6 --platform=linux/arm/v6 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv6:/vaultwarden vaultwarden-armv6-${{ matrix.base_image }}
docker cp armv6:/vaultwarden vaultwarden-armv6-${BASE_IMAGE}
docker rm --force armv6
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
@@ -314,7 +316,7 @@ jobs:
path: vaultwarden-armv6-${{ matrix.base_image }}
- name: "Attest artifacts ${{ matrix.base_image }}"
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
with:
subject-path: vaultwarden-*
# End Upload artifacts to Github Actions

View File

@@ -36,7 +36,7 @@ jobs:
persist-credentials: false
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@6c175e9c4083a92bbca2f9724c8a5e33bc2d97a5 # v0.30.0
uses: aquasecurity/trivy-action@dc5a429b52fcf669ce959baa2c2dd26090d2a6c4 # v0.32.0
env:
TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
@@ -48,6 +48,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@86b04fb0e47484f7282357688f21d5d0e32175fe # v3.27.5
uses: github/codeql-action/upload-sarif@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
with:
sarif_file: 'trivy-results.sarif'

.github/workflows/zizmor.yml (new file)
View File

@@ -0,0 +1,28 @@
name: Security Analysis with zizmor
on:
push:
branches: ["main"]
pull_request:
branches: ["**"]
permissions: {}
jobs:
zizmor:
name: Run zizmor
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@f52a838cfabf134edcbaa7c8b3677dde20045018 # v0.1.1
with:
# intentionally not scanning the entire repository,
# since it contains integration tests.
inputs: ./.github/

Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -6,7 +6,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.85.0"
rust-version = "1.86.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -32,6 +32,7 @@ enable_mimalloc = ["dep:mimalloc"]
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run.
query_logger = ["dep:diesel_logger"]
s3 = ["opendal/services-s3", "dep:aws-config", "dep:aws-credential-types", "dep:aws-smithy-runtime-api", "dep:anyhow", "dep:http", "dep:reqsign"]
# Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support
@@ -72,14 +73,15 @@ dashmap = "6.1.0"
# Async futures
futures = "0.3.31"
tokio = { version = "1.45.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio = { version = "1.46.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio-util = { version = "0.7.15", features = ["compat"]}
# A generic serialization/deserialization framework
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
# A safe, extensible ORM and Query builder
diesel = { version = "2.2.10", features = ["chrono", "r2d2", "numeric"] }
diesel = { version = "2.2.12", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0"
diesel_logger = { version = "0.4.0", optional = true }
@@ -87,7 +89,7 @@ derive_more = { version = "2.0.1", features = ["from", "into", "as_ref", "deref"
diesel-derive-newtype = "2.1.2"
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.33.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.35.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.9.1"
@@ -99,7 +101,7 @@ uuid = { version = "1.17.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.41", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.3"
chrono-tz = "0.10.4"
time = "0.3.41"
# Job scheduler
@@ -124,7 +126,7 @@ webauthn-rs = "0.3.2"
url = "2.5.4"
# Email libraries
lettre = { version = "0.11.16", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
lettre = { version = "0.11.17", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "hostname", "tracing", "tokio1-rustls", "ring", "rustls-native-certs"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.9"
@@ -132,7 +134,7 @@ email_address = "0.2.9"
handlebars = { version = "6.3.2", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.15", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
reqwest = { version = "0.12.22", features = ["rustls-tls", "rustls-tls-native-roots", "stream", "json", "deflate", "gzip", "brotli", "zstd", "socks", "cookies", "charset", "http2", "system-proxy"], default-features = false}
hickory-resolver = "0.25.2"
# Favicon extraction libraries
@@ -140,6 +142,7 @@ html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.10.1"
svg-hush = "0.9.5"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.55.1", features = ["async"] }
@@ -149,7 +152,7 @@ cookie = "0.18.1"
cookie_store = "0.21.1"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.72"
openssl = "0.10.73"
# CLI argument parsing
pico-args = "0.5.0"
@@ -163,9 +166,9 @@ semver = "1.0.26"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.46", features = ["secure"], default-features = false, optional = true }
mimalloc = { version = "0.1.47", features = ["secure"], default-features = false, optional = true }
which = "7.0.3"
which = "8.0.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.3"
@@ -176,6 +179,17 @@ rpassword = "7.4.0"
# Loading a dynamic CSS Stylesheet
grass_compiler = { version = "0.13.4", default-features = false }
# File are accessed through Apache OpenDAL
opendal = { version = "0.53.3", features = ["services-fs"], default-features = false }
# For retrieving AWS credentials, including temporary SSO credentials
anyhow = { version = "1.0.98", optional = true }
aws-config = { version = "1.8.1", features = ["behavior-version-latest", "rt-tokio", "credentials-process", "sso"], default-features = false, optional = true }
aws-credential-types = { version = "1.2.3", optional = true }
aws-smithy-runtime-api = { version = "1.8.3", optional = true }
http = { version = "1.3.1", optional = true }
reqsign = { version = "0.16.5", optional = true }
# Strip debuginfo from the release builds
# The debug symbols are to provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations
@@ -265,7 +279,6 @@ macro_use_imports = "deny"
manual_assert = "deny"
manual_instant_elapsed = "deny"
manual_string_new = "deny"
match_on_vec_items = "deny"
match_wildcard_for_single_variants = "deny"
mem_forget = "deny"
needless_continue = "deny"

View File

@@ -59,19 +59,21 @@ A nearly complete implementation of the Bitwarden Client API is provided, includ
## Usage
> [!IMPORTANT]
> Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
>
>This can be configured in [Vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
>
>If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy or Traefik (see examples linked above).
> The web-vault requires the use a secure context for the [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API).
> That means it will only work via `http://localhost:8000` (using the port from the example below) or if you [enable HTTPS](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS).
The recommended way to install and use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
See [which container image to use](https://github.com/dani-garcia/vaultwarden/wiki/Which-container-image-to-use) for an explanation of the provided tags.
There are also [community driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might be lagging behind the latest version or might deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
Alternatively, you can also [build Vaultwarden](https://github.com/dani-garcia/vaultwarden/wiki/Building-binary) yourself.
While Vaultwarden is based upon the [Rocket web framework](https://rocket.rs), which has built-in support for TLS, our recommendation would be that you set up a reverse proxy (see [proxy examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
> [!TIP]
>**For more detailed examples on how to install, use and configure Vaultwarden you can check our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).**
The main way to use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
There are also [community driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might be lagging behind the latest version or might deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
### Docker/Podman CLI
Pull the container image and mount a volume from the host for persistent storage.<br>
@@ -83,7 +85,7 @@ docker run --detach --name vaultwarden \
--env DOMAIN="https://vw.domain.tld" \
--volume /vw-data/:/data/ \
--restart unless-stopped \
--publish 80:80 \
--publish 127.0.0.1:8000:80 \
vaultwarden/server:latest
```
@@ -104,7 +106,7 @@ services:
volumes:
- ./vw-data/:/data/
ports:
- 80:80
- 127.0.0.1:8000:80
```
<br>

View File

@@ -11,6 +11,8 @@ fn main() {
println!("cargo:rustc-cfg=postgresql");
#[cfg(feature = "query_logger")]
println!("cargo:rustc-cfg=query_logger");
#[cfg(feature = "s3")]
println!("cargo:rustc-cfg=s3");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!(
@@ -23,6 +25,7 @@ fn main() {
println!("cargo::rustc-check-cfg=cfg(mysql)");
println!("cargo::rustc-check-cfg=cfg(postgresql)");
println!("cargo::rustc-check-cfg=cfg(query_logger)");
println!("cargo::rustc-check-cfg=cfg(s3)");
// Rerun when these paths are changed.
// Someone could have checked-out a tag or specific commit, but no other files changed.

View File

@@ -1,13 +1,13 @@
---
vault_version: "v2025.5.0"
vault_image_digest: "sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e"
vault_version: "v2025.7.0"
vault_image_digest: "sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e"
# Cross Compile Docker Helper Scripts v1.6.1
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
rust_version: 1.87.0 # Rust version to be used
rust_version: 1.88.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: "3.21" # Alpine version to be used
alpine_version: "3.22" # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch

View File

@@ -19,23 +19,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.7.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.7.0
# [docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
# [docker.io/vaultwarden/web-vault:v2025.5.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e
# [docker.io/vaultwarden/web-vault:v2025.7.0]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e AS vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms than linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.87.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.87.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.87.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.87.0 AS build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.88.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.88.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.88.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.88.0 AS build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@@ -127,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.21
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.22
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \

View File

@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.7.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.7.0
# [docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
# [docker.io/vaultwarden/web-vault:v2025.5.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e
# [docker.io/vaultwarden/web-vault:v2025.7.0]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:f6ac819a2cd9e226f2cd2ec26196ede94a41e672e9672a11b5f307a19278b15e AS vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bd
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.87.0-slim-bookworm AS build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.88.0-slim-bookworm AS build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT

View File

@@ -10,7 +10,7 @@ proc-macro = true
[dependencies]
quote = "1.0.40"
syn = "2.0.101"
syn = "2.0.104"
[lints]
workspace = true

View File

@@ -1,4 +1,4 @@
[toolchain]
channel = "1.87.0"
channel = "1.88.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"

View File

@@ -614,7 +614,7 @@ use cached::proc_macro::cached;
/// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit
/// Any cache will be lost if Vaultwarden is restarted
#[cached(time = 600, sync_writes = "default")]
async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) {
async fn get_release_info(has_http_access: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access {
(
@@ -633,17 +633,11 @@ async fn get_release_info(has_http_access: bool, running_within_container: bool)
},
// Do not fetch the web-vault version when running within a container
// The web-vault version is embedded within the container itself, and should not be updated manually
if running_within_container {
"-".to_string()
} else {
match get_json_api::<GitRelease>(
"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
)
match get_json_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest")
.await
{
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(),
}
{
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(),
},
)
} else {
@@ -689,8 +683,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
_ => "Unable to resolve domain name.".to_string(),
};
let (latest_release, latest_commit, latest_web_build) =
get_release_info(has_http_access, running_within_container).await;
let (latest_release, latest_commit, latest_web_build) = get_release_info(has_http_access).await;
let ip_header_name = &ip_header.0.unwrap_or_default();
@@ -698,10 +691,14 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
let web_vault_version = get_web_vault_version();
// Check if the running version is newer than the latest stable released version
let web_ver_match = semver::VersionReq::parse(&format!(">{latest_web_build}")).unwrap();
let web_vault_pre_release = web_ver_match.matches(
&semver::Version::parse(&web_vault_version).unwrap_or_else(|_| semver::Version::parse("2025.1.1").unwrap()),
);
let web_vault_pre_release = if let Ok(web_ver_match) = semver::VersionReq::parse(&format!(">{latest_web_build}")) {
web_ver_match.matches(
&semver::Version::parse(&web_vault_version).unwrap_or_else(|_| semver::Version::parse("2025.1.1").unwrap()),
)
} else {
error!("Unable to parse latest_web_build: '{latest_web_build}'");
false
};
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
@@ -749,17 +746,17 @@ fn get_diagnostics_http(code: u16, _token: AdminToken) -> EmptyResult {
}
#[post("/config", format = "application/json", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
async fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner();
if let Err(e) = CONFIG.update_config(data, true) {
if let Err(e) = CONFIG.update_config(data, true).await {
err!(format!("Unable to save config: {e:?}"))
}
Ok(())
}
#[post("/config/delete", format = "application/json")]
fn delete_config(_token: AdminToken) -> EmptyResult {
if let Err(e) = CONFIG.delete_user_config() {
async fn delete_config(_token: AdminToken) -> EmptyResult {
if let Err(e) = CONFIG.delete_user_config().await {
err!(format!("Unable to delete config: {e:?}"))
}
Ok(())

View File

@@ -8,8 +8,8 @@ use serde_json::Value;
use crate::{
api::{
core::{log_user_event, two_factor::email},
register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult, Notify,
PasswordOrOtpData, UpdateType,
master_password_policy, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
Notify, PasswordOrOtpData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto,
@@ -1068,7 +1068,7 @@ struct SecretVerificationRequest {
}
#[post("/accounts/verify-password", data = "<data>")]
fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers) -> EmptyResult {
async fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
let data: SecretVerificationRequest = data.into_inner();
let user = headers.user;
@@ -1076,7 +1076,7 @@ fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers) -> E
err!("Invalid password")
}
Ok(())
Ok(Json(master_password_policy(&user, &conn).await))
}
async fn _api_key(data: Json<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult {

View File

@@ -11,10 +11,11 @@ use rocket::{
use serde_json::Value;
use crate::auth::ClientVersion;
use crate::util::NumberOrString;
use crate::util::{save_temp_file, NumberOrString};
use crate::{
api::{self, core::log_event, EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType},
auth::Headers,
config::PathType,
crypto,
db::{models::*, DbConn, DbPool},
CONFIG,
@@ -105,12 +106,7 @@ struct SyncData {
}
#[get("/sync?<data..>")]
async fn sync(
data: SyncData,
headers: Headers,
client_version: Option<ClientVersion>,
mut conn: DbConn,
) -> Json<Value> {
async fn sync(data: SyncData, headers: Headers, client_version: Option<ClientVersion>, mut conn: DbConn) -> JsonResult {
let user_json = headers.user.to_json(&mut conn).await;
// Get all ciphers which are visible by the user
@@ -134,7 +130,7 @@ async fn sync(
for c in ciphers {
ciphers_json.push(
c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn)
.await,
.await?,
);
}
@@ -159,7 +155,7 @@ async fn sync(
api::core::_get_eq_domains(headers, true).into_inner()
};
Json(json!({
Ok(Json(json!({
"profile": user_json,
"folders": folders_json,
"collections": collections_json,
@@ -168,11 +164,11 @@ async fn sync(
"domains": domains_json,
"sends": sends_json,
"object": "sync"
}))
})))
}
#[get("/ciphers")]
async fn get_ciphers(headers: Headers, mut conn: DbConn) -> Json<Value> {
async fn get_ciphers(headers: Headers, mut conn: DbConn) -> JsonResult {
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &mut conn).await;
let cipher_sync_data = CipherSyncData::new(&headers.user.uuid, CipherSyncType::User, &mut conn).await;
@@ -180,15 +176,15 @@ async fn get_ciphers(headers: Headers, mut conn: DbConn) -> Json<Value> {
for c in ciphers {
ciphers_json.push(
c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn)
.await,
.await?,
);
}
Json(json!({
Ok(Json(json!({
"data": ciphers_json,
"object": "list",
"continuationToken": null
}))
})))
}
#[get("/ciphers/<cipher_id>")]
@@ -201,7 +197,7 @@ async fn get_cipher(cipher_id: CipherId, headers: Headers, mut conn: DbConn) ->
err!("Cipher is not owned by user")
}
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
#[get("/ciphers/<cipher_id>/admin")]
@@ -339,7 +335,7 @@ async fn post_ciphers(data: Json<CipherData>, headers: Headers, mut conn: DbConn
let mut cipher = Cipher::new(data.r#type, data.name.clone());
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherCreate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
/// Enforces the personal ownership policy on user-owned ciphers, if applicable.
@@ -676,7 +672,7 @@ async fn put_cipher(
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherUpdate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
#[post("/ciphers/<cipher_id>/partial", data = "<data>")]
@@ -714,7 +710,7 @@ async fn put_cipher_partial(
// Update favorite
cipher.set_favorite(Some(data.favorite), &headers.user.uuid, &mut conn).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
#[derive(Deserialize)]
@@ -825,7 +821,7 @@ async fn post_collections_update(
)
.await;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
#[put("/ciphers/<cipher_id>/collections-admin", data = "<data>")]
@@ -1030,7 +1026,7 @@ async fn share_cipher_by_uuid(
update_cipher_from_data(&mut cipher, data.cipher, headers, Some(shared_to_collections), conn, nt, ut).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
}
/// v2 API for downloading an attachment. This just redirects the client to
@@ -1055,7 +1051,7 @@ async fn get_attachment(
}
match Attachment::find_by_id(&attachment_id, &mut conn).await {
Some(attachment) if cipher_id == attachment.cipher_uuid => Ok(Json(attachment.to_json(&headers.host))),
Some(attachment) if cipher_id == attachment.cipher_uuid => Ok(Json(attachment.to_json(&headers.host).await?)),
Some(_) => err!("Attachment doesn't belong to cipher"),
None => err!("Attachment doesn't exist"),
}
@@ -1116,7 +1112,7 @@ async fn post_attachment_v2(
"attachmentId": attachment_id,
"url": url,
"fileUploadType": FileUploadType::Direct as i32,
response_key: cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await,
response_key: cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?,
})))
}
@@ -1142,7 +1138,7 @@ async fn save_attachment(
mut conn: DbConn,
nt: Notify<'_>,
) -> Result<(Cipher, DbConn), crate::error::Error> {
let mut data = data.into_inner();
let data = data.into_inner();
let Some(size) = data.data.len().to_i64() else {
err!("Attachment data size overflow");
@@ -1269,13 +1265,7 @@ async fn save_attachment(
attachment.save(&mut conn).await.expect("Error saving attachment");
}
let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_id.as_ref());
let file_path = folder_path.join(file_id.as_ref());
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
save_temp_file(PathType::Attachments, &format!("{cipher_id}/{file_id}"), data.data, true).await?;
nt.send_cipher_update(
UpdateType::SyncCipherUpdate,
@@ -1342,7 +1332,7 @@ async fn post_attachment(
let (cipher, mut conn) = save_attachment(attachment, cipher_id, data, &headers, conn, nt).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
}
#[post("/ciphers/<cipher_id>/attachment-admin", format = "multipart/form-data", data = "<data>")]
@@ -1786,7 +1776,7 @@ async fn _restore_cipher_by_uuid(
.await;
}
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await))
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
}
async fn _restore_multiple_ciphers(
@@ -1859,7 +1849,7 @@ async fn _delete_cipher_attachment_by_id(
)
.await;
}
let cipher_json = cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await;
let cipher_json = cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?;
Ok(Json(json!({"cipher":cipher_json})))
}
@@ -1934,11 +1924,21 @@ impl CipherSyncData {
// Generate a HashMap with the collections_uuid as key and the CollectionGroup record
let user_collections_groups: HashMap<CollectionId, CollectionGroup> = if CONFIG.org_groups_enabled() {
CollectionGroup::find_by_user(user_id, conn)
.await
.into_iter()
.map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
.collect()
CollectionGroup::find_by_user(user_id, conn).await.into_iter().fold(
HashMap::new(),
|mut combined_permissions, cg| {
combined_permissions
.entry(cg.collections_uuid.clone())
.and_modify(|existing| {
// Combine permissions: take the most permissive settings.
existing.read_only &= cg.read_only; // false if ANY group allows write
existing.hide_passwords &= cg.hide_passwords; // false if ANY group allows password view
existing.manage |= cg.manage; // true if ANY group allows manage
})
.or_insert(cg);
combined_permissions
},
)
} else {
HashMap::new()
};

View File

@@ -582,7 +582,7 @@ async fn view_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut
CipherSyncType::User,
&mut conn,
)
.await,
.await?,
);
}

View File

@@ -200,15 +200,17 @@ fn get_api_webauthn(_headers: Headers) -> Json<Value> {
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
// Official available feature flags can be found here:
// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
// Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
// Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
// iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
let mut feature_states =
parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
feature_states.insert("duo-redirect".to_string(), true);
feature_states.insert("email-verification".to_string(), true);
feature_states.insert("unauth-ui-refresh".to_string(), true);
feature_states.insert("enable-pm-flight-recorder".to_string(), true);
feature_states.insert("mobile-error-reporting".to_string(), true);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
@@ -216,7 +218,7 @@ fn config() -> Json<Value> {
// We should make sure that we keep this updated when we support the new server features
// Version history:
// - Individual cipher key encryption: 2024.2.0
"version": "2025.4.0",
"version": "2025.6.0",
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",

View File

@@ -917,21 +917,26 @@ async fn get_org_details(data: OrgIdData, headers: OrgMemberHeaders, mut conn: D
}
Ok(Json(json!({
"data": _get_org_details(&data.organization_id, &headers.host, &headers.user.uuid, &mut conn).await,
"data": _get_org_details(&data.organization_id, &headers.host, &headers.user.uuid, &mut conn).await?,
"object": "list",
"continuationToken": null,
})))
}
async fn _get_org_details(org_id: &OrganizationId, host: &str, user_id: &UserId, conn: &mut DbConn) -> Value {
async fn _get_org_details(
org_id: &OrganizationId,
host: &str,
user_id: &UserId,
conn: &mut DbConn,
) -> Result<Value, crate::Error> {
let ciphers = Cipher::find_by_org(org_id, conn).await;
let cipher_sync_data = CipherSyncData::new(user_id, CipherSyncType::Organization, conn).await;
let mut ciphers_json = Vec::with_capacity(ciphers.len());
for c in ciphers {
ciphers_json.push(c.to_json(host, user_id, Some(&cipher_sync_data), CipherSyncType::Organization, conn).await);
ciphers_json.push(c.to_json(host, user_id, Some(&cipher_sync_data), CipherSyncType::Organization, conn).await?);
}
json!(ciphers_json)
Ok(json!(ciphers_json))
}
#[derive(FromForm)]
@@ -3329,13 +3334,17 @@ async fn put_reset_password_enrollment(
let reset_request = data.into_inner();
if reset_request.reset_password_key.is_none()
&& OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await
{
let reset_password_key = match reset_request.reset_password_key {
None => None,
Some(ref key) if key.is_empty() => None,
Some(key) => Some(key),
};
if reset_password_key.is_none() && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await {
err!("Reset password can't be withdrawn due to an enterprise policy");
}
if reset_request.reset_password_key.is_some() {
if reset_password_key.is_some() {
PasswordOrOtpData {
master_password_hash: reset_request.master_password_hash,
otp: reset_request.otp,
@@ -3344,7 +3353,7 @@ async fn put_reset_password_enrollment(
.await?;
}
member.reset_password_key = reset_request.reset_password_key;
member.reset_password_key = reset_password_key;
member.save(&mut conn).await?;
let log_id = if member.reset_password_key.is_some() {
@@ -3372,7 +3381,7 @@ async fn get_org_export(org_id: OrganizationId, headers: AdminHeaders, mut conn:
Ok(Json(json!({
"collections": convert_json_key_lcase_first(_get_org_collections(&org_id, &mut conn).await),
"ciphers": convert_json_key_lcase_first(_get_org_details(&org_id, &headers.host, &headers.user.uuid, &mut conn).await),
"ciphers": convert_json_key_lcase_first(_get_org_details(&org_id, &headers.host, &headers.user.uuid, &mut conn).await?),
})))
}

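The change above is a recurring pattern in this diff: once to_json becomes fallible, an iterator collect() gives way to an explicit loop so `?` can propagate the first error. A self-contained sketch under stand-in types (Item and Error are illustrative, not the real ones):

use serde_json::{json, Value};

struct Item(u32);
#[derive(Debug)]
struct Error;

impl Item {
    async fn to_json(&self) -> Result<Value, Error> {
        Ok(json!({ "id": self.0 }))
    }
}

async fn items_to_json(items: Vec<Item>) -> Result<Value, Error> {
    let mut out = Vec::with_capacity(items.len());
    for item in items {
        // `?` propagates the first failure instead of panicking mid-collect
        out.push(item.to_json().await?);
    }
    Ok(Value::Array(out))
}
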
View File

@@ -1,4 +1,5 @@
use std::path::Path;
use std::time::Duration;
use chrono::{DateTime, TimeDelta, Utc};
use num_traits::ToPrimitive;
@@ -12,8 +13,9 @@ use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, Notify, UpdateType},
auth::{ClientIp, Headers, Host},
config::PathType,
db::{models::*, DbConn, DbPool},
util::NumberOrString,
util::{save_temp_file, NumberOrString},
CONFIG,
};
@@ -228,7 +230,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
let UploadData {
model,
mut data,
data,
} = data.into_inner();
let model = model.into_inner();
@@ -268,13 +270,8 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
}
let file_id = crate::crypto::generate_send_file_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.persist_to(&file_path).await {
data.move_copy_to(file_path).await?
}
save_temp_file(PathType::Sends, &format!("{}/{file_id}", send.uuid), data, true).await?;
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
@@ -381,7 +378,7 @@ async fn post_send_file_v2_data(
) -> EmptyResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut data = data.into_inner();
let data = data.into_inner();
let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.", "Invalid send uuid or does not belong to user.")
@@ -424,19 +421,9 @@ async fn post_send_file_v2_data(
err!("Send file size does not match.", format!("Expected a file size of {} got {size}", send_data.size));
}
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_id);
let file_path = folder_path.join(file_id);
let file_path = format!("{send_id}/{file_id}");
// Check if the file already exists, if that is the case do not overwrite it
if tokio::fs::metadata(&file_path).await.is_ok() {
err!("Send file has already been uploaded.", format!("File {file_path:?} already exists"))
}
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
save_temp_file(PathType::Sends, &file_path, data.data, false).await?;
nt.send_send_update(
UpdateType::SyncSendCreate,
@@ -569,15 +556,26 @@ async fn post_access_file(
)
.await;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(Json(json!({
"object": "send-fileDownload",
"id": file_id,
"url": format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host)
"url": download_url(&host, &send_id, &file_id).await?,
})))
}
async fn download_url(host: &Host, send_id: &SendId, file_id: &SendFileId) -> Result<String, crate::Error> {
let operator = CONFIG.opendal_operator_for_path_type(PathType::Sends)?;
if operator.info().scheme() == opendal::Scheme::Fs {
let token_claims = crate::auth::generate_send_claims(send_id, file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host))
} else {
Ok(operator.presign_read(&format!("{send_id}/{file_id}"), Duration::from_secs(5 * 60)).await?.uri().to_string())
}
}
#[get("/sends/<send_id>/<file_id>?<t>")]
async fn download_send(send_id: SendId, file_id: SendFileId, t: &str) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(t) {

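download_url above now branches on the operator scheme: local Fs storage keeps the JWT-protected route, while any other backend gets a short-lived presigned URL. A minimal sketch of the presign half, assuming a backend that supports presigning (Fs does not, which is exactly why the fallback exists):

use std::time::Duration;

async fn presigned(op: &opendal::Operator, key: &str) -> opendal::Result<String> {
    // Five minutes matches the expiry chosen in the diff above.
    let req = op.presign_read(key, Duration::from_secs(5 * 60)).await?;
    Ok(req.uri().to_string())
}
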
View File

@@ -261,7 +261,7 @@ pub(crate) async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiRes
}
.map_res("Can't fetch Duo Keys")?;
Ok((data.ik, data.sk, CONFIG.get_duo_akey(), data.host))
Ok((data.ik, data.sk, CONFIG.get_duo_akey().await, data.host))
}
pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult<(String, String)> {

View File

@@ -145,15 +145,14 @@ async fn activate_yubikey(data: Json<EnableYubikeyData>, headers: Headers, mut c
// Ensure they are valid OTPs
for yubikey in &yubikeys {
if yubikey.len() == 12 {
// YubiKey ID
if yubikey.is_empty() || yubikey.len() == 12 {
continue;
}
verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?;
}
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect();
let yubikey_ids: Vec<String> = yubikeys.into_iter().filter_map(|x| x.get(..12).map(str::to_owned)).collect();
let yubikey_metadata = YubikeyMetadata {
keys: yubikey_ids,

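The YubiKey change above swaps an indexing slice for a checked one: `x[..12]` panics when an entry is shorter than 12 bytes, while `x.get(..12)` yields an Option that filter_map can drop. A quick standalone check (the OTP prefix is made up):

fn main() {
    let keys = vec!["cccccctugdef".to_string(), "short".to_string()];

    // Old: `&x[..12]` would panic on "short". New: it is silently skipped.
    let ids: Vec<String> =
        keys.into_iter().filter_map(|x| x.get(..12).map(str::to_owned)).collect();

    assert_eq!(ids, vec!["cccccctugdef".to_string()]);
}
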
View File

@@ -14,14 +14,12 @@ use reqwest::{
Client, Response,
};
use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
};
use svg_hush::{data_url_filter, Filter};
use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer};
use crate::{
config::PathType,
error::Error,
http_client::{get_reqwest_client_builder, should_block_address, CustomHttpClientError},
util::Cached,
@@ -38,11 +36,29 @@ pub fn routes() -> Vec<Route> {
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = HeaderMap::new();
default_headers.insert(header::USER_AGENT, HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(
header::USER_AGENT,
HeaderValue::from_static(
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
),
);
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en-US,en;q=0.9"));
default_headers.insert(header::CACHE_CONTROL, HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, HeaderValue::from_static("no-cache"));
default_headers.insert(header::UPGRADE_INSECURE_REQUESTS, HeaderValue::from_static("1"));
default_headers.insert("Sec-Ch-Ua-Mobile", HeaderValue::from_static("?0"));
default_headers.insert("Sec-Ch-Ua-Platform", HeaderValue::from_static("Linux"));
default_headers.insert(
"Sec-Ch-Ua",
HeaderValue::from_static("\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"138\", \"Google Chrome\";v=\"138\""),
);
default_headers.insert("Sec-Fetch-Site", HeaderValue::from_static("none"));
default_headers.insert("Sec-Fetch-Mode", HeaderValue::from_static("navigate"));
default_headers.insert("Sec-Fetch-User", HeaderValue::from_static("?1"));
default_headers.insert("Sec-Fetch-Dest", HeaderValue::from_static("document"));
// Generate the cookie store
let cookie_store = Arc::new(Jar::default());
@@ -56,6 +72,7 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.default_headers(default_headers.clone())
.http1_title_case_headers()
.build()
.expect("Failed to build client")
});
@@ -158,7 +175,7 @@ fn is_valid_domain(domain: &str) -> bool {
}
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{domain}.png", CONFIG.icon_cache_folder());
let path = format!("{domain}.png");
// Check for expiration of negatively cached copy
if icon_is_negcached(&path).await {
@@ -177,7 +194,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
// Get the icon, or None in case of error
match download_icon(domain).await {
Ok((icon, icon_type)) => {
save_icon(&path, &icon).await;
save_icon(&path, icon.to_vec()).await;
Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
}
Err(e) => {
@@ -190,7 +207,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
warn!("Unable to download icon: {e:?}");
let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]).await;
save_icon(&miss_indicator, vec![]).await;
None
}
}
@@ -203,11 +220,9 @@ async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
}
// Try to read the cached icon, and return it if it exists
if let Ok(mut f) = File::open(path).await {
let mut buffer = Vec::new();
if f.read_to_end(&mut buffer).await.is_ok() {
return Some(buffer);
if let Ok(operator) = CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
if let Ok(buf) = operator.read(path).await {
return Some(buf.to_vec());
}
}
@@ -215,9 +230,11 @@ async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
}
async fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
let meta = symlink_metadata(path).await?;
let modified = meta.modified()?;
let age = SystemTime::now().duration_since(modified)?;
let operator = CONFIG.opendal_operator_for_path_type(PathType::IconCache)?;
let meta = operator.stat(path).await?;
let modified =
meta.last_modified().ok_or_else(|| std::io::Error::other(format!("No last modified time for `{path}`")))?;
let age = SystemTime::now().duration_since(modified.into())?;
Ok(ttl > 0 && ttl <= age.as_secs())
}
@@ -229,8 +246,13 @@ async fn icon_is_negcached(path: &str) -> bool {
match expired {
// No longer negatively cached, drop the marker
Ok(true) => {
if let Err(e) = remove_file(&miss_indicator).await {
error!("Could not remove negative cache indicator for icon {path:?}: {e:?}");
match CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
Ok(operator) => {
if let Err(e) = operator.delete(&miss_indicator).await {
error!("Could not remove negative cache indicator for icon {path:?}: {e:?}");
}
}
Err(e) => error!("Could not remove negative cache indicator for icon {path:?}: {e:?}"),
}
false
}
@@ -316,7 +338,7 @@ struct IconUrlResult {
/// Returns an IconUrlResult which holds a Vector IconList and a string which holds the referer.
/// There will always be two items within the iconlist which hold http(s)://domain.tld/favicon.ico.
/// This does not mean that that location does exists, but it is the default location browser use.
/// This does not mean that location exists, but (it) is the default location the browser uses.
///
/// # Argument
/// * `domain` - A string which holds the domain with extension.
@@ -559,26 +581,46 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
if buffer.is_empty() {
err_silent!("Empty response or unable to find a valid icon", domain);
} else if icon_type == Some("svg+xml") {
let mut svg_filter = Filter::new();
svg_filter.set_data_url_filter(data_url_filter::allow_standard_images);
let mut sanitized_svg = Vec::new();
if svg_filter.filter(&*buffer, &mut sanitized_svg).is_err() {
icon_type = None;
buffer.clear();
} else {
buffer = sanitized_svg.into();
}
}
Ok((buffer, icon_type))
}
async fn save_icon(path: &str, icon: &[u8]) {
match File::create(path).await {
Ok(mut f) => {
f.write_all(icon).await.expect("Error writing icon file");
}
Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => {
create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder");
}
async fn save_icon(path: &str, icon: Vec<u8>) {
let operator = match CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
Ok(operator) => operator,
Err(e) => {
warn!("Unable to save icon: {e:?}");
warn!("Failed to get OpenDAL operator while saving icon: {e}");
return;
}
};
if let Err(e) = operator.write(path, icon).await {
warn!("Unable to save icon: {e:?}");
}
}
fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
fn check_svg_after_xml_declaration(bytes: &[u8]) -> Option<&'static str> {
// Look for SVG tag within the first 1KB
if let Ok(content) = std::str::from_utf8(&bytes[..bytes.len().min(1024)]) {
if content.contains("<svg") || content.contains("<SVG") {
return Some("svg+xml");
}
}
None
}
match bytes {
[137, 80, 78, 71, ..] => Some("png"),
[0, 0, 1, 0, ..] => Some("x-icon"),
@@ -586,6 +628,8 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
[255, 216, 255, ..] => Some("jpeg"),
[71, 73, 70, 56, ..] => Some("gif"),
[66, 77, ..] => Some("bmp"),
[60, 115, 118, 103, ..] => Some("svg+xml"), // Normal svg
[60, 63, 120, 109, 108, ..] => check_svg_after_xml_declaration(bytes), // An svg starting with <?xml
_ => None,
}
}
@@ -597,6 +641,12 @@ async fn stream_to_bytes_limit(res: Response, max_size: usize) -> Result<Bytes,
let mut buf = BytesMut::new();
let mut size = 0;
while let Some(chunk) = stream.next().await {
// It is possible that UnexpectedEof or other errors occur here.
// Most of the time this is not an issue; if there is no more chunked data (or none at all), parsing the HTML will not happen anyway.
// Therefore, if the chunk is an Err, just break and continue with the data we have received.
if chunk.is_err() {
break;
}
let chunk = &chunk?;
size += chunk.len();
buf.extend(chunk);

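get_icon_type above sniffs the content type from standard file signatures, with the new arms covering bare `<svg` and XML-prologue SVGs. A reduced, runnable version of the same matching:

fn icon_type(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        [137, 80, 78, 71, ..] => Some("png"),       // \x89PNG
        [60, 115, 118, 103, ..] => Some("svg+xml"), // b"<svg"
        _ => None,
    }
}

fn main() {
    assert_eq!(icon_type(&[137, 80, 78, 71, 13, 10, 26, 10]), Some("png"));
    assert_eq!(icon_type(b"<svg xmlns='http://www.w3.org/2000/svg'>"), Some("svg+xml"));
}
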
View File

@@ -14,6 +14,7 @@ use crate::{
log_user_event,
two_factor::{authenticator, duo, duo_oidc, email, enforce_2fa_policy, webauthn, yubikey},
},
master_password_policy,
push::register_push_device,
ApiResult, EmptyResult, JsonResult,
},
@@ -132,18 +133,6 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
Ok(Json(result))
}
#[derive(Default, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct MasterPasswordPolicy {
min_complexity: u8,
min_length: u32,
require_lower: bool,
require_upper: bool,
require_numbers: bool,
require_special: bool,
enforce_on_login: bool,
}
async fn _password_login(
data: ConnectData,
user_id: &mut Option<UserId>,
@@ -300,36 +289,7 @@ async fn _password_login(
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec, data.client_id);
device.save(conn).await?;
// Fetch all valid Master Password Policies and merge them into one, taking all trues and the largest numbers
let master_password_policies: Vec<MasterPasswordPolicy> =
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
&user.uuid,
OrgPolicyType::MasterPassword,
conn,
)
.await
.into_iter()
.filter_map(|p| serde_json::from_str(&p.data).ok())
.collect();
// NOTE: Upstream still uses PascalCase here for `Object`!
let master_password_policy = if !master_password_policies.is_empty() {
let mut mpp_json = json!(master_password_policies.into_iter().reduce(|acc, policy| {
MasterPasswordPolicy {
min_complexity: acc.min_complexity.max(policy.min_complexity),
min_length: acc.min_length.max(policy.min_length),
require_lower: acc.require_lower || policy.require_lower,
require_upper: acc.require_upper || policy.require_upper,
require_numbers: acc.require_numbers || policy.require_numbers,
require_special: acc.require_special || policy.require_special,
enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
}
}));
mpp_json["Object"] = json!("masterPasswordPolicy");
mpp_json
} else {
json!({"Object": "masterPasswordPolicy"})
};
let master_password_policy = master_password_policy(&user, conn).await;
let mut result = json!({
"access_token": access_token,
@@ -758,7 +718,10 @@ async fn register_verification_email(
) -> ApiResult<RegisterVerificationResponse> {
let data = data.into_inner();
if !CONFIG.is_signup_allowed(&data.email) {
// the registration can only continue if signup is allowed or there exists an invitation
if !(CONFIG.is_signup_allowed(&data.email)
|| (!CONFIG.mail_enabled() && Invitation::find_by_mail(&data.email, &mut conn).await.is_some()))
{
err!("Registration not allowed or user already exists")
}

View File

@@ -32,7 +32,10 @@ pub use crate::api::{
web::routes as web_routes,
web::static_files,
};
use crate::db::{models::User, DbConn};
use crate::db::{
models::{OrgPolicy, OrgPolicyType, User},
DbConn,
};
// Type aliases for API methods results
type ApiResult<T> = Result<T, crate::error::Error>;
@@ -68,3 +71,49 @@ impl PasswordOrOtpData {
Ok(())
}
}
#[derive(Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct MasterPasswordPolicy {
min_complexity: Option<u8>,
min_length: Option<u32>,
require_lower: bool,
require_upper: bool,
require_numbers: bool,
require_special: bool,
enforce_on_login: bool,
}
// Fetch all valid Master Password Policies and merge them into one, taking all trues and the largest numbers
async fn master_password_policy(user: &User, conn: &DbConn) -> Value {
let master_password_policies: Vec<MasterPasswordPolicy> =
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
&user.uuid,
OrgPolicyType::MasterPassword,
conn,
)
.await
.into_iter()
.filter_map(|p| serde_json::from_str(&p.data).ok())
.collect();
let mut mpp_json = if !master_password_policies.is_empty() {
json!(master_password_policies.into_iter().reduce(|acc, policy| {
MasterPasswordPolicy {
min_complexity: acc.min_complexity.max(policy.min_complexity),
min_length: acc.min_length.max(policy.min_length),
require_lower: acc.require_lower || policy.require_lower,
require_upper: acc.require_upper || policy.require_upper,
require_numbers: acc.require_numbers || policy.require_numbers,
require_special: acc.require_special || policy.require_special,
enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
}
}))
} else {
json!({})
};
// NOTE: Upstream still uses PascalCase here for `Object`!
mpp_json["Object"] = json!("masterPasswordPolicy");
mpp_json
}

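The reduce above merges every applicable policy into the strictest combination: numeric limits via max(), boolean requirements via OR. A trimmed sketch of just that rule (two fields instead of seven):

#[derive(Clone, Copy, Debug, PartialEq)]
struct Policy {
    min_length: Option<u32>,
    require_special: bool,
}

fn merge(policies: &[Policy]) -> Option<Policy> {
    policies.iter().copied().reduce(|acc, p| Policy {
        // Option<u32> orders None below Some(_), so max() picks the largest set value
        min_length: acc.min_length.max(p.min_length),
        require_special: acc.require_special || p.require_special,
    })
}

fn main() {
    let merged = merge(&[
        Policy { min_length: Some(12), require_special: false },
        Policy { min_length: Some(16), require_special: true },
    ]);
    assert_eq!(merged, Some(Policy { min_length: Some(16), require_special: true }));
}
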
View File

@@ -57,6 +57,7 @@ fn vaultwarden_css() -> Cached<Css<String>> {
let css_options = json!({
"signup_disabled": !CONFIG.signups_allowed() && CONFIG.signups_domains_whitelist().is_empty(),
"mail_enabled": CONFIG.mail_enabled(),
"mail_2fa_enabled": CONFIG._enable_email_2fa(),
"yubico_enabled": CONFIG._enable_yubico() && CONFIG.yubico_client_id().is_some() && CONFIG.yubico_secret_key().is_some(),
"emergency_access_allowed": CONFIG.emergency_access_allowed(),
"sends_allowed": CONFIG.sends_allowed(),

View File

@@ -7,16 +7,14 @@ use once_cell::sync::{Lazy, OnceCell};
use openssl::rsa::Rsa;
use serde::de::DeserializeOwned;
use serde::ser::Serialize;
use std::{
env,
fs::File,
io::{Read, Write},
net::IpAddr,
};
use std::{env, net::IpAddr};
use crate::db::models::{
AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
SendFileId, SendId, UserId,
use crate::{
config::PathType,
db::models::{
AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
SendFileId, SendId, UserId,
},
};
use crate::{error::Error, CONFIG};
@@ -40,37 +38,33 @@ static JWT_REGISTER_VERIFY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|regis
static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();
pub fn initialize_keys() -> Result<(), Error> {
fn read_key(create_if_missing: bool) -> Result<(Rsa<openssl::pkey::Private>, Vec<u8>), Error> {
let mut priv_key_buffer = Vec::with_capacity(2048);
pub async fn initialize_keys() -> Result<(), Error> {
use std::io::Error;
let mut priv_key_file = File::options()
.create(create_if_missing)
.truncate(false)
.read(true)
.write(create_if_missing)
.open(CONFIG.private_rsa_key())?;
let rsa_key_filename = std::path::PathBuf::from(CONFIG.private_rsa_key())
.file_name()
.ok_or_else(|| Error::other("Private RSA key path missing filename"))?
.to_str()
.ok_or_else(|| Error::other("Private RSA key path filename is not valid UTF-8"))?
.to_string();
#[allow(clippy::verbose_file_reads)]
let bytes_read = priv_key_file.read_to_end(&mut priv_key_buffer)?;
let operator = CONFIG.opendal_operator_for_path_type(PathType::RsaKey).map_err(Error::other)?;
let rsa_key = if bytes_read > 0 {
Rsa::private_key_from_pem(&priv_key_buffer[..bytes_read])?
} else if create_if_missing {
// Only create the key if the file doesn't exist or is empty
let rsa_key = Rsa::generate(2048)?;
priv_key_buffer = rsa_key.private_key_to_pem()?;
priv_key_file.write_all(&priv_key_buffer)?;
info!("Private key '{}' created correctly", CONFIG.private_rsa_key());
rsa_key
} else {
err!("Private key does not exist or invalid format", CONFIG.private_rsa_key());
};
let priv_key_buffer = match operator.read(&rsa_key_filename).await {
Ok(buffer) => Some(buffer),
Err(e) if e.kind() == opendal::ErrorKind::NotFound => None,
Err(e) => return Err(e.into()),
};
Ok((rsa_key, priv_key_buffer))
}
let (priv_key, priv_key_buffer) = read_key(true).or_else(|_| read_key(false))?;
let (priv_key, priv_key_buffer) = if let Some(priv_key_buffer) = priv_key_buffer {
(Rsa::private_key_from_pem(priv_key_buffer.to_vec().as_slice())?, priv_key_buffer.to_vec())
} else {
let rsa_key = Rsa::generate(2048)?;
let priv_key_buffer = rsa_key.private_key_to_pem()?;
operator.write(&rsa_key_filename, priv_key_buffer.clone()).await?;
info!("Private key '{}' created correctly", CONFIG.private_rsa_key());
(rsa_key, priv_key_buffer)
};
let pub_key_buffer = priv_key.public_key_to_pem()?;
let enc = EncodingKey::from_rsa_pem(&priv_key_buffer)?;

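The rewritten initialize_keys reads the key through OpenDAL and only generates one when the read comes back NotFound; any other error aborts startup. The same read-or-create shape in isolation (name and generator are placeholders):

async fn read_or_create(
    op: &opendal::Operator,
    name: &str,
    generate: impl FnOnce() -> Vec<u8>,
) -> opendal::Result<Vec<u8>> {
    match op.read(name).await {
        Ok(buf) => Ok(buf.to_vec()),
        // NotFound means "no key yet"; everything else is a real failure.
        Err(e) if e.kind() == opendal::ErrorKind::NotFound => {
            let bytes = generate();
            op.write(name, bytes.clone()).await?;
            Ok(bytes)
        }
        Err(e) => Err(e),
    }
}
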
View File

@@ -3,7 +3,7 @@ use std::{
process::exit,
sync::{
atomic::{AtomicBool, Ordering},
RwLock,
LazyLock, RwLock,
},
};
@@ -22,10 +22,32 @@ static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
get_env("CONFIG_FILE").unwrap_or_else(|| format!("{data_folder}/config.json"))
});
static CONFIG_FILE_PARENT_DIR: LazyLock<String> = LazyLock::new(|| {
let path = std::path::PathBuf::from(&*CONFIG_FILE);
path.parent().unwrap_or(std::path::Path::new("data")).to_str().unwrap_or("data").to_string()
});
static CONFIG_FILENAME: LazyLock<String> = LazyLock::new(|| {
let path = std::path::PathBuf::from(&*CONFIG_FILE);
path.file_name().unwrap_or(std::ffi::OsStr::new("config.json")).to_str().unwrap_or("config.json").to_string()
});
pub static SKIP_CONFIG_VALIDATION: AtomicBool = AtomicBool::new(false);
pub static CONFIG: Lazy<Config> = Lazy::new(|| {
Config::load().unwrap_or_else(|e| {
std::thread::spawn(|| {
let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap_or_else(|e| {
println!("Error loading config:\n {e:?}\n");
exit(12)
});
rt.block_on(Config::load()).unwrap_or_else(|e| {
println!("Error loading config:\n {e:?}\n");
exit(12)
})
})
.join()
.unwrap_or_else(|e| {
println!("Error loading config:\n {e:?}\n");
exit(12)
})
@@ -110,10 +132,11 @@ macro_rules! make_config {
builder
}
fn from_file(path: &str) -> Result<Self, Error> {
let config_str = std::fs::read_to_string(path)?;
println!("[INFO] Using saved config from `{path}` for configuration.\n");
serde_json::from_str(&config_str).map_err(Into::into)
async fn from_file() -> Result<Self, Error> {
let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
let config_bytes = operator.read(&CONFIG_FILENAME).await?;
println!("[INFO] Using saved config from `{}` for configuration.\n", *CONFIG_FILE);
serde_json::from_slice(&config_bytes.to_vec()).map_err(Into::into)
}
fn clear_non_editable(&mut self) {
@@ -833,10 +856,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
}
// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
// Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
// Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
// iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
//
// NOTE: Move deprecated flags to the utils::parse_experimental_client_feature_flags() DEPRECATED_FLAGS const!
const KNOWN_FLAGS: &[&str] = &[
@@ -1138,11 +1161,103 @@ fn smtp_convert_deprecated_ssl_options(smtp_ssl: Option<bool>, smtp_explicit_tls
"starttls".to_string()
}
fn opendal_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
// Cache of previously built operators by path
static OPERATORS_BY_PATH: LazyLock<dashmap::DashMap<String, opendal::Operator>> =
LazyLock::new(dashmap::DashMap::new);
if let Some(operator) = OPERATORS_BY_PATH.get(path) {
return Ok(operator.clone());
}
let operator = if path.starts_with("s3://") {
#[cfg(not(s3))]
return Err(opendal::Error::new(opendal::ErrorKind::ConfigInvalid, "S3 support is not enabled").into());
#[cfg(s3)]
opendal_s3_operator_for_path(path)?
} else {
let builder = opendal::services::Fs::default().root(path);
opendal::Operator::new(builder)?.finish()
};
OPERATORS_BY_PATH.insert(path.to_string(), operator.clone());
Ok(operator)
}
#[cfg(s3)]
fn opendal_s3_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
use crate::http_client::aws::AwsReqwestConnector;
use aws_config::{default_provider::credentials::DefaultCredentialsChain, provider_config::ProviderConfig};
// This is a custom AWS credential loader that uses the official AWS Rust
// SDK config crate to load credentials. This ensures maximum compatibility
// with AWS credential configurations. For example, OpenDAL doesn't support
// AWS SSO temporary credentials yet.
struct OpenDALS3CredentialLoader {}
#[async_trait]
impl reqsign::AwsCredentialLoad for OpenDALS3CredentialLoader {
async fn load_credential(&self, _client: reqwest::Client) -> anyhow::Result<Option<reqsign::AwsCredential>> {
use aws_credential_types::provider::ProvideCredentials as _;
use tokio::sync::OnceCell;
static DEFAULT_CREDENTIAL_CHAIN: OnceCell<DefaultCredentialsChain> = OnceCell::const_new();
let chain = DEFAULT_CREDENTIAL_CHAIN
.get_or_init(|| {
let reqwest_client = reqwest::Client::builder().build().unwrap();
let connector = AwsReqwestConnector {
client: reqwest_client,
};
let conf = ProviderConfig::default().with_http_client(connector);
DefaultCredentialsChain::builder().configure(conf).build()
})
.await;
let creds = chain.provide_credentials().await?;
Ok(Some(reqsign::AwsCredential {
access_key_id: creds.access_key_id().to_string(),
secret_access_key: creds.secret_access_key().to_string(),
session_token: creds.session_token().map(|s| s.to_string()),
expires_in: creds.expiry().map(|expiration| expiration.into()),
}))
}
}
const OPEN_DAL_S3_CREDENTIAL_LOADER: OpenDALS3CredentialLoader = OpenDALS3CredentialLoader {};
let url = Url::parse(path).map_err(|e| format!("Invalid S3 URL path {path:?}: {e}"))?;
let bucket = url.host_str().ok_or_else(|| format!("Missing Bucket name in data folder S3 URL {path:?}"))?;
let builder = opendal::services::S3::default()
.customized_credential_load(Box::new(OPEN_DAL_S3_CREDENTIAL_LOADER))
.enable_virtual_host_style()
.bucket(bucket)
.root(url.path())
.default_storage_class("INTELLIGENT_TIERING");
Ok(opendal::Operator::new(builder)?.finish())
}
pub enum PathType {
Data,
IconCache,
Attachments,
Sends,
RsaKey,
}
impl Config {
pub fn load() -> Result<Self, Error> {
pub async fn load() -> Result<Self, Error> {
// Loading from env and file
let _env = ConfigBuilder::from_env();
let _usr = ConfigBuilder::from_file(&CONFIG_FILE).unwrap_or_default();
let _usr = ConfigBuilder::from_file().await.unwrap_or_default();
// Create merged config, config file overwrites env
let mut _overrides = Vec::new();
@@ -1166,7 +1281,7 @@ impl Config {
})
}
pub fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
pub async fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
// Remove default values
//let builder = other.remove(&self.inner.read().unwrap()._env);
@@ -1198,20 +1313,19 @@ impl Config {
}
//Save to file
use std::{fs::File, io::Write};
let mut file = File::create(&*CONFIG_FILE)?;
file.write_all(config_str.as_bytes())?;
let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
operator.write(&CONFIG_FILENAME, config_str).await?;
Ok(())
}
fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
async fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
let builder = {
let usr = &self.inner.read().unwrap()._usr;
let mut _overrides = Vec::new();
usr.merge(&other, false, &mut _overrides)
};
self.update_config(builder, false)
self.update_config(builder, false).await
}
/// Tests whether an email's domain is allowed. A domain is allowed if it
@@ -1253,8 +1367,9 @@ impl Config {
}
}
pub fn delete_user_config(&self) -> Result<(), Error> {
std::fs::remove_file(&*CONFIG_FILE)?;
pub async fn delete_user_config(&self) -> Result<(), Error> {
let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
operator.delete(&CONFIG_FILENAME).await?;
// Empty user config
let usr = ConfigBuilder::default();
@@ -1284,7 +1399,7 @@ impl Config {
inner._enable_smtp && (inner.smtp_host.is_some() || inner.use_sendmail)
}
pub fn get_duo_akey(&self) -> String {
pub async fn get_duo_akey(&self) -> String {
if let Some(akey) = self._duo_akey() {
akey
} else {
@@ -1295,7 +1410,7 @@ impl Config {
_duo_akey: Some(akey_s.clone()),
..Default::default()
};
self.update_config_partial(builder).ok();
self.update_config_partial(builder).await.ok();
akey_s
}
@@ -1308,6 +1423,23 @@ impl Config {
token.is_some() && !token.unwrap().trim().is_empty()
}
pub fn opendal_operator_for_path_type(&self, path_type: PathType) -> Result<opendal::Operator, Error> {
let path = match path_type {
PathType::Data => self.data_folder(),
PathType::IconCache => self.icon_cache_folder(),
PathType::Attachments => self.attachments_folder(),
PathType::Sends => self.sends_folder(),
PathType::RsaKey => std::path::Path::new(&self.rsa_key_filename())
.parent()
.ok_or_else(|| std::io::Error::other("Failed to get directory of RSA key file"))?
.to_str()
.ok_or_else(|| std::io::Error::other("Failed to convert RSA key file directory to UTF-8 string"))?
.to_string(),
};
opendal_operator_for_path(&path)
}
pub fn render_template<T: serde::ser::Serialize>(&self, name: &str, data: &T) -> Result<String, Error> {
if self.reload_templates() {
warn!("RELOADING TEMPLATES");

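For the s3:// data folder introduced above, the URL decomposes as bucket-in-host, root-in-path. A two-line check of that assumption (the bucket name is invented):

fn main() {
    let url = url::Url::parse("s3://my-vaultwarden-bucket/data").unwrap();
    assert_eq!(url.host_str(), Some("my-vaultwarden-bucket"));
    assert_eq!(url.path(), "/data"); // becomes the operator root
}
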
View File

@@ -1,11 +1,11 @@
use std::io::ErrorKind;
use std::time::Duration;
use bigdecimal::{BigDecimal, ToPrimitive};
use derive_more::{AsRef, Deref, Display};
use serde_json::Value;
use super::{CipherId, OrganizationId, UserId};
use crate::CONFIG;
use crate::{config::PathType, CONFIG};
use macros::IdFromParam;
db_object! {
@@ -41,24 +41,30 @@ impl Attachment {
}
pub fn get_file_path(&self) -> String {
format!("{}/{}/{}", CONFIG.attachments_folder(), self.cipher_uuid, self.id)
format!("{}/{}", self.cipher_uuid, self.id)
}
pub fn get_url(&self, host: &str) -> String {
let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id)
pub async fn get_url(&self, host: &str) -> Result<String, crate::Error> {
let operator = CONFIG.opendal_operator_for_path_type(PathType::Attachments)?;
if operator.info().scheme() == opendal::Scheme::Fs {
let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
Ok(format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id))
} else {
Ok(operator.presign_read(&self.get_file_path(), Duration::from_secs(5 * 60)).await?.uri().to_string())
}
}
pub fn to_json(&self, host: &str) -> Value {
json!({
pub async fn to_json(&self, host: &str) -> Result<Value, crate::Error> {
Ok(json!({
"id": self.id,
"url": self.get_url(host),
"url": self.get_url(host).await?,
"fileName": self.file_name,
"size": self.file_size.to_string(),
"sizeName": crate::util::get_display_size(self.file_size),
"key": self.akey,
"object": "attachment"
})
}))
}
}
@@ -104,26 +110,26 @@ impl Attachment {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
let _: () = crate::util::retry(
crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
10,
)
.map_res("Error deleting attachment")?;
.map(|_| ())
.map_res("Error deleting attachment")
}}?;
let file_path = &self.get_file_path();
let operator = CONFIG.opendal_operator_for_path_type(PathType::Attachments)?;
let file_path = self.get_file_path();
match std::fs::remove_file(file_path) {
// Ignore "file not found" errors. This can happen when the
// upstream caller has already cleaned up the file as part of
// its own error handling.
Err(e) if e.kind() == ErrorKind::NotFound => {
debug!("File '{file_path}' already deleted.");
Ok(())
}
Err(e) => Err(e.into()),
_ => Ok(()),
if let Err(e) = operator.delete(&file_path).await {
if e.kind() == opendal::ErrorKind::NotFound {
debug!("File '{file_path}' already deleted.");
} else {
return Err(e.into());
}
}
}
}
Ok(())
}
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {

View File

@@ -141,18 +141,28 @@ impl Cipher {
cipher_sync_data: Option<&CipherSyncData>,
sync_type: CipherSyncType,
conn: &mut DbConn,
) -> Value {
) -> Result<Value, crate::Error> {
use crate::util::{format_date, validate_and_format_date};
let mut attachments_json: Value = Value::Null;
if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(attachments) = cipher_sync_data.cipher_attachments.get(&self.uuid) {
attachments_json = attachments.iter().map(|c| c.to_json(host)).collect();
if !attachments.is_empty() {
let mut attachments_json_vec = vec![];
for attachment in attachments {
attachments_json_vec.push(attachment.to_json(host).await?);
}
attachments_json = Value::Array(attachments_json_vec);
}
}
} else {
let attachments = Attachment::find_by_cipher(&self.uuid, conn).await;
if !attachments.is_empty() {
attachments_json = attachments.iter().map(|c| c.to_json(host)).collect()
let mut attachments_json_vec = vec![];
for attachment in attachments {
attachments_json_vec.push(attachment.to_json(host).await?);
}
attachments_json = Value::Array(attachments_json_vec);
}
}
@@ -372,6 +382,11 @@ impl Cipher {
// the "Read Only" or "Hide Passwords" restrictions for the user.
json_object["edit"] = json!(!read_only);
json_object["viewPassword"] = json!(!hide_passwords);
// The new key used by clients since v2025.6.0
json_object["permissions"] = json!({
"delete": !read_only,
"restore": !read_only,
});
}
let key = match self.atype {
@@ -384,7 +399,7 @@ impl Cipher {
};
json_object[key] = type_data_json;
json_object
Ok(json_object)
}
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {
@@ -594,22 +609,23 @@ impl Cipher {
let mut rows: Vec<(bool, bool, bool)> = Vec::new();
if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
for collection in collections {
//User permissions
// User permissions
if let Some(cu) = cipher_sync_data.user_collections.get(collection) {
rows.push((cu.read_only, cu.hide_passwords, cu.manage));
}
//Group permissions
if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
// Group permissions
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
rows.push((cg.read_only, cg.hide_passwords, cg.manage));
}
}
}
rows
} else {
let mut access_flags = self.get_user_collections_access_flags(user_uuid, conn).await;
access_flags.append(&mut self.get_group_collections_access_flags(user_uuid, conn).await);
access_flags
let user_permissions = self.get_user_collections_access_flags(user_uuid, conn).await;
if !user_permissions.is_empty() {
user_permissions
} else {
self.get_group_collections_access_flags(user_uuid, conn).await
}
};
if rows.is_empty() {
@@ -618,6 +634,9 @@ impl Cipher {
}
// A cipher can be in multiple collections with inconsistent access flags.
// Also, user permissions overrule group permissions
// and only user permissions are returned by the code above.
//
// For example, a cipher could be in one collection where the user has
// read-only access, but also in another collection where the user has
// read/write access. For a flag to be in effect for a cipher, upstream
@@ -626,13 +645,15 @@ impl Cipher {
// and `hide_passwords` columns. This could ideally be done as part of the
// query, but Diesel doesn't support a min() or bool_and() function on
// booleans and this behavior isn't portable anyway.
//
// The only exception is for the `manage` flag, that needs a boolean OR!
let mut read_only = true;
let mut hide_passwords = true;
let mut manage = false;
for (ro, hp, mn) in rows.iter() {
read_only &= ro;
hide_passwords &= hp;
manage &= mn;
manage |= mn;
}
Some((read_only, hide_passwords, manage))

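The corrected fold above treats read_only and hide_passwords as restrictive (every collection must impose them) but manage as permissive (one managing collection suffices), which is what the `manage |= mn` fix changes. The same logic, runnable on its own:

fn effective(rows: &[(bool, bool, bool)]) -> (bool, bool, bool) {
    let (mut read_only, mut hide_passwords, mut manage) = (true, true, false);
    for &(ro, hp, mn) in rows {
        read_only &= ro;      // AND: one writable collection clears the restriction
        hide_passwords &= hp; // AND: one password-visible collection clears it
        manage |= mn;         // OR: one managing collection grants manage
    }
    (read_only, hide_passwords, manage)
}

fn main() {
    // Read-only in one collection, writable and managing in another.
    let flags = effective(&[(true, true, false), (false, false, true)]);
    assert_eq!(flags, (false, false, true));
}
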
View File

@@ -97,13 +97,13 @@ impl Collection {
(
cu.read_only,
cu.hide_passwords,
cu.manage || (is_manager && !cu.read_only && !cu.hide_passwords),
is_manager && (cu.manage || (!cu.read_only && !cu.hide_passwords)),
)
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(&self.uuid) {
(
cg.read_only,
cg.hide_passwords,
cg.manage || (is_manager && !cg.read_only && !cg.hide_passwords),
is_manager && (cg.manage || (!cg.read_only && !cg.hide_passwords)),
)
} else {
(false, false, false)
@@ -114,7 +114,9 @@ impl Collection {
} else {
match Membership::find_confirmed_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(_) if self.is_manageable_by_user(user_uuid, conn).await => (false, false, true),
Some(m) if m.atype == MembershipType::Manager && self.is_manageable_by_user(user_uuid, conn).await => {
(false, false, true)
}
Some(m) => {
let is_manager = m.atype == MembershipType::Manager;
let read_only = !self.is_writable_by_user(user_uuid, conn).await;

View File

@@ -211,7 +211,7 @@ impl OrgPolicy {
pub async fn find_accepted_and_confirmed_by_user_and_active_policy(
user_uuid: &UserId,
policy_type: OrgPolicyType,
conn: &mut DbConn,
conn: &DbConn,
) -> Vec<Self> {
db_run! { conn: {
org_policies::table

View File

@@ -1,7 +1,7 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use crate::util::LowerCase;
use crate::{config::PathType, util::LowerCase, CONFIG};
use super::{OrganizationId, User, UserId};
use id::SendId;
@@ -226,7 +226,8 @@ impl Send {
self.update_users_revision(conn).await;
if self.atype == SendType::File as i32 {
std::fs::remove_dir_all(std::path::Path::new(&crate::CONFIG.sends_folder()).join(&self.uuid)).ok();
let operator = CONFIG.opendal_operator_for_path_type(PathType::Sends)?;
operator.remove_all(&self.uuid).await.ok();
}
db_run! { conn: {

View File

@@ -46,6 +46,7 @@ use jsonwebtoken::errors::Error as JwtErr;
use lettre::address::AddressError as AddrErr;
use lettre::error::Error as LettreErr;
use lettre::transport::smtp::Error as SmtpErr;
use opendal::Error as OpenDALErr;
use openssl::error::ErrorStack as SSLErr;
use regex::Error as RegexErr;
use reqwest::Error as ReqErr;
@@ -98,6 +99,8 @@ make_error! {
DieselCon(DieselConErr): _has_source, _api_error,
Webauthn(WebauthnErr): _has_source, _api_error,
OpenDAL(OpenDALErr): _has_source, _api_error,
}
impl std::fmt::Debug for Error {

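Registering OpenDAL(OpenDALErr) with make_error! presumably generates, among other things, a From impl so that `?` converts opendal errors into the crate's Error. A guess at the essential expansion; the real macro also wires up sources and the API error JSON:

enum Error {
    OpenDAL(opendal::Error),
    // ...other variants elided
}

impl From<opendal::Error> for Error {
    fn from(e: opendal::Error) -> Self {
        Error::OpenDAL(e)
    }
}

async fn read(op: &opendal::Operator, path: &str) -> Result<Vec<u8>, Error> {
    Ok(op.read(path).await?.to_vec()) // `?` uses the From impl above
}
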
View File

@@ -244,3 +244,61 @@ impl Resolve for CustomDnsResolver {
})
}
}
#[cfg(s3)]
pub(crate) mod aws {
use aws_smithy_runtime_api::client::{
http::{HttpClient, HttpConnector, HttpConnectorFuture, HttpConnectorSettings, SharedHttpConnector},
orchestrator::HttpResponse,
result::ConnectorError,
runtime_components::RuntimeComponents,
};
use reqwest::Client;
// Adapter that wraps reqwest to be compatible with the AWS SDK
#[derive(Debug)]
pub(crate) struct AwsReqwestConnector {
pub(crate) client: Client,
}
impl HttpConnector for AwsReqwestConnector {
fn call(&self, request: aws_smithy_runtime_api::client::orchestrator::HttpRequest) -> HttpConnectorFuture {
// Convert the AWS-style request to a reqwest request
let client = self.client.clone();
let future = async move {
let method = reqwest::Method::from_bytes(request.method().as_bytes())
.map_err(|e| ConnectorError::user(Box::new(e)))?;
let mut req_builder = client.request(method, request.uri().to_string());
for (name, value) in request.headers() {
req_builder = req_builder.header(name, value);
}
if let Some(body_bytes) = request.body().bytes() {
req_builder = req_builder.body(body_bytes.to_vec());
}
let response = req_builder.send().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
let status = response.status().into();
let bytes = response.bytes().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
Ok(HttpResponse::new(status, bytes.into()))
};
HttpConnectorFuture::new(Box::pin(future))
}
}
impl HttpClient for AwsReqwestConnector {
fn http_connector(
&self,
_settings: &HttpConnectorSettings,
_components: &RuntimeComponents,
) -> SharedHttpConnector {
SharedHttpConnector::new(AwsReqwestConnector {
client: self.client.clone(),
})
}
}
}

View File

@@ -61,7 +61,7 @@ mod util;
use crate::api::core::two_factor::duo_oidc::purge_duo_contexts;
use crate::api::purge_auth_requests;
use crate::api::{WS_ANONYMOUS_SUBSCRIPTIONS, WS_USERS};
pub use config::CONFIG;
pub use config::{PathType, CONFIG};
pub use error::{Error, MapResult};
use rocket::data::{Limits, ToByteUnit};
use std::sync::{atomic::Ordering, Arc};
@@ -75,16 +75,13 @@ async fn main() -> Result<(), Error> {
let level = init_logging()?;
check_data_folder().await;
auth::initialize_keys().unwrap_or_else(|e| {
auth::initialize_keys().await.unwrap_or_else(|e| {
error!("Error creating private key '{}'\n{e:?}\nExiting Vaultwarden!", CONFIG.private_rsa_key());
exit(1);
});
check_web_vault();
create_dir(&CONFIG.icon_cache_folder(), "icon cache");
create_dir(&CONFIG.tmp_folder(), "tmp folder");
create_dir(&CONFIG.sends_folder(), "sends folder");
create_dir(&CONFIG.attachments_folder(), "attachments folder");
let pool = create_db_pool().await;
schedule_jobs(pool.clone());
@@ -464,6 +461,24 @@ fn create_dir(path: &str, description: &str) {
async fn check_data_folder() {
let data_folder = &CONFIG.data_folder();
if data_folder.starts_with("s3://") {
if let Err(e) = CONFIG
.opendal_operator_for_path_type(PathType::Data)
.unwrap_or_else(|e| {
error!("Failed to create S3 operator for data folder '{data_folder}': {e:?}");
exit(1);
})
.check()
.await
{
error!("Could not access S3 data folder '{data_folder}': {e:?}");
exit(1);
}
return;
}
let path = Path::new(data_folder);
if !path.exists() {
error!("Data folder '{data_folder}' doesn't exist.");

View File

@@ -54,3 +54,7 @@ img {
.vw-copy-toast {
width: 15rem;
}
.abbr-badge {
cursor: help;
}

View File

@@ -208,11 +208,9 @@ function initVersionCheck(dj) {
}
checkVersions("server", serverInstalled, serverLatest, serverLatestCommit);
if (!dj.running_within_container) {
const webInstalled = dj.web_vault_version;
const webLatest = dj.latest_web_build;
checkVersions("web", webInstalled, webLatest, null, dj.web_vault_pre_release);
}
const webInstalled = dj.web_vault_version;
const webLatest = dj.latest_web_build;
checkVersions("web", webInstalled, webLatest, null, dj.web_vault_pre_release);
}
function checkDns(dns_resolved) {

View File

@@ -1,5 +1,5 @@
/*!
* Bootstrap v5.3.6 (https://getbootstrap.com/)
* Bootstrap v5.3.7 (https://getbootstrap.com/)
* Copyright 2011-2025 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
@@ -647,7 +647,7 @@
* Constants
*/
const VERSION = '5.3.6';
const VERSION = '5.3.7';
/**
* Class definition
@@ -4805,7 +4805,6 @@
*
* Shout-out to Angular https://github.com/angular/angular/blob/15.2.8/packages/core/src/sanitization/url_sanitizer.ts#L38
*/
// eslint-disable-next-line unicorn/better-regex
const SAFE_URL_PATTERN = /^(?!javascript:)(?:[a-z0-9+.-]+:|[^&:/?#]*(?:[/?#]|$))/i;
const allowedAttribute = (attribute, allowedAttributeList) => {
const attributeName = attribute.nodeName.toLowerCase();
@@ -5349,6 +5348,7 @@
if (trigger === 'click') {
EventHandler.on(this._element, this.constructor.eventName(EVENT_CLICK$1), this._config.selector, event => {
const context = this._initializeOnDelegatedTarget(event);
context._activeTrigger[TRIGGER_CLICK] = !(context._isShown() && context._activeTrigger[TRIGGER_CLICK]);
context.toggle();
});
} else if (trigger !== TRIGGER_MANUAL) {

View File

@@ -1,6 +1,6 @@
@charset "UTF-8";
/*!
* Bootstrap v5.3.6 (https://getbootstrap.com/)
* Bootstrap v5.3.7 (https://getbootstrap.com/)
* Copyright 2011-2025 The Bootstrap Authors
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/

View File

@@ -4,10 +4,10 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-2.3.1
* https://datatables.net/download/#bs5/dt-2.3.2
*
* Included libraries:
* DataTables 2.3.1
* DataTables 2.3.2
*/
:root {
@@ -17,17 +17,18 @@
--dt-row-stripe: 0, 0, 0;
--dt-row-hover: 0, 0, 0;
--dt-column-ordering: 0, 0, 0;
--dt-header-align-items: center;
--dt-html-background: white;
}
:root.dark {
--dt-html-background: rgb(33, 37, 41);
}
table.dataTable td.dt-control {
table.dataTable tbody td.dt-control {
text-align: center;
cursor: pointer;
}
table.dataTable td.dt-control:before {
table.dataTable tbody td.dt-control:before {
display: inline-block;
box-sizing: border-box;
content: "";
@@ -36,7 +37,7 @@ table.dataTable td.dt-control:before {
border-bottom: 5px solid transparent;
border-right: 0px solid transparent;
}
table.dataTable tr.dt-hasChild td.dt-control:before {
table.dataTable tbody tr.dt-hasChild td.dt-control:before {
border-top: 10px solid rgba(0, 0, 0, 0.5);
border-left: 5px solid transparent;
border-bottom: 0px solid transparent;
@@ -163,7 +164,7 @@ table.dataTable tfoot > tr > td div.dt-column-header,
table.dataTable tfoot > tr > td div.dt-column-footer {
display: flex;
justify-content: space-between;
align-items: center;
align-items: var(--dt-header-align-items);
gap: 4px;
}
table.dataTable thead > tr > th div.dt-column-header span.dt-column-title,
@@ -421,6 +422,10 @@ table.dataTable tbody td.dt-body-nowrap {
white-space: nowrap;
}
:root {
--dt-header-align-items: flex-end;
}
/*! Bootstrap 5 integration for DataTables
*
* ©2020 SpryMedia Ltd, all rights reserved.

View File

@@ -4,13 +4,13 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-2.3.1
* https://datatables.net/download/#bs5/dt-2.3.2
*
* Included libraries:
* DataTables 2.3.1
* DataTables 2.3.2
*/
/*! DataTables 2.3.1
/*! DataTables 2.3.2
* © SpryMedia Ltd - datatables.net/license
*/
@@ -124,7 +124,7 @@
_fnCamelToHungarian( defaults.column, defaults.column, true );
/* Setting up the initialisation object */
_fnCamelToHungarian( defaults, $.extend( oInit, $this.data() ), true );
_fnCamelToHungarian( defaults, $.extend( oInit, _fnEscapeObject($this.data()) ), true );
@@ -513,7 +513,7 @@
*
* @type string
*/
builder: "bs5/dt-2.3.1",
builder: "bs5/dt-2.3.2",
/**
* Buttons. For use with the Buttons extension for DataTables. This is
@@ -554,6 +554,11 @@
*/
errMode: "alert",
/** HTML entity escaping */
escape: {
/** When reading data-* attributes for initialisation options */
attributes: false
},
/**
* Legacy so v1 plug-ins don't throw js errors on load
@@ -4025,7 +4030,7 @@
if ( write ) {
if (unique) {
// Allow column options to be set from HTML attributes
_fnColumnOptions( settings, shifted, jqCell.data() );
_fnColumnOptions( settings, shifted, _fnEscapeObject(jqCell.data()) );
// Get the width for the column. This can be defined from the
// width attribute, style attribute or `columns.width` option
@@ -4271,7 +4276,7 @@
// to the object for the callback.
var empty = {};
DataTable.util.set(ajax.dataSrc)(empty, []);
_fnAjaxDataSrc(oSettings, empty, []);
callback(empty);
}
else {
@@ -5799,9 +5804,11 @@
var run = false;
var columns = column === undefined
? _fnColumnsFromHeader( e.target )
: Array.isArray(column)
? column
: [column];
: typeof column === 'function'
? column()
: Array.isArray(column)
? column
: [column];
if ( columns.length ) {
for ( var i=0, ien=columns.length ; i<ien ; i++ ) {
@@ -6866,6 +6873,19 @@
}
}
/**
* Escape HTML entities in strings, in an object
*/
function _fnEscapeObject(obj) {
if (DataTable.ext.escape.attributes) {
$.each(obj, function (key, val) {
obj[key] = _escapeHtml(val);
})
}
return obj;
}
/**
@@ -10211,7 +10231,7 @@
* @type string
* @default Version number
*/
DataTable.version = "2.3.1";
DataTable.version = "2.3.2";
/**
* Private data store, containing all of the settings objects that are

View File

@@ -7,36 +7,34 @@
<div class="col-md">
<dl class="row">
<dt class="col-sm-5">Server Installed
<span class="badge bg-success d-none" id="server-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning text-dark d-none" id="server-warning" title="There seems to be an update available.">Update</span>
<span class="badge bg-info text-dark d-none" id="server-branch" title="This is a branched version.">Branched</span>
<span class="badge bg-success d-none abbr-badge" id="server-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning text-dark d-none abbr-badge" id="server-warning" title="There seems to be an update available.">Update</span>
<span class="badge bg-info text-dark d-none abbr-badge" id="server-branch" title="This is a branched version.">Branched</span>
</dt>
<dd class="col-sm-7">
<span id="server-installed">{{page_data.current_release}}</span>
</dd>
<dt class="col-sm-5">Server Latest
<span class="badge bg-secondary d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
<span class="badge bg-secondary d-none abbr-badge" id="server-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="server-latest">{{page_data.latest_release}}<span id="server-latest-commit" class="d-none">-{{page_data.latest_commit}}</span></span>
</dd>
{{#if page_data.web_vault_enabled}}
<dt class="col-sm-5">Web Installed
<span class="badge bg-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning text-dark d-none" id="web-warning" title="There seems to be an update available.">Update</span>
<span class="badge bg-info text-dark d-none" id="web-prerelease" title="You seem to be using a pre-release version.">Pre-Release</span>
<span class="badge bg-success d-none abbr-badge" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning text-dark d-none abbr-badge" id="web-warning" title="There seems to be an update available.">Update</span>
<span class="badge bg-info text-dark d-none abbr-badge" id="web-prerelease" title="You seem to be using a pre-release version.">Pre-Release</span>
</dt>
<dd class="col-sm-7">
<span id="web-installed">{{page_data.web_vault_version}}</span>
</dd>
{{#unless page_data.running_within_container}}
<dt class="col-sm-5">Web Latest
<span class="badge bg-secondary d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
<span class="badge bg-secondary d-none abbr-badge" id="web-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="web-latest">{{page_data.latest_web_build}}</span>
</dd>
{{/unless}}
{{/if}}
{{#unless page_data.web_vault_enabled}}
<dt class="col-sm-5">Web Installed</dt>
@@ -69,14 +67,11 @@
<span class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Uses config.json
{{#if page_data.overrides}}
<span class="badge bg-info text-dark" title="Environment variables are overwritten by a config.json.">Note</span>
{{/if}}
</dt>
<dt class="col-sm-5">Uses config.json</dt>
<dd class="col-sm-7">
{{#if page_data.overrides}}
<abbr class="d-block" title="The following settings are overridden: {{page_data.overrides}}"><b>Yes</b></abbr>
<span class="d-inline"><b>Yes</b></span>
<span class="badge bg-info text-dark abbr-badge" title="Environment variables are overwritten by a config.json.&#013;&#010;{{page_data.overrides}}">Details</span>
{{/if}}
{{#unless page_data.overrides}}
<span class="d-block"><b>No</b></span>
@@ -95,10 +90,10 @@
{{#if page_data.ip_header_exists}}
<dt class="col-sm-5">IP header
{{#if page_data.ip_header_match}}
<span class="badge bg-success" title="IP_HEADER config seems to be valid.">Match</span>
<span class="badge bg-success abbr-badge" title="IP_HEADER config seems to be valid.">Match</span>
{{/if}}
{{#unless page_data.ip_header_match}}
<span class="badge bg-danger" title="IP_HEADER config seems to be invalid. IP's in the log could be invalid. Please fix.">No Match</span>
<span class="badge bg-danger abbr-badge" title="IP_HEADER config seems to be invalid. IP's in the log could be invalid. Please fix.">No Match</span>
{{/unless}}
</dt>
<dd class="col-sm-7">
@@ -114,10 +109,10 @@
{{!-- End if IP Header Exists --}}
<dt class="col-sm-5">Internet access
{{#if page_data.has_http_access}}
<span class="badge bg-success" title="We have internet access!">Ok</span>
<span class="badge bg-success abbr-badge" title="We have internet access!">Ok</span>
{{/if}}
{{#unless page_data.has_http_access}}
<span class="badge bg-danger" title="There seems to be no internet access. Please fix.">Error</span>
<span class="badge bg-danger abbr-badge" title="There seems to be no internet access. Please fix.">Error</span>
{{/unless}}
</dt>
<dd class="col-sm-7">
@@ -139,8 +134,8 @@
</dd>
<dt class="col-sm-5">Websocket enabled
{{#if page_data.enable_websocket}}
<span class="badge bg-success d-none" id="websocket-success" title="Websocket connection is working.">Ok</span>
<span class="badge bg-danger d-none" id="websocket-error" title="Websocket connection error, validate your reverse proxy configuration!">Error</span>
<span class="badge bg-success d-none abbr-badge" id="websocket-success" title="Websocket connection is working.">Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="websocket-error" title="Websocket connection error, validate your reverse proxy configuration!">Error</span>
{{/if}}
</dt>
<dd class="col-sm-7">
@@ -153,27 +148,27 @@
</dd>
<dt class="col-sm-5">DNS (github.com)
<span class="badge bg-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge bg-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
<span class="badge bg-success d-none abbr-badge" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
</dt>
<dd class="col-sm-7">
<span id="dns-resolved">{{page_data.dns_resolved}}</span>
</dd>
<dt class="col-sm-5">Date & Time (Local)
{{#if page_data.tz_env}}
<span class="badge bg-success" title="Configured TZ environment variable">{{page_data.tz_env}}</span>
<span class="badge bg-success abbr-badge" title="Configured TZ environment variable">{{page_data.tz_env}}</span>
{{/if}}
</dt>
<dd class="col-sm-7">
<span><b>Server:</b> {{page_data.server_time_local}}</span>
</dd>
<dt class="col-sm-5">Date & Time (UTC)
<span class="badge bg-success d-none" id="time-success" title="Server and browser times are within 15 seconds of each other.">Server/Browser Ok</span>
<span class="badge bg-danger d-none" id="time-warning" title="Server and browser times are more than 15 seconds apart.">Server/Browser Error</span>
<span class="badge bg-success d-none" id="ntp-server-success" title="Server and NTP times are within 15 seconds of each other.">Server NTP Ok</span>
<span class="badge bg-danger d-none" id="ntp-server-warning" title="Server and NTP times are more than 15 seconds apart.">Server NTP Error</span>
<span class="badge bg-success d-none" id="ntp-browser-success" title="Browser and NTP times are within 15 seconds of each other.">Browser NTP Ok</span>
<span class="badge bg-danger d-none" id="ntp-browser-warning" title="Browser and NTP times are more than 15 seconds apart.">Browser NTP Error</span>
<span class="badge bg-success d-none abbr-badge" id="time-success" title="Server and browser times are within 15 seconds of each other.">Server/Browser Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="time-warning" title="Server and browser times are more than 15 seconds apart.">Server/Browser Error</span>
<span class="badge bg-success d-none abbr-badge" id="ntp-server-success" title="Server and NTP times are within 15 seconds of each other.">Server NTP Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="ntp-server-warning" title="Server and NTP times are more than 15 seconds apart.">Server NTP Error</span>
<span class="badge bg-success d-none abbr-badge" id="ntp-browser-success" title="Browser and NTP times are within 15 seconds of each other.">Browser NTP Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="ntp-browser-warning" title="Browser and NTP times are more than 15 seconds apart.">Browser NTP Error</span>
</dt>
<dd class="col-sm-7">
<span id="ntp-time" class="d-block"><b>NTP:</b> <span id="ntp-server-string">{{page_data.ntp_time}}</span></span>
@@ -182,10 +177,10 @@
</dd>
<dt class="col-sm-5">Domain configuration
<span class="badge bg-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
<span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not match the browser location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
<span class="badge bg-success d-none" id="https-success" title="Configured to use HTTPS">HTTPS</span>
<span class="badge bg-danger d-none" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
<span class="badge bg-success d-none abbr-badge" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
<span class="badge bg-danger d-none abbr-badge" id="domain-warning" title="The domain variable does not match the browser location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
<span class="badge bg-success d-none abbr-badge" id="https-success" title="Configured to use HTTPS">HTTPS</span>
<span class="badge bg-danger d-none abbr-badge" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
</dt>
<dd class="col-sm-7">
<span id="domain-server" class="d-block"><b>Server:</b> <span id="domain-server-string">{{page_data.admin_url}}</span></span>
@@ -193,8 +188,8 @@
</dd>
<dt class="col-sm-5">HTTP Response validation
<span class="badge bg-success d-none" id="http-response-success" title="All headers and HTTP request responses seem to be ok.">Ok</span>
<span class="badge bg-danger d-none" id="http-response-warning" title="Some headers or HTTP request responses return invalid data!">Error</span>
<span class="badge bg-success d-none abbr-badge" id="http-response-success" title="All headers and HTTP request responses seem to be ok.">Ok</span>
<span class="badge bg-danger d-none abbr-badge" id="http-response-warning" title="Some headers or HTTP request responses return invalid data!">Error</span>
</dt>
<dd class="col-sm-7">
<span id="http-response-errors" class="d-block"></span>

View File

@@ -20,16 +20,50 @@ a[href$="/settings/sponsored-families"] {
@extend %vw-hide;
}
/* Hide the SSO `Email` input field */
.vw-email-sso {
@extend %vw-hide;
}
/* Hide the `Enterprise Single Sign-On` button on the login page */
app-root form.ng-untouched button.\!tw-text-primary-600:nth-child(4) {
{{#if (webver ">=2025.5.1")}}
.vw-sso-login {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(4) {
@extend %vw-hide;
}
{{/if}}
/* Hide the `Log in with passkey` settings */
app-change-password app-webauthn-login-settings {
@extend %vw-hide;
}
/* Hide the `Log in with passkey` link on the login page */
app-root form.ng-untouched a[routerlink="/login-with-passkey"] {
{{#if (webver ">=2025.5.1")}}
.vw-passkey-login {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(3) {
@extend %vw-hide;
}
{{/if}}
/* Hide the `or` separator text that goes with the two buttons hidden above */
app-root form.ng-untouched > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
{{#if (webver ">=2025.5.1")}}
.vw-or-text {
@extend %vw-hide;
}
{{else}}
app-root ng-component > form > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
@extend %vw-hide;
}
{{/if}}
/* Hide the `Other` button on the login page */
.vw-other-login {
@extend %vw-hide;
}
@@ -98,7 +132,7 @@ app-root a[routerlink="/signup"] {
{{/if}}
{{/if}}
{{#unless mail_enabled}}
{{#unless mail_2fa_enabled}}
/* Hide `Email` 2FA if mail is not enabled */
.providers-2fa-1 {
@extend %vw-hide;
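The `{{#if (webver ">=2025.5.1")}}` guards above switch between the new stable `.vw-*` CSS hooks and the older, brittle `nth-child` selectors, depending on the bundled web-vault version. A helper of that shape can be built with the `handlebars` crate's `handlebars_helper!` macro plus the `semver` crate; the sketch below is a hedged illustration (the macro and crates exist as used, but the exact helper body and version detection in the project may differ).

use handlebars::{handlebars_helper, Handlebars};
use semver::{Version, VersionReq};

// Stand-in for however the bundled web-vault version is actually detected.
const WEB_VAULT_VERSION: &str = "2025.7.0";

// `webver` evaluates a semver requirement string (e.g. ">=2025.5.1")
// against the bundled web-vault version and returns a bool for `#if`.
handlebars_helper!(webver: |req: String| {
    let version = Version::parse(WEB_VAULT_VERSION).expect("valid web-vault version");
    VersionReq::parse(&req).map(|r| r.matches(&version)).unwrap_or(false)
});

fn register_helpers(hb: &mut Handlebars<'_>) {
    hb.register_helper("webver", Box::new(webver));
}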

View File

@@ -16,7 +16,7 @@ use tokio::{
time::{sleep, Duration},
};
use crate::CONFIG;
use crate::{config::PathType, CONFIG};
pub struct AppHeaders();
@@ -61,9 +61,11 @@ impl Fairing for AppHeaders {
// The `Cross-Origin-Resource-Policy` header should not be set on images or on the `icon_external` route.
// Otherwise some clients, like the Bitwarden Desktop, will fail to download the icons
let mut is_image = true;
if !(res.headers().get_one("Content-Type").is_some_and(|v| v.starts_with("image/"))
|| req.route().is_some_and(|v| v.name.as_deref() == Some("icon_external")))
{
is_image = false;
res.set_raw_header("Cross-Origin-Resource-Policy", "same-origin");
}
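The fairing now classifies each response before choosing headers: anything with an `image/*` content type, or served by the `icon_external` route, counts as an image and is exempted from `Cross-Origin-Resource-Policy: same-origin`. Extracted as a standalone predicate (illustrative only; the real code reads Rocket's request and response types directly), the check is:

// `content_type` stands in for res.headers().get_one("Content-Type") and
// `route_name` for the name of the matched Rocket route.
fn is_image_response(content_type: Option<&str>, route_name: Option<&str>) -> bool {
    content_type.is_some_and(|v| v.starts_with("image/")) || route_name == Some("icon_external")
}

fn main() {
    assert!(is_image_response(Some("image/png"), None));
    assert!(is_image_response(None, Some("icon_external")));
    assert!(!is_image_response(Some("text/html"), None)); // gets CORP: same-origin
}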
@@ -71,49 +73,56 @@ impl Fairing for AppHeaders {
// This can cause issues when some MFA requests need to open a popup or page within the clients, like WebAuthn or Duo.
// This is the same behavior as upstream Bitwarden.
if !req_uri_path.ends_with("connector.html") {
// # Frame Ancestors:
// Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
// Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
// Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
// # img/child/frame src:
// Have I Been Pwned, to allow those calls to work.
// # Connect src:
// Leaked Passwords check: api.pwnedpasswords.com
// 2FA/MFA Site check: api.2fa.directory
// # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
// app.simplelogin.io, app.addy.io, api.fastmail.com, quack.duckduckgo.com
let csp = format!(
"default-src 'none'; \
font-src 'self'; \
manifest-src 'self'; \
base-uri 'self'; \
form-action 'self'; \
object-src 'self' blob:; \
script-src 'self' 'wasm-unsafe-eval'; \
style-src 'self' 'unsafe-inline'; \
child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-ancestors 'self' \
chrome-extension://nngceckbapebfimnlniiiahkandclblb \
chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
moz-extension://* \
{allowed_iframe_ancestors}; \
img-src 'self' data: \
https://haveibeenpwned.com \
{icon_service_csp}; \
connect-src 'self' \
https://api.pwnedpasswords.com \
https://api.2fa.directory \
https://app.simplelogin.io/api/ \
https://app.addy.io/api/ \
https://api.fastmail.com/ \
https://api.forwardemail.net \
{allowed_connect_src};\
",
icon_service_csp = CONFIG._icon_service_csp(),
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
allowed_connect_src = CONFIG.allowed_connect_src(),
);
let csp = if is_image {
// Prevent scripts, frames, objects, etc. from loading alongside images, mainly for SVG images, since these could contain JavaScript and other unsafe items.
// Even though we sanitize SVG images before storing and viewing them, it's better to disallow these elements outright.
String::from("default-src 'none'; img-src 'self' data:; style-src 'unsafe-inline'; script-src 'none'; frame-src 'none'; object-src 'none'")
} else {
// # Frame Ancestors:
// Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
// Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
// Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
// # img/child/frame src:
// Have I Been Pwned, to allow those calls to work.
// # Connect src:
// Leaked Passwords check: api.pwnedpasswords.com
// 2FA/MFA Site check: api.2fa.directory
// # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
// app.simplelogin.io, app.addy.io, api.fastmail.com, api.forwardemail.net
format!(
"default-src 'none'; \
font-src 'self'; \
manifest-src 'self'; \
base-uri 'self'; \
form-action 'self'; \
object-src 'self' blob:; \
script-src 'self' 'wasm-unsafe-eval'; \
style-src 'self' 'unsafe-inline'; \
child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
frame-ancestors 'self' \
chrome-extension://nngceckbapebfimnlniiiahkandclblb \
chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
moz-extension://* \
{allowed_iframe_ancestors}; \
img-src 'self' data: \
https://haveibeenpwned.com \
{icon_service_csp}; \
connect-src 'self' \
https://api.pwnedpasswords.com \
https://api.2fa.directory \
https://app.simplelogin.io/api/ \
https://app.addy.io/api/ \
https://api.fastmail.com/ \
https://api.forwardemail.net \
{allowed_connect_src};\
",
icon_service_csp = CONFIG._icon_service_csp(),
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
allowed_connect_src = CONFIG.allowed_connect_src(),
)
};
res.set_raw_header("Content-Security-Policy", csp);
res.set_raw_header("X-Frame-Options", "SAMEORIGIN");
} else {
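The point of the split above is that images (SVGs in particular) get a locked-down, script-free policy while regular pages keep the full allow-list; note that every directive value in that single-string CSP must be properly quoted. A small hypothetical sanity test (not part of the diff) makes the intent concrete:

#[test]
fn image_csp_blocks_active_content() {
    // Mirrors the image-branch CSP string above.
    let csp = "default-src 'none'; img-src 'self' data:; style-src 'unsafe-inline'; \
               script-src 'none'; frame-src 'none'; object-src 'none'";
    for directive in ["script-src 'none'", "frame-src 'none'", "object-src 'none'"] {
        assert!(csp.contains(directive), "missing directive: {directive}");
    }
}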
@@ -827,6 +836,26 @@ pub fn is_global(ip: std::net::IpAddr) -> bool {
ip.is_global()
}
/// Saves a Rocket temporary file to the OpenDAL Operator at the given path.
pub async fn save_temp_file(
path_type: PathType,
path: &str,
temp_file: rocket::fs::TempFile<'_>,
overwrite: bool,
) -> Result<(), crate::Error> {
use futures::AsyncWriteExt as _;
use tokio_util::compat::TokioAsyncReadCompatExt as _;
let operator = CONFIG.opendal_operator_for_path_type(path_type)?;
let mut read_stream = temp_file.open().await?.compat();
let mut writer = operator.writer_with(path).if_not_exists(!overwrite).await?.into_futures_async_write();
futures::io::copy(&mut read_stream, &mut writer).await?;
writer.close().await?;
Ok(())
}
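Because `save_temp_file` resolves the OpenDAL operator from the configured `PathType`, callers only name a logical storage area and a relative path; with `overwrite = false` the writer is created with `.if_not_exists(true)`, so the write fails if an object already exists at that path. A hedged usage sketch follows (the variant name `PathType::Attachments`, the path, and the surrounding function are illustrative):

use rocket::fs::TempFile;

// Illustrative caller, not part of the diff: persist an uploaded file via
// the helper above, refusing to clobber an existing object.
async fn store_attachment(temp_file: TempFile<'_>) -> Result<(), crate::Error> {
    // `PathType::Attachments` is an assumed variant name.
    save_temp_file(PathType::Attachments, "attachments/example-uuid/file.bin", temp_file, false).await
}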
/// These are some tests to check that the implementations match.
/// The IPv4 addresses can all be checked in 30 seconds or so, and they are correct as of nightly 2023-07-17.
/// The IPv6 space can't be checked in a reasonable time, so we check over a hundred billion random addresses; so far they have all been correct.