Compare commits

...

16 Commits

Author SHA1 Message Date
Mathijs van Veluw
3b48e6e903 Fix v2025.6.x clients and newer to delete items (#6004) 2025-07-01 10:33:22 +02:00
Chase Douglas
6b9333b33e Use existing reqwest client for AWS S3 requests (#5917)
This removes a lot of duplicate client dependency bloat for roughly
equivalent functionality.

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2025-06-30 22:57:00 +02:00
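A minimal sketch of the approach this commit takes (names and setup hypothetical): one lazily-initialized reqwest::Client shared by all subsystems, rather than each dependency building and configuring its own HTTP stack.

```rust
use once_cell::sync::Lazy;
use reqwest::Client;

// One shared client: the connection pool, TLS configuration and default
// headers are built once and reused by every subsystem (favicons, HIBP,
// version check, and now S3 requests as well).
static HTTP_CLIENT: Lazy<Client> = Lazy::new(|| {
    Client::builder()
        .user_agent("vaultwarden")
        .build()
        .expect("Failed to build shared reqwest client")
});

async fn fetch_text(url: &str) -> reqwest::Result<String> {
    // Callers borrow the shared client instead of constructing their own.
    HTTP_CLIENT.get(url).send().await?.text().await
}
```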
Daniel García
a545636ee5 Update flags version and enable manual error reporting (#5994) 2025-06-27 21:39:38 +02:00
Mathijs van Veluw
f125d5f1a1 Misc Updates and favicon fixes (#5993)
- Updated crates
- Switched to rustls instead of native-tls
  Some dependencies were already using rustls by default or offered no other option.
  By removing native-tls we also have just one way of working here.

Updated favicon fetching, which is now able to fetch more icons.
- Use rustls instead of native-tls
  This seems to work better, probably because of TLS sniffing
- Use a different user-agent and add several other headers
- Added SVG support. SVG images will be sanitized before being stored or presented.
  Also, a special CSP for images will be sent to prevent scripts etc. from running out of SVG images.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-27 21:20:36 +02:00
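The SVG sanitization relies on the svg-hush crate this changeset adds; a hedged sketch of what that step could look like (helper name and error handling are illustrative, not Vaultwarden's exact code):

```rust
use svg_hush::{data_url_filter, Filter};

// Strip scripts, event handlers and external references from an untrusted
// SVG before it is cached or served.
fn sanitize_svg(raw: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let mut filter = Filter::new();
    // Permit only standard raster image data: URLs embedded in the SVG.
    filter.set_data_url_filter(data_url_filter::allow_standard_images);
    let mut sanitized = Vec::new();
    filter.filter(raw, &mut sanitized)?;
    Ok(sanitized)
}
```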
Mathijs van Veluw
ad75ce281e Fix an issue with yubico keys not validating (#5991)
* Fix an issue with yubico keys not validating

When adding or updating Yubico OTP keys there were some issues with the validation.
It looks like the web-vault sends all keys, not only the filled-in ones, which triggered a check on empty keys.
Also, we should only return filled-in keys, not the empty ones.

Fixes #5986

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use more idiomatic code - take 2

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-26 21:46:56 +02:00
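A hedged sketch of the resulting validation rule (helper name hypothetical): skip the empty slots the web-vault submits, and reduce every filled-in key to its 12-character public ID.

```rust
// Hypothetical helper mirroring the fix.
fn filled_in_yubikey_ids(yubikeys: Vec<String>) -> Vec<String> {
    yubikeys
        .into_iter()
        .filter(|key| !key.is_empty())
        // `get(..12)` slices safely: anything shorter than 12 characters is
        // dropped instead of panicking the way `&key[..12]` would.
        .filter_map(|key| key.get(..12).map(str::to_owned))
        .collect()
}
```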
Stefan Melmuk
9059437c35 fix account recovery withdrawal (#5968)
Since `web-v2025.4.0` the client sends `""` instead of `null`, so we
also have to check whether the `reset_password_key` is empty.
2025-06-17 18:55:11 +02:00
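A minimal sketch of the normalization this implies, assuming the key arrives as an Option<String>:

```rust
// Treat both a missing key (`null`) and the empty string that
// web-v2025.4.0+ sends as a withdrawal of account recovery.
fn normalize_reset_password_key(key: Option<String>) -> Option<String> {
    match key {
        Some(k) if k.is_empty() => None,
        other => other,
    }
}
```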
Stefan Melmuk
c84db0daca allow signup for invited users (#5967)
Invited users (e.g. via the /admin panel or an org invite) should be able to
register when email is disabled.
2025-06-17 11:15:36 +02:00
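A hypothetical predicate illustrating the rule; the actual checks in Vaultwarden involve more state than shown here:

```rust
// Registration is permitted when open signups are on, or when the address
// was already invited and no verification mail can be sent because email
// is disabled.
fn may_register(signups_allowed: bool, mail_enabled: bool, is_invited: bool) -> bool {
    signups_allowed || (is_invited && !mail_enabled)
}
```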
Mathijs van Veluw
72adc239f5 Update crates and web-vault (#5955)
- Updated crates
- Updated web-vault to v2025.6.0

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-06-15 01:19:53 +02:00
Nick Grimshaw
34ebeeca76 Minor fixes to copy in .env.template (#5928) 2025-06-15 01:19:08 +02:00
Stefan Melmuk
0469d9ba4c make css for login-page position independent (#5906)
* make css for login-page position independent

Starting with v2025.5.1 the login page will have custom classes, so the
fields to be disabled can be targeted specifically without risking
side effects

* hide buttons after cancelling login
2025-06-14 19:31:51 +02:00
Daniel
eaa6ad06ed Update Alpine to version 3.22 (#5938) 2025-06-14 19:30:19 +02:00
Timshel
0d3f283c37 Fix and improvements to policies (#5923) 2025-06-02 21:47:12 +02:00
Mathijs van Veluw
51a1d641c5 Some small admin updates (#5909)
- Some tweaks to the diagnostics layout
- Always show the latest web-vault version, also when running in a container
  Users can override the web-vault folder and forget about it
- Also updated to the latest crates.

Kinda fixes #5908

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-30 16:56:29 +02:00
Chase Douglas
90f7e5ff80 Abstract persistent files through Apache OpenDAL (#5626)
* Abstract file access through Apache OpenDAL

* Add AWS S3 support via OpenDAL for data files

* PR improvements

* Additional PR improvements

* Config setting comments for local/remote data locations
2025-05-29 21:40:58 +02:00
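What the abstraction buys, sketched against the opendal Operator API (paths and data illustrative): callers address files by relative path, and the configured backend decides where the bytes actually live.

```rust
use opendal::Operator;

// Write and read back a blob through whatever backend the Operator wraps;
// the caller only ever sees relative paths.
async fn roundtrip(op: &Operator) -> opendal::Result<()> {
    op.write("attachments/demo/file.bin", vec![1u8, 2, 3]).await?;
    let buf = op.read("attachments/demo/file.bin").await?;
    assert_eq!(buf.to_vec(), vec![1, 2, 3]);
    op.delete("attachments/demo/file.bin").await?;
    Ok(())
}
```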
Stefan Melmuk
200999c94e fix css for locked screen (#5905)
By making the selector more specific to the login page,
the logout button on the locked screen should be visible again.
2025-05-29 08:08:32 +02:00
Stefan Melmuk
d363e647e9 fix css to hide login with passkey (#5890) 2025-05-27 06:31:48 +02:00
35 changed files with 1916 additions and 675 deletions

View File

@@ -15,6 +15,14 @@
#################### ####################
## Main data folder ## Main data folder
## This can be a path to local folder or a path to an external location
## depending on features enabled at build time. Possible external locations:
##
## - AWS S3 Bucket (via `s3` feature): s3://bucket-name/path/to/folder
##
## When using an external location, make sure to set TMP_FOLDER,
## TEMPLATES_FOLDER, and DATABASE_URL to local paths and/or a remote database
## location.
# DATA_FOLDER=data # DATA_FOLDER=data
## Individual folders, these override %DATA_FOLDER% ## Individual folders, these override %DATA_FOLDER%
@@ -22,10 +30,13 @@
# ICON_CACHE_FOLDER=data/icon_cache # ICON_CACHE_FOLDER=data/icon_cache
# ATTACHMENTS_FOLDER=data/attachments # ATTACHMENTS_FOLDER=data/attachments
# SENDS_FOLDER=data/sends # SENDS_FOLDER=data/sends
## Temporary folder used for storing temporary file uploads
## Must be a local path.
# TMP_FOLDER=data/tmp # TMP_FOLDER=data/tmp
## Templates data folder, by default uses embedded templates ## HTML template overrides data folder
## Check source code to see the format ## Must be a local path.
# TEMPLATES_FOLDER=data/templates # TEMPLATES_FOLDER=data/templates
## Automatically reload the templates for every request, slow, use only for development ## Automatically reload the templates for every request, slow, use only for development
# RELOAD_TEMPLATES=false # RELOAD_TEMPLATES=false
@@ -39,7 +50,9 @@
######################### #########################
## Database URL ## Database URL
## When using SQLite, this is the path to the DB file, default to %DATA_FOLDER%/db.sqlite3 ## When using SQLite, this is the path to the DB file, and it defaults to
## %DATA_FOLDER%/db.sqlite3. If DATA_FOLDER is set to an external location, this
## must be set to a local sqlite3 file path.
# DATABASE_URL=data/db.sqlite3 # DATABASE_URL=data/db.sqlite3
## When using MySQL, specify an appropriate connection URI. ## When using MySQL, specify an appropriate connection URI.
## Details: https://docs.diesel.rs/2.1.x/diesel/mysql/struct.MysqlConnection.html ## Details: https://docs.diesel.rs/2.1.x/diesel/mysql/struct.MysqlConnection.html
@@ -117,7 +130,7 @@
## and are always in terms of UTC time (regardless of your local time zone settings). ## and are always in terms of UTC time (regardless of your local time zone settings).
## ##
## The schedule format is a bit different from crontab as crontab does not contains seconds. ## The schedule format is a bit different from crontab as crontab does not contains seconds.
## You can test the the format here: https://crontab.guru, but remove the first digit! ## You can test the format here: https://crontab.guru, but remove the first digit!
## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK ## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK
## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri" ## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri"
## "0 30 * * * * " ## "0 30 * * * * "
@@ -260,7 +273,7 @@
## A comma-separated list means only those users can create orgs: ## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com # ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Invitations org admins to invite users, even when signups are disabled ## Allows org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true # INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization ## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden # INVITATION_ORG_NAME=Vaultwarden
@@ -328,16 +341,16 @@
## Icon download timeout ## Icon download timeout
## Configure the timeout value when downloading the favicons. ## Configure the timeout value when downloading the favicons.
## The default is 10 seconds, but this could be to low on slower network connections ## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10 # ICON_DOWNLOAD_TIMEOUT=10
## Block HTTP domains/IPs by Regex ## Block HTTP domains/IPs by Regex
## Any domains or IPs that match this regex won't be fetched by the internal HTTP client. ## Any domains or IPs that match this regex won't be fetched by the internal HTTP client.
## Useful to hide other servers in the local network. Check the WIKI for more details ## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex withing single quotes! ## NOTE: Always enclose this regex within single quotes!
# HTTP_REQUEST_BLOCK_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$' # HTTP_REQUEST_BLOCK_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Enabling this will cause the internal HTTP client to refuse to connect to any non global IP address. ## Enabling this will cause the internal HTTP client to refuse to connect to any non-global IP address.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block ## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# HTTP_REQUEST_BLOCK_NON_GLOBAL_IPS=true # HTTP_REQUEST_BLOCK_NON_GLOBAL_IPS=true
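A hedged sketch of how a DATA_FOLDER value like the one documented above could be dispatched to an OpenDAL backend; the real logic lives in CONFIG.opendal_operator_for_path_type() (seen later in this diff), and builder details may differ across opendal versions.

```rust
use opendal::{services, Operator};

fn operator_for(data_folder: &str) -> opendal::Result<Operator> {
    if let Some(rest) = data_folder.strip_prefix("s3://") {
        // s3://bucket-name/path/to/folder
        let (bucket, root) = rest.split_once('/').unwrap_or((rest, ""));
        let builder = services::S3::default().bucket(bucket).root(root);
        Ok(Operator::new(builder)?.finish())
    } else {
        // Anything else is treated as a local filesystem root.
        let builder = services::Fs::default().root(data_folder);
        Ok(Operator::new(builder)?.finish())
    }
}
```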

Cargo.lock (generated): 1451 changed lines

File diff suppressed because it is too large

View File

@@ -6,7 +6,7 @@ name = "vaultwarden"
version = "1.0.0" version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"] authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021" edition = "2021"
rust-version = "1.85.0" rust-version = "1.86.0"
resolver = "2" resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden" repository = "https://github.com/dani-garcia/vaultwarden"
@@ -32,6 +32,7 @@ enable_mimalloc = ["dep:mimalloc"]
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile # You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run. # if you want to turn off the logging for a specific run.
query_logger = ["dep:diesel_logger"] query_logger = ["dep:diesel_logger"]
s3 = ["opendal/services-s3", "dep:aws-config", "dep:aws-credential-types", "dep:aws-smithy-runtime-api", "dep:anyhow", "dep:http", "dep:reqsign"]
# Enable unstable features, requires nightly # Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support # Currently only used to enable rusts official ip support
@@ -73,13 +74,14 @@ dashmap = "6.1.0"
# Async futures # Async futures
futures = "0.3.31" futures = "0.3.31"
tokio = { version = "1.45.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] } tokio = { version = "1.45.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio-util = { version = "0.7.15", features = ["compat"]}
# A generic serialization/deserialization framework # A generic serialization/deserialization framework
serde = { version = "1.0.219", features = ["derive"] } serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140" serde_json = "1.0.140"
# A safe, extensible ORM and Query builder # A safe, extensible ORM and Query builder
diesel = { version = "2.2.10", features = ["chrono", "r2d2", "numeric"] } diesel = { version = "2.2.11", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0" diesel_migrations = "2.2.0"
diesel_logger = { version = "0.4.0", optional = true } diesel_logger = { version = "0.4.0", optional = true }
@@ -124,7 +126,7 @@ webauthn-rs = "0.3.2"
url = "2.5.4" url = "2.5.4"
# Email libraries # Email libraries
lettre = { version = "0.11.16", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false } lettre = { version = "0.11.17", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "hostname", "tracing", "tokio1-rustls", "ring", "rustls-native-certs"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.9" email_address = "0.2.9"
@@ -132,7 +134,7 @@ email_address = "0.2.9"
handlebars = { version = "6.3.2", features = ["dir_source"] } handlebars = { version = "6.3.2", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API) # HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.15", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] } reqwest = { version = "0.12.20", features = ["rustls-tls", "rustls-tls-native-roots", "stream", "json", "deflate", "gzip", "brotli", "zstd", "socks", "cookies", "charset", "http2", "system-proxy"], default-features = false}
hickory-resolver = "0.25.2" hickory-resolver = "0.25.2"
# Favicon extraction libraries # Favicon extraction libraries
@@ -140,6 +142,7 @@ html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false } regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1" data-url = "0.3.1"
bytes = "1.10.1" bytes = "1.10.1"
svg-hush = "0.9.5"
# Cache function results (Used for version check and favicon fetching) # Cache function results (Used for version check and favicon fetching)
cached = { version = "0.55.1", features = ["async"] } cached = { version = "0.55.1", features = ["async"] }
@@ -149,7 +152,7 @@ cookie = "0.18.1"
cookie_store = "0.21.1" cookie_store = "0.21.1"
# Used by U2F, JWT and PostgreSQL # Used by U2F, JWT and PostgreSQL
openssl = "0.10.72" openssl = "0.10.73"
# CLI argument parsing # CLI argument parsing
pico-args = "0.5.0" pico-args = "0.5.0"
@@ -163,9 +166,9 @@ semver = "1.0.26"
# Allow overriding the default memory allocator # Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow # Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.46", features = ["secure"], default-features = false, optional = true } mimalloc = { version = "0.1.47", features = ["secure"], default-features = false, optional = true }
which = "7.0.3" which = "8.0.0"
# Argon2 library with support for the PHC format # Argon2 library with support for the PHC format
argon2 = "0.5.3" argon2 = "0.5.3"
@@ -176,6 +179,17 @@ rpassword = "7.4.0"
# Loading a dynamic CSS Stylesheet # Loading a dynamic CSS Stylesheet
grass_compiler = { version = "0.13.4", default-features = false } grass_compiler = { version = "0.13.4", default-features = false }
# File are accessed through Apache OpenDAL
opendal = { version = "0.53.3", features = ["services-fs"], default-features = false }
# For retrieving AWS credentials, including temporary SSO credentials
anyhow = { version = "1.0.98", optional = true }
aws-config = { version = "1.8.0", features = ["behavior-version-latest", "rt-tokio", "credentials-process", "sso"], default-features = false, optional = true }
aws-credential-types = { version = "1.2.3", optional = true }
aws-smithy-runtime-api = { version = "1.8.1", optional = true }
http = { version = "1.3.1", optional = true }
reqsign = { version = "0.16.5", optional = true }
# Strip debuginfo from the release builds # Strip debuginfo from the release builds
# The debug symbols are to provide better panic traces # The debug symbols are to provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations # Also enable fat LTO and use 1 codegen unit for optimizations
@@ -265,7 +279,6 @@ macro_use_imports = "deny"
manual_assert = "deny" manual_assert = "deny"
manual_instant_elapsed = "deny" manual_instant_elapsed = "deny"
manual_string_new = "deny" manual_string_new = "deny"
match_on_vec_items = "deny"
match_wildcard_for_single_variants = "deny" match_wildcard_for_single_variants = "deny"
mem_forget = "deny" mem_forget = "deny"
needless_continue = "deny" needless_continue = "deny"

View File

@@ -11,6 +11,8 @@ fn main() {
println!("cargo:rustc-cfg=postgresql"); println!("cargo:rustc-cfg=postgresql");
#[cfg(feature = "query_logger")] #[cfg(feature = "query_logger")]
println!("cargo:rustc-cfg=query_logger"); println!("cargo:rustc-cfg=query_logger");
#[cfg(feature = "s3")]
println!("cargo:rustc-cfg=s3");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))] #[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!( compile_error!(
@@ -23,6 +25,7 @@ fn main() {
println!("cargo::rustc-check-cfg=cfg(mysql)"); println!("cargo::rustc-check-cfg=cfg(mysql)");
println!("cargo::rustc-check-cfg=cfg(postgresql)"); println!("cargo::rustc-check-cfg=cfg(postgresql)");
println!("cargo::rustc-check-cfg=cfg(query_logger)"); println!("cargo::rustc-check-cfg=cfg(query_logger)");
println!("cargo::rustc-check-cfg=cfg(s3)");
// Rerun when these paths are changed. // Rerun when these paths are changed.
// Someone could have checked-out a tag or specific commit, but no other files changed. // Someone could have checked-out a tag or specific commit, but no other files changed.
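The cfg flag emitted here lets source files gate S3-only code paths at compile time; a minimal illustration:

```rust
// Gating S3-only code on the cfg flag emitted by build.rs above; compiled
// only when the `s3` cargo feature is enabled.
#[cfg(s3)]
fn init_s3_support() {
    // S3-specific setup would live here.
}

#[cfg(not(s3))]
fn init_s3_support() {
    // No-op stub for builds without the `s3` feature.
}
```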

View File

@@ -1,13 +1,13 @@
--- ---
vault_version: "v2025.5.0" vault_version: "v2025.6.0"
vault_image_digest: "sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e" vault_image_digest: "sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c"
# Cross Compile Docker Helper Scripts v1.6.1 # Cross Compile Docker Helper Scripts v1.6.1
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts # We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags # https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894" xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
rust_version: 1.87.0 # Rust version to be used rust_version: 1.88.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used debian_version: bookworm # Debian release name to be used
alpine_version: "3.21" # Alpine version to be used alpine_version: "3.22" # Alpine version to be used
# For which platforms/architectures will we try to build images # For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"] platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch # Determine the build images per OS/Arch

View File

@@ -19,23 +19,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags, # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to. # click the tag name to view the digest of the image it currently points to.
# - From the command line: # - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0 # $ docker pull docker.io/vaultwarden/web-vault:v2025.6.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0 # $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.6.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e] # [docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c]
# #
# - Conversely, to get the tag name from the digest: # - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e # $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c
# [docker.io/vaultwarden/web-vault:v2025.5.0] # [docker.io/vaultwarden/web-vault:v2025.6.0]
# #
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c AS vault
########################## ALPINE BUILD IMAGES ########################## ########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64 ## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used ## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.87.0 AS build_amd64 FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.88.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.87.0 AS build_arm64 FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.88.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.87.0 AS build_armv7 FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.88.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.87.0 AS build_armv6 FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.88.0 AS build_armv6
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006 # hadolint ignore=DL3006
@@ -127,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*' # To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
# #
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742 # We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.21 FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.22
ENV ROCKET_PROFILE="release" \ ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \ ROCKET_ADDRESS=0.0.0.0 \

View File

@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags, # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to. # click the tag name to view the digest of the image it currently points to.
# - From the command line: # - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0 # $ docker pull docker.io/vaultwarden/web-vault:v2025.6.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0 # $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.6.0
# [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e] # [docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c]
# #
# - Conversely, to get the tag name from the digest: # - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e # $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c
# [docker.io/vaultwarden/web-vault:v2025.5.0] # [docker.io/vaultwarden/web-vault:v2025.6.0]
# #
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:494be10bd99d9d05c7bec13dad71ad99102ea920de9a5d3587529709a64fb42c AS vault
########################## Cross Compile Docker Helper Scripts ########################## ########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts ## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bd
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006 # hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.87.0-slim-bookworm AS build FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.88.0-slim-bookworm AS build
COPY --from=xx / / COPY --from=xx / /
ARG TARGETARCH ARG TARGETARCH
ARG TARGETVARIANT ARG TARGETVARIANT

View File

@@ -10,7 +10,7 @@ proc-macro = true
[dependencies] [dependencies]
quote = "1.0.40" quote = "1.0.40"
syn = "2.0.101" syn = "2.0.104"
[lints] [lints]
workspace = true workspace = true

View File

@@ -1,4 +1,4 @@
[toolchain] [toolchain]
channel = "1.87.0" channel = "1.88.0"
components = [ "rustfmt", "clippy" ] components = [ "rustfmt", "clippy" ]
profile = "minimal" profile = "minimal"

View File

@@ -614,7 +614,7 @@ use cached::proc_macro::cached;
/// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit /// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit
/// Any cache will be lost if Vaultwarden is restarted /// Any cache will be lost if Vaultwarden is restarted
#[cached(time = 600, sync_writes = "default")] #[cached(time = 600, sync_writes = "default")]
async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) { async fn get_release_info(has_http_access: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway. // If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access { if has_http_access {
( (
@@ -633,17 +633,11 @@ async fn get_release_info(has_http_access: bool, running_within_container: bool)
}, },
// Do not fetch the web-vault version when running within a container // Do not fetch the web-vault version when running within a container
// The web-vault version is embedded within the container it self, and should not be updated manually // The web-vault version is embedded within the container it self, and should not be updated manually
if running_within_container { match get_json_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest")
"-".to_string()
} else {
match get_json_api::<GitRelease>(
"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
)
.await .await
{ {
Ok(r) => r.tag_name.trim_start_matches('v').to_string(), Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(), _ => "-".to_string(),
}
}, },
) )
} else { } else {
@@ -689,8 +683,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
_ => "Unable to resolve domain name.".to_string(), _ => "Unable to resolve domain name.".to_string(),
}; };
let (latest_release, latest_commit, latest_web_build) = let (latest_release, latest_commit, latest_web_build) = get_release_info(has_http_access).await;
get_release_info(has_http_access, running_within_container).await;
let ip_header_name = &ip_header.0.unwrap_or_default(); let ip_header_name = &ip_header.0.unwrap_or_default();
@@ -753,17 +746,17 @@ fn get_diagnostics_http(code: u16, _token: AdminToken) -> EmptyResult {
} }
#[post("/config", format = "application/json", data = "<data>")] #[post("/config", format = "application/json", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult { async fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner(); let data: ConfigBuilder = data.into_inner();
if let Err(e) = CONFIG.update_config(data, true) { if let Err(e) = CONFIG.update_config(data, true).await {
err!(format!("Unable to save config: {e:?}")) err!(format!("Unable to save config: {e:?}"))
} }
Ok(()) Ok(())
} }
#[post("/config/delete", format = "application/json")] #[post("/config/delete", format = "application/json")]
fn delete_config(_token: AdminToken) -> EmptyResult { async fn delete_config(_token: AdminToken) -> EmptyResult {
if let Err(e) = CONFIG.delete_user_config() { if let Err(e) = CONFIG.delete_user_config().await {
err!(format!("Unable to delete config: {e:?}")) err!(format!("Unable to delete config: {e:?}"))
} }
Ok(()) Ok(())
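For context on the #[cached(time = 600, sync_writes = "default")] attribute kept on get_release_info above, a self-contained sketch of the cached crate's memoization (function body illustrative):

```rust
use cached::proc_macro::cached;

// Results are memoized for 600 seconds; within that window repeated calls
// return the cached value instead of re-running the body, and
// `sync_writes = "default"` serializes concurrent refreshes so the body
// runs once per expiry rather than once per racing caller.
#[cached(time = 600, sync_writes = "default")]
async fn latest_release_tag() -> String {
    // The real get_release_info() queries the GitHub releases API here.
    "1.34.0".to_string()
}
```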

View File

@@ -8,8 +8,8 @@ use serde_json::Value;
use crate::{ use crate::{
api::{ api::{
core::{log_user_event, two_factor::email}, core::{log_user_event, two_factor::email},
register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult, Notify, master_password_policy, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
PasswordOrOtpData, UpdateType, Notify, PasswordOrOtpData, UpdateType,
}, },
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers}, auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto, crypto,
@@ -1068,7 +1068,7 @@ struct SecretVerificationRequest {
} }
#[post("/accounts/verify-password", data = "<data>")] #[post("/accounts/verify-password", data = "<data>")]
fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers) -> EmptyResult { async fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
let data: SecretVerificationRequest = data.into_inner(); let data: SecretVerificationRequest = data.into_inner();
let user = headers.user; let user = headers.user;
@@ -1076,7 +1076,7 @@ fn verify_password(data: Json<SecretVerificationRequest>, headers: Headers) -> E
err!("Invalid password") err!("Invalid password")
} }
Ok(()) Ok(Json(master_password_policy(&user, &conn).await))
} }
async fn _api_key(data: Json<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult { async fn _api_key(data: Json<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult {

View File

@@ -11,10 +11,11 @@ use rocket::{
use serde_json::Value; use serde_json::Value;
use crate::auth::ClientVersion; use crate::auth::ClientVersion;
use crate::util::NumberOrString; use crate::util::{save_temp_file, NumberOrString};
use crate::{ use crate::{
api::{self, core::log_event, EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType}, api::{self, core::log_event, EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType},
auth::Headers, auth::Headers,
config::PathType,
crypto, crypto,
db::{models::*, DbConn, DbPool}, db::{models::*, DbConn, DbPool},
CONFIG, CONFIG,
@@ -105,12 +106,7 @@ struct SyncData {
} }
#[get("/sync?<data..>")] #[get("/sync?<data..>")]
async fn sync( async fn sync(data: SyncData, headers: Headers, client_version: Option<ClientVersion>, mut conn: DbConn) -> JsonResult {
data: SyncData,
headers: Headers,
client_version: Option<ClientVersion>,
mut conn: DbConn,
) -> Json<Value> {
let user_json = headers.user.to_json(&mut conn).await; let user_json = headers.user.to_json(&mut conn).await;
// Get all ciphers which are visible by the user // Get all ciphers which are visible by the user
@@ -134,7 +130,7 @@ async fn sync(
for c in ciphers { for c in ciphers {
ciphers_json.push( ciphers_json.push(
c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn) c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn)
.await, .await?,
); );
} }
@@ -159,7 +155,7 @@ async fn sync(
api::core::_get_eq_domains(headers, true).into_inner() api::core::_get_eq_domains(headers, true).into_inner()
}; };
Json(json!({ Ok(Json(json!({
"profile": user_json, "profile": user_json,
"folders": folders_json, "folders": folders_json,
"collections": collections_json, "collections": collections_json,
@@ -168,11 +164,11 @@ async fn sync(
"domains": domains_json, "domains": domains_json,
"sends": sends_json, "sends": sends_json,
"object": "sync" "object": "sync"
})) })))
} }
#[get("/ciphers")] #[get("/ciphers")]
async fn get_ciphers(headers: Headers, mut conn: DbConn) -> Json<Value> { async fn get_ciphers(headers: Headers, mut conn: DbConn) -> JsonResult {
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &mut conn).await; let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &mut conn).await;
let cipher_sync_data = CipherSyncData::new(&headers.user.uuid, CipherSyncType::User, &mut conn).await; let cipher_sync_data = CipherSyncData::new(&headers.user.uuid, CipherSyncType::User, &mut conn).await;
@@ -180,15 +176,15 @@ async fn get_ciphers(headers: Headers, mut conn: DbConn) -> Json<Value> {
for c in ciphers { for c in ciphers {
ciphers_json.push( ciphers_json.push(
c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn) c.to_json(&headers.host, &headers.user.uuid, Some(&cipher_sync_data), CipherSyncType::User, &mut conn)
.await, .await?,
); );
} }
Json(json!({ Ok(Json(json!({
"data": ciphers_json, "data": ciphers_json,
"object": "list", "object": "list",
"continuationToken": null "continuationToken": null
})) })))
} }
#[get("/ciphers/<cipher_id>")] #[get("/ciphers/<cipher_id>")]
@@ -201,7 +197,7 @@ async fn get_cipher(cipher_id: CipherId, headers: Headers, mut conn: DbConn) ->
err!("Cipher is not owned by user") err!("Cipher is not owned by user")
} }
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
#[get("/ciphers/<cipher_id>/admin")] #[get("/ciphers/<cipher_id>/admin")]
@@ -339,7 +335,7 @@ async fn post_ciphers(data: Json<CipherData>, headers: Headers, mut conn: DbConn
let mut cipher = Cipher::new(data.r#type, data.name.clone()); let mut cipher = Cipher::new(data.r#type, data.name.clone());
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherCreate).await?; update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherCreate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
/// Enforces the personal ownership policy on user-owned ciphers, if applicable. /// Enforces the personal ownership policy on user-owned ciphers, if applicable.
@@ -676,7 +672,7 @@ async fn put_cipher(
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherUpdate).await?; update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherUpdate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
#[post("/ciphers/<cipher_id>/partial", data = "<data>")] #[post("/ciphers/<cipher_id>/partial", data = "<data>")]
@@ -714,7 +710,7 @@ async fn put_cipher_partial(
// Update favorite // Update favorite
cipher.set_favorite(Some(data.favorite), &headers.user.uuid, &mut conn).await?; cipher.set_favorite(Some(data.favorite), &headers.user.uuid, &mut conn).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
#[derive(Deserialize)] #[derive(Deserialize)]
@@ -825,7 +821,7 @@ async fn post_collections_update(
) )
.await; .await;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
#[put("/ciphers/<cipher_id>/collections-admin", data = "<data>")] #[put("/ciphers/<cipher_id>/collections-admin", data = "<data>")]
@@ -1030,7 +1026,7 @@ async fn share_cipher_by_uuid(
update_cipher_from_data(&mut cipher, data.cipher, headers, Some(shared_to_collections), conn, nt, ut).await?; update_cipher_from_data(&mut cipher, data.cipher, headers, Some(shared_to_collections), conn, nt, ut).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
} }
/// v2 API for downloading an attachment. This just redirects the client to /// v2 API for downloading an attachment. This just redirects the client to
@@ -1055,7 +1051,7 @@ async fn get_attachment(
} }
match Attachment::find_by_id(&attachment_id, &mut conn).await { match Attachment::find_by_id(&attachment_id, &mut conn).await {
Some(attachment) if cipher_id == attachment.cipher_uuid => Ok(Json(attachment.to_json(&headers.host))), Some(attachment) if cipher_id == attachment.cipher_uuid => Ok(Json(attachment.to_json(&headers.host).await?)),
Some(_) => err!("Attachment doesn't belong to cipher"), Some(_) => err!("Attachment doesn't belong to cipher"),
None => err!("Attachment doesn't exist"), None => err!("Attachment doesn't exist"),
} }
@@ -1116,7 +1112,7 @@ async fn post_attachment_v2(
"attachmentId": attachment_id, "attachmentId": attachment_id,
"url": url, "url": url,
"fileUploadType": FileUploadType::Direct as i32, "fileUploadType": FileUploadType::Direct as i32,
response_key: cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await, response_key: cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?,
}))) })))
} }
@@ -1142,7 +1138,7 @@ async fn save_attachment(
mut conn: DbConn, mut conn: DbConn,
nt: Notify<'_>, nt: Notify<'_>,
) -> Result<(Cipher, DbConn), crate::error::Error> { ) -> Result<(Cipher, DbConn), crate::error::Error> {
let mut data = data.into_inner(); let data = data.into_inner();
let Some(size) = data.data.len().to_i64() else { let Some(size) = data.data.len().to_i64() else {
err!("Attachment data size overflow"); err!("Attachment data size overflow");
@@ -1269,13 +1265,7 @@ async fn save_attachment(
attachment.save(&mut conn).await.expect("Error saving attachment"); attachment.save(&mut conn).await.expect("Error saving attachment");
} }
let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_id.as_ref()); save_temp_file(PathType::Attachments, &format!("{cipher_id}/{file_id}"), data.data, true).await?;
let file_path = folder_path.join(file_id.as_ref());
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
nt.send_cipher_update( nt.send_cipher_update(
UpdateType::SyncCipherUpdate, UpdateType::SyncCipherUpdate,
@@ -1342,7 +1332,7 @@ async fn post_attachment(
let (cipher, mut conn) = save_attachment(attachment, cipher_id, data, &headers, conn, nt).await?; let (cipher, mut conn) = save_attachment(attachment, cipher_id, data, &headers, conn, nt).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await?))
} }
#[post("/ciphers/<cipher_id>/attachment-admin", format = "multipart/form-data", data = "<data>")] #[post("/ciphers/<cipher_id>/attachment-admin", format = "multipart/form-data", data = "<data>")]
@@ -1786,7 +1776,7 @@ async fn _restore_cipher_by_uuid(
.await; .await;
} }
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await)) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
} }
async fn _restore_multiple_ciphers( async fn _restore_multiple_ciphers(
@@ -1859,7 +1849,7 @@ async fn _delete_cipher_attachment_by_id(
) )
.await; .await;
} }
let cipher_json = cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await; let cipher_json = cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?;
Ok(Json(json!({"cipher":cipher_json}))) Ok(Json(json!({"cipher":cipher_json})))
} }

View File

@@ -582,7 +582,7 @@ async fn view_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut
CipherSyncType::User, CipherSyncType::User,
&mut conn, &mut conn,
) )
.await, .await?,
); );
} }

View File

@@ -200,15 +200,17 @@ fn get_api_webauthn(_headers: Headers) -> Json<Value> {
fn config() -> Json<Value> { fn config() -> Json<Value> {
let domain = crate::CONFIG.domain(); let domain = crate::CONFIG.domain();
// Official available feature flags can be found here: // Official available feature flags can be found here:
// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102 // Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10 // Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27 // Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7 // iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
let mut feature_states = let mut feature_states =
parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags()); parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
feature_states.insert("duo-redirect".to_string(), true); feature_states.insert("duo-redirect".to_string(), true);
feature_states.insert("email-verification".to_string(), true); feature_states.insert("email-verification".to_string(), true);
feature_states.insert("unauth-ui-refresh".to_string(), true); feature_states.insert("unauth-ui-refresh".to_string(), true);
feature_states.insert("enable-pm-flight-recorder".to_string(), true);
feature_states.insert("mobile-error-reporting".to_string(), true);
Json(json!({ Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns // Note: The clients use this version to handle backwards compatibility concerns
@@ -216,7 +218,7 @@ fn config() -> Json<Value> {
// We should make sure that we keep this updated when we support the new server features // We should make sure that we keep this updated when we support the new server features
// Version history: // Version history:
// - Individual cipher key encryption: 2024.2.0 // - Individual cipher key encryption: 2024.2.0
"version": "2025.4.0", "version": "2025.6.0",
"gitHash": option_env!("GIT_REV"), "gitHash": option_env!("GIT_REV"),
"server": { "server": {
"name": "Vaultwarden", "name": "Vaultwarden",

View File

@@ -917,21 +917,26 @@ async fn get_org_details(data: OrgIdData, headers: OrgMemberHeaders, mut conn: D
} }
Ok(Json(json!({ Ok(Json(json!({
"data": _get_org_details(&data.organization_id, &headers.host, &headers.user.uuid, &mut conn).await, "data": _get_org_details(&data.organization_id, &headers.host, &headers.user.uuid, &mut conn).await?,
"object": "list", "object": "list",
"continuationToken": null, "continuationToken": null,
}))) })))
} }
async fn _get_org_details(org_id: &OrganizationId, host: &str, user_id: &UserId, conn: &mut DbConn) -> Value { async fn _get_org_details(
org_id: &OrganizationId,
host: &str,
user_id: &UserId,
conn: &mut DbConn,
) -> Result<Value, crate::Error> {
let ciphers = Cipher::find_by_org(org_id, conn).await; let ciphers = Cipher::find_by_org(org_id, conn).await;
let cipher_sync_data = CipherSyncData::new(user_id, CipherSyncType::Organization, conn).await; let cipher_sync_data = CipherSyncData::new(user_id, CipherSyncType::Organization, conn).await;
let mut ciphers_json = Vec::with_capacity(ciphers.len()); let mut ciphers_json = Vec::with_capacity(ciphers.len());
for c in ciphers { for c in ciphers {
ciphers_json.push(c.to_json(host, user_id, Some(&cipher_sync_data), CipherSyncType::Organization, conn).await); ciphers_json.push(c.to_json(host, user_id, Some(&cipher_sync_data), CipherSyncType::Organization, conn).await?);
} }
json!(ciphers_json) Ok(json!(ciphers_json))
} }
#[derive(FromForm)] #[derive(FromForm)]
@@ -3329,13 +3334,17 @@ async fn put_reset_password_enrollment(
let reset_request = data.into_inner(); let reset_request = data.into_inner();
if reset_request.reset_password_key.is_none() let reset_password_key = match reset_request.reset_password_key {
&& OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await None => None,
{ Some(ref key) if key.is_empty() => None,
Some(key) => Some(key),
};
if reset_password_key.is_none() && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await {
err!("Reset password can't be withdrawn due to an enterprise policy"); err!("Reset password can't be withdrawn due to an enterprise policy");
} }
if reset_request.reset_password_key.is_some() { if reset_password_key.is_some() {
PasswordOrOtpData { PasswordOrOtpData {
master_password_hash: reset_request.master_password_hash, master_password_hash: reset_request.master_password_hash,
otp: reset_request.otp, otp: reset_request.otp,
@@ -3344,7 +3353,7 @@ async fn put_reset_password_enrollment(
.await?; .await?;
} }
member.reset_password_key = reset_request.reset_password_key; member.reset_password_key = reset_password_key;
member.save(&mut conn).await?; member.save(&mut conn).await?;
let log_id = if member.reset_password_key.is_some() { let log_id = if member.reset_password_key.is_some() {
@@ -3372,7 +3381,7 @@ async fn get_org_export(org_id: OrganizationId, headers: AdminHeaders, mut conn:
Ok(Json(json!({ Ok(Json(json!({
"collections": convert_json_key_lcase_first(_get_org_collections(&org_id, &mut conn).await), "collections": convert_json_key_lcase_first(_get_org_collections(&org_id, &mut conn).await),
"ciphers": convert_json_key_lcase_first(_get_org_details(&org_id, &headers.host, &headers.user.uuid, &mut conn).await), "ciphers": convert_json_key_lcase_first(_get_org_details(&org_id, &headers.host, &headers.user.uuid, &mut conn).await?),
}))) })))
} }

View File

@@ -1,4 +1,5 @@
use std::path::Path; use std::path::Path;
use std::time::Duration;
use chrono::{DateTime, TimeDelta, Utc}; use chrono::{DateTime, TimeDelta, Utc};
use num_traits::ToPrimitive; use num_traits::ToPrimitive;
@@ -12,8 +13,9 @@ use serde_json::Value;
use crate::{ use crate::{
api::{ApiResult, EmptyResult, JsonResult, Notify, UpdateType}, api::{ApiResult, EmptyResult, JsonResult, Notify, UpdateType},
auth::{ClientIp, Headers, Host}, auth::{ClientIp, Headers, Host},
config::PathType,
db::{models::*, DbConn, DbPool}, db::{models::*, DbConn, DbPool},
util::NumberOrString, util::{save_temp_file, NumberOrString},
CONFIG, CONFIG,
}; };
@@ -228,7 +230,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
let UploadData { let UploadData {
model, model,
mut data, data,
} = data.into_inner(); } = data.into_inner();
let model = model.into_inner(); let model = model.into_inner();
@@ -268,13 +270,8 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
} }
let file_id = crate::crypto::generate_send_file_id(); let file_id = crate::crypto::generate_send_file_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.persist_to(&file_path).await { save_temp_file(PathType::Sends, &format!("{}/{file_id}", send.uuid), data, true).await?;
data.move_copy_to(file_path).await?
}
let mut data_value: Value = serde_json::from_str(&send.data)?; let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() { if let Some(o) = data_value.as_object_mut() {
@@ -381,7 +378,7 @@ async fn post_send_file_v2_data(
) -> EmptyResult { ) -> EmptyResult {
enforce_disable_send_policy(&headers, &mut conn).await?; enforce_disable_send_policy(&headers, &mut conn).await?;
let mut data = data.into_inner(); let data = data.into_inner();
let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else { let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.", "Invalid send uuid or does not belong to user.") err!("Send not found. Unable to save the file.", "Invalid send uuid or does not belong to user.")
@@ -424,19 +421,9 @@ async fn post_send_file_v2_data(
err!("Send file size does not match.", format!("Expected a file size of {} got {size}", send_data.size)); err!("Send file size does not match.", format!("Expected a file size of {} got {size}", send_data.size));
} }
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_id); let file_path = format!("{send_id}/{file_id}");
let file_path = folder_path.join(file_id);
// Check if the file already exists, if that is the case do not overwrite it save_temp_file(PathType::Sends, &file_path, data.data, false).await?;
if tokio::fs::metadata(&file_path).await.is_ok() {
err!("Send file has already been uploaded.", format!("File {file_path:?} already exists"))
}
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
nt.send_send_update( nt.send_send_update(
UpdateType::SyncSendCreate, UpdateType::SyncSendCreate,
@@ -569,15 +556,26 @@ async fn post_access_file(
) )
.await; .await;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(Json(json!({ Ok(Json(json!({
"object": "send-fileDownload", "object": "send-fileDownload",
"id": file_id, "id": file_id,
"url": format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host) "url": download_url(&host, &send_id, &file_id).await?,
}))) })))
} }
async fn download_url(host: &Host, send_id: &SendId, file_id: &SendFileId) -> Result<String, crate::Error> {
let operator = CONFIG.opendal_operator_for_path_type(PathType::Sends)?;
if operator.info().scheme() == opendal::Scheme::Fs {
let token_claims = crate::auth::generate_send_claims(send_id, file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host))
} else {
Ok(operator.presign_read(&format!("{send_id}/{file_id}"), Duration::from_secs(5 * 60)).await?.uri().to_string())
}
}
#[get("/sends/<send_id>/<file_id>?<t>")] #[get("/sends/<send_id>/<file_id>?<t>")]
async fn download_send(send_id: SendId, file_id: SendFileId, t: &str) -> Option<NamedFile> { async fn download_send(send_id: SendId, file_id: SendFileId, t: &str) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(t) { if let Ok(claims) = crate::auth::decode_send(t) {

View File

@@ -261,7 +261,7 @@ pub(crate) async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiRes
} }
.map_res("Can't fetch Duo Keys")?; .map_res("Can't fetch Duo Keys")?;
Ok((data.ik, data.sk, CONFIG.get_duo_akey(), data.host)) Ok((data.ik, data.sk, CONFIG.get_duo_akey().await, data.host))
} }
pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult<(String, String)> { pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult<(String, String)> {

View File

@@ -145,15 +145,14 @@ async fn activate_yubikey(data: Json<EnableYubikeyData>, headers: Headers, mut c
// Ensure they are valid OTPs // Ensure they are valid OTPs
for yubikey in &yubikeys { for yubikey in &yubikeys {
if yubikey.len() == 12 { if yubikey.is_empty() || yubikey.len() == 12 {
// YubiKey ID
continue; continue;
} }
verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?; verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?;
} }
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect(); let yubikey_ids: Vec<String> = yubikeys.into_iter().filter_map(|x| x.get(..12).map(str::to_owned)).collect();
let yubikey_metadata = YubikeyMetadata { let yubikey_metadata = YubikeyMetadata {
keys: yubikey_ids, keys: yubikey_ids,

View File

@@ -14,14 +14,12 @@ use reqwest::{
Client, Response, Client, Response,
}; };
use rocket::{http::ContentType, response::Redirect, Route}; use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{ use svg_hush::{data_url_filter, Filter};
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
};
use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer}; use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer};
use crate::{ use crate::{
config::PathType,
error::Error, error::Error,
http_client::{get_reqwest_client_builder, should_block_address, CustomHttpClientError}, http_client::{get_reqwest_client_builder, should_block_address, CustomHttpClientError},
util::Cached, util::Cached,
@@ -38,11 +36,29 @@ pub fn routes() -> Vec<Route> {
 static CLIENT: Lazy<Client> = Lazy::new(|| {
     // Generate the default headers
     let mut default_headers = HeaderMap::new();
-    default_headers.insert(header::USER_AGENT, HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
-    default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
-    default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en,*;q=0.1"));
+    default_headers.insert(
+        header::USER_AGENT,
+        HeaderValue::from_static(
+            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
+        ),
+    );
+    default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"));
+    default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en-US,en;q=0.9"));
     default_headers.insert(header::CACHE_CONTROL, HeaderValue::from_static("no-cache"));
     default_headers.insert(header::PRAGMA, HeaderValue::from_static("no-cache"));
+    default_headers.insert(header::UPGRADE_INSECURE_REQUESTS, HeaderValue::from_static("1"));
+
+    default_headers.insert("Sec-Ch-Ua-Mobile", HeaderValue::from_static("?0"));
+    default_headers.insert("Sec-Ch-Ua-Platform", HeaderValue::from_static("Linux"));
+    default_headers.insert(
+        "Sec-Ch-Ua",
+        HeaderValue::from_static("\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"138\", \"Google Chrome\";v=\"138\""),
+    );
+
+    default_headers.insert("Sec-Fetch-Site", HeaderValue::from_static("none"));
+    default_headers.insert("Sec-Fetch-Mode", HeaderValue::from_static("navigate"));
+    default_headers.insert("Sec-Fetch-User", HeaderValue::from_static("?1"));
+    default_headers.insert("Sec-Fetch-Dest", HeaderValue::from_static("document"));

     // Generate the cookie store
     let cookie_store = Arc::new(Jar::default());
@@ -56,6 +72,7 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
         .pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
         .pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
         .default_headers(default_headers.clone())
+        .http1_title_case_headers()
         .build()
         .expect("Failed to build client")
 });
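Note: swapping the Links user agent for a full Chrome-style header set is what lets header-sniffing servers return their real icons. A minimal sketch of the same idea, assuming only the reqwest crate and a shortened header set (not the exact values above):

    use reqwest::header::{HeaderMap, HeaderValue, ACCEPT_LANGUAGE, USER_AGENT};

    fn browser_like_client() -> reqwest::Result<reqwest::Client> {
        let mut headers = HeaderMap::new();
        // Impersonate a mainstream browser so picky servers serve their normal pages
        headers.insert(USER_AGENT, HeaderValue::from_static("Mozilla/5.0 (X11; Linux x86_64) Chrome/138.0.0.0"));
        headers.insert(ACCEPT_LANGUAGE, HeaderValue::from_static("en-US,en;q=0.9"));
        reqwest::Client::builder()
            .default_headers(headers)
            // Title-cased HTTP/1.1 headers, matching what browsers send on the wire
            .http1_title_case_headers()
            .build()
    }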
@@ -158,7 +175,7 @@ fn is_valid_domain(domain: &str) -> bool {
 }

 async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
-    let path = format!("{}/{domain}.png", CONFIG.icon_cache_folder());
+    let path = format!("{domain}.png");

     // Check for expiration of negatively cached copy
     if icon_is_negcached(&path).await {
@@ -177,7 +194,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
     // Get the icon, or None in case of error
     match download_icon(domain).await {
         Ok((icon, icon_type)) => {
-            save_icon(&path, &icon).await;
+            save_icon(&path, icon.to_vec()).await;
             Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
         }
         Err(e) => {
@@ -190,7 +207,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
             warn!("Unable to download icon: {e:?}");
             let miss_indicator = path + ".miss";
-            save_icon(&miss_indicator, &[]).await;
+            save_icon(&miss_indicator, vec![]).await;
             None
         }
     }
@@ -203,11 +220,9 @@ async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
     }

     // Try to read the cached icon, and return it if it exists
-    if let Ok(mut f) = File::open(path).await {
-        let mut buffer = Vec::new();
-
-        if f.read_to_end(&mut buffer).await.is_ok() {
-            return Some(buffer);
-        }
+    if let Ok(operator) = CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
+        if let Ok(buf) = operator.read(path).await {
+            return Some(buf.to_vec());
+        }
     }

@@ -215,9 +230,11 @@ async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
 }

 async fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
-    let meta = symlink_metadata(path).await?;
-    let modified = meta.modified()?;
-    let age = SystemTime::now().duration_since(modified)?;
+    let operator = CONFIG.opendal_operator_for_path_type(PathType::IconCache)?;
+    let meta = operator.stat(path).await?;
+    let modified =
+        meta.last_modified().ok_or_else(|| std::io::Error::other(format!("No last modified time for `{path}`")))?;
+    let age = SystemTime::now().duration_since(modified.into())?;

     Ok(ttl > 0 && ttl <= age.as_secs())
 }
@@ -229,8 +246,13 @@ async fn icon_is_negcached(path: &str) -> bool {
     match expired {
         // No longer negatively cached, drop the marker
         Ok(true) => {
-            if let Err(e) = remove_file(&miss_indicator).await {
-                error!("Could not remove negative cache indicator for icon {path:?}: {e:?}");
+            match CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
+                Ok(operator) => {
+                    if let Err(e) = operator.delete(&miss_indicator).await {
+                        error!("Could not remove negative cache indicator for icon {path:?}: {e:?}");
+                    }
+                }
+                Err(e) => error!("Could not remove negative cache indicator for icon {path:?}: {e:?}"),
             }
             false
         }
@@ -559,26 +581,46 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
     if buffer.is_empty() {
         err_silent!("Empty response or unable find a valid icon", domain);
+    } else if icon_type == Some("svg+xml") {
+        let mut svg_filter = Filter::new();
+        svg_filter.set_data_url_filter(data_url_filter::allow_standard_images);
+        let mut sanitized_svg = Vec::new();
+        if svg_filter.filter(&*buffer, &mut sanitized_svg).is_err() {
+            icon_type = None;
+            buffer.clear();
+        } else {
+            buffer = sanitized_svg.into();
+        }
     }

     Ok((buffer, icon_type))
 }
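Note: the sanitizing step in isolation, as a minimal sketch assuming the svg_hush crate imported above; sanitize_svg is a hypothetical helper name, not code from this change:

    use svg_hush::{data_url_filter, Filter};

    fn sanitize_svg(raw: &[u8]) -> Option<Vec<u8>> {
        let mut filter = Filter::new();
        // Only allow data: URLs that resolve to standard raster image types
        filter.set_data_url_filter(data_url_filter::allow_standard_images);
        let mut sanitized = Vec::new();
        // &[u8] implements Read, so the raw bytes can be filtered directly
        filter.filter(raw, &mut sanitized).ok()?;
        Some(sanitized)
    }

On failure the caller above drops the icon entirely (clears the buffer and the type) rather than serving an unsanitized SVG.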
-async fn save_icon(path: &str, icon: &[u8]) {
-    match File::create(path).await {
-        Ok(mut f) => {
-            f.write_all(icon).await.expect("Error writing icon file");
-        }
-        Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => {
-            create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder");
-        }
+async fn save_icon(path: &str, icon: Vec<u8>) {
+    let operator = match CONFIG.opendal_operator_for_path_type(PathType::IconCache) {
+        Ok(operator) => operator,
         Err(e) => {
-            warn!("Unable to save icon: {e:?}");
+            warn!("Failed to get OpenDAL operator while saving icon: {e}");
+            return;
         }
+    };
+
+    if let Err(e) = operator.write(path, icon).await {
+        warn!("Unable to save icon: {e:?}");
     }
 }
 fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
+    fn check_svg_after_xml_declaration(bytes: &[u8]) -> Option<&'static str> {
+        // Look for SVG tag within the first 1KB
+        if let Ok(content) = std::str::from_utf8(&bytes[..bytes.len().min(1024)]) {
+            if content.contains("<svg") || content.contains("<SVG") {
+                return Some("svg+xml");
+            }
+        }
+        None
+    }
+
     match bytes {
         [137, 80, 78, 71, ..] => Some("png"),
         [0, 0, 1, 0, ..] => Some("x-icon"),
@@ -586,6 +628,8 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
         [255, 216, 255, ..] => Some("jpeg"),
         [71, 73, 70, 56, ..] => Some("gif"),
         [66, 77, ..] => Some("bmp"),
+        [60, 115, 118, 103, ..] => Some("svg+xml"),                             // Normal svg
+        [60, 63, 120, 109, 108, ..] => check_svg_after_xml_declaration(bytes),  // An svg starting with <?xml
         _ => None,
     }
 }
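Note: the subtype comes from leading magic bytes rather than the server's Content-Type header. Hypothetical usage of the sniffer above:

    // "<svg" in ASCII is [60, 115, 118, 103], matching the new match arm
    assert_eq!(get_icon_type(b"<svg xmlns=\"http://www.w3.org/2000/svg\"/>"), Some("svg+xml"));
    assert_eq!(get_icon_type(&[137, 80, 78, 71, 13, 10, 26, 10]), Some("png"));
    assert_eq!(get_icon_type(b"not an image"), None);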
@@ -597,6 +641,12 @@ async fn stream_to_bytes_limit(res: Response, max_size: usize) -> Result<Bytes,
     let mut buf = BytesMut::new();
     let mut size = 0;
     while let Some(chunk) = stream.next().await {
+        // It is possible that UnexpectedEof or other errors occur here.
+        // Most of the time this is no issue, and if there is no chunked data anymore, or none at all, parsing the HTML will not happen anyway.
+        // Therefore, if the chunk is an Err, just break and continue with the data we have received.
+        if chunk.is_err() {
+            break;
+        }
         let chunk = &chunk?;
         size += chunk.len();
         buf.extend(chunk);
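Note: the surrounding function caps how much of a response body is buffered. A self-contained sketch of that pattern, assuming reqwest with its stream feature plus the futures and bytes crates (names are illustrative, not the code above):

    use bytes::{Bytes, BytesMut};
    use futures::StreamExt;

    async fn read_body_capped(res: reqwest::Response, max_size: usize) -> Bytes {
        let mut stream = res.bytes_stream();
        let mut buf = BytesMut::new();
        while let Some(chunk) = stream.next().await {
            // Tolerate truncated bodies: keep whatever arrived before the error
            let Ok(chunk) = chunk else { break };
            buf.extend_from_slice(&chunk);
            if buf.len() >= max_size {
                break; // stop buffering once the cap is reached
            }
        }
        buf.freeze()
    }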

View File

@@ -14,6 +14,7 @@ use crate::{
             log_user_event,
             two_factor::{authenticator, duo, duo_oidc, email, enforce_2fa_policy, webauthn, yubikey},
         },
+        master_password_policy,
         push::register_push_device,
         ApiResult, EmptyResult, JsonResult,
     },
@@ -132,18 +133,6 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
     Ok(Json(result))
 }

-#[derive(Default, Deserialize, Serialize)]
-#[serde(rename_all = "camelCase")]
-struct MasterPasswordPolicy {
-    min_complexity: u8,
-    min_length: u32,
-    require_lower: bool,
-    require_upper: bool,
-    require_numbers: bool,
-    require_special: bool,
-    enforce_on_login: bool,
-}
-
 async fn _password_login(
     data: ConnectData,
     user_id: &mut Option<UserId>,
@@ -300,36 +289,7 @@ async fn _password_login(
     let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec, data.client_id);
     device.save(conn).await?;

-    // Fetch all valid Master Password Policies and merge them into one with all trues and largest numbers as one policy
-    let master_password_policies: Vec<MasterPasswordPolicy> =
-        OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
-            &user.uuid,
-            OrgPolicyType::MasterPassword,
-            conn,
-        )
-        .await
-        .into_iter()
-        .filter_map(|p| serde_json::from_str(&p.data).ok())
-        .collect();
-
-    // NOTE: Upstream still uses PascalCase here for `Object`!
-    let master_password_policy = if !master_password_policies.is_empty() {
-        let mut mpp_json = json!(master_password_policies.into_iter().reduce(|acc, policy| {
-            MasterPasswordPolicy {
-                min_complexity: acc.min_complexity.max(policy.min_complexity),
-                min_length: acc.min_length.max(policy.min_length),
-                require_lower: acc.require_lower || policy.require_lower,
-                require_upper: acc.require_upper || policy.require_upper,
-                require_numbers: acc.require_numbers || policy.require_numbers,
-                require_special: acc.require_special || policy.require_special,
-                enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
-            }
-        }));
-        mpp_json["Object"] = json!("masterPasswordPolicy");
-        mpp_json
-    } else {
-        json!({"Object": "masterPasswordPolicy"})
-    };
+    let master_password_policy = master_password_policy(&user, conn).await;

     let mut result = json!({
         "access_token": access_token,
@@ -758,7 +718,10 @@ async fn register_verification_email(
 ) -> ApiResult<RegisterVerificationResponse> {
     let data = data.into_inner();

-    if !CONFIG.is_signup_allowed(&data.email) {
+    // the registration can only continue if signup is allowed or there exists an invitation
+    if !(CONFIG.is_signup_allowed(&data.email)
+        || (!CONFIG.mail_enabled() && Invitation::find_by_mail(&data.email, &mut conn).await.is_some()))
+    {
         err!("Registration not allowed or user already exists")
     }

View File

@@ -32,7 +32,10 @@ pub use crate::api::{
     web::routes as web_routes,
     web::static_files,
 };
-use crate::db::{models::User, DbConn};
+use crate::db::{
+    models::{OrgPolicy, OrgPolicyType, User},
+    DbConn,
+};

 // Type aliases for API methods results
 type ApiResult<T> = Result<T, crate::error::Error>;
@@ -68,3 +71,49 @@ impl PasswordOrOtpData {
         Ok(())
     }
 }
+
+#[derive(Debug, Default, Deserialize, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct MasterPasswordPolicy {
+    min_complexity: Option<u8>,
+    min_length: Option<u32>,
+    require_lower: bool,
+    require_upper: bool,
+    require_numbers: bool,
+    require_special: bool,
+    enforce_on_login: bool,
+}
+
+// Fetch all valid Master Password Policies and merge them into one with all trues and largest numbers as one policy
+async fn master_password_policy(user: &User, conn: &DbConn) -> Value {
+    let master_password_policies: Vec<MasterPasswordPolicy> =
+        OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
+            &user.uuid,
+            OrgPolicyType::MasterPassword,
+            conn,
+        )
+        .await
+        .into_iter()
+        .filter_map(|p| serde_json::from_str(&p.data).ok())
+        .collect();
+
+    let mut mpp_json = if !master_password_policies.is_empty() {
+        json!(master_password_policies.into_iter().reduce(|acc, policy| {
+            MasterPasswordPolicy {
+                min_complexity: acc.min_complexity.max(policy.min_complexity),
+                min_length: acc.min_length.max(policy.min_length),
+                require_lower: acc.require_lower || policy.require_lower,
+                require_upper: acc.require_upper || policy.require_upper,
+                require_numbers: acc.require_numbers || policy.require_numbers,
+                require_special: acc.require_special || policy.require_special,
+                enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
+            }
+        }))
+    } else {
+        json!({})
+    };
+
+    // NOTE: Upstream still uses PascalCase here for `Object`!
+    mpp_json["Object"] = json!("masterPasswordPolicy");
+    mpp_json
+}
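Note: the reduce merges every applicable policy into the strictest combination, taking the maximum of numeric fields and OR-ing the booleans. A hypothetical two-org illustration (values invented, not from this change):

    // Org A: min_length 12, require_numbers; Org B: min_length 16, require_special.
    // After the fold, the client receives the strictest union of both:
    let merged = json!({
        "minComplexity": null,
        "minLength": 16,        // max(12, 16)
        "requireLower": false,
        "requireUpper": false,
        "requireNumbers": true, // true || false
        "requireSpecial": true, // false || true
        "enforceOnLogin": false,
        "Object": "masterPasswordPolicy"
    });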

View File

@@ -7,16 +7,14 @@ use once_cell::sync::{Lazy, OnceCell};
 use openssl::rsa::Rsa;
 use serde::de::DeserializeOwned;
 use serde::ser::Serialize;
-use std::{
-    env,
-    fs::File,
-    io::{Read, Write},
-    net::IpAddr,
-};
+use std::{env, net::IpAddr};

-use crate::db::models::{
-    AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
-    SendFileId, SendId, UserId,
+use crate::{
+    config::PathType,
+    db::models::{
+        AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
+        SendFileId, SendId, UserId,
+    },
 };
 use crate::{error::Error, CONFIG};
@@ -40,37 +38,33 @@ static JWT_REGISTER_VERIFY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|regis
 static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
 static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();

-pub fn initialize_keys() -> Result<(), Error> {
-    fn read_key(create_if_missing: bool) -> Result<(Rsa<openssl::pkey::Private>, Vec<u8>), Error> {
-        let mut priv_key_buffer = Vec::with_capacity(2048);
-
-        let mut priv_key_file = File::options()
-            .create(create_if_missing)
-            .truncate(false)
-            .read(true)
-            .write(create_if_missing)
-            .open(CONFIG.private_rsa_key())?;
-
-        #[allow(clippy::verbose_file_reads)]
-        let bytes_read = priv_key_file.read_to_end(&mut priv_key_buffer)?;
-
-        let rsa_key = if bytes_read > 0 {
-            Rsa::private_key_from_pem(&priv_key_buffer[..bytes_read])?
-        } else if create_if_missing {
-            // Only create the key if the file doesn't exist or is empty
-            let rsa_key = Rsa::generate(2048)?;
-            priv_key_buffer = rsa_key.private_key_to_pem()?;
-            priv_key_file.write_all(&priv_key_buffer)?;
-            info!("Private key '{}' created correctly", CONFIG.private_rsa_key());
-            rsa_key
-        } else {
-            err!("Private key does not exist or invalid format", CONFIG.private_rsa_key());
-        };
-
-        Ok((rsa_key, priv_key_buffer))
-    }
-
-    let (priv_key, priv_key_buffer) = read_key(true).or_else(|_| read_key(false))?;
+pub async fn initialize_keys() -> Result<(), Error> {
+    use std::io::Error;
+
+    let rsa_key_filename = std::path::PathBuf::from(CONFIG.private_rsa_key())
+        .file_name()
+        .ok_or_else(|| Error::other("Private RSA key path missing filename"))?
+        .to_str()
+        .ok_or_else(|| Error::other("Private RSA key path filename is not valid UTF-8"))?
+        .to_string();
+
+    let operator = CONFIG.opendal_operator_for_path_type(PathType::RsaKey).map_err(Error::other)?;
+
+    let priv_key_buffer = match operator.read(&rsa_key_filename).await {
+        Ok(buffer) => Some(buffer),
+        Err(e) if e.kind() == opendal::ErrorKind::NotFound => None,
+        Err(e) => return Err(e.into()),
+    };
+
+    let (priv_key, priv_key_buffer) = if let Some(priv_key_buffer) = priv_key_buffer {
+        (Rsa::private_key_from_pem(priv_key_buffer.to_vec().as_slice())?, priv_key_buffer.to_vec())
+    } else {
+        let rsa_key = Rsa::generate(2048)?;
+        let priv_key_buffer = rsa_key.private_key_to_pem()?;
+        operator.write(&rsa_key_filename, priv_key_buffer.clone()).await?;
+        info!("Private key '{}' created correctly", CONFIG.private_rsa_key());
+        (rsa_key, priv_key_buffer)
+    };

     let pub_key_buffer = priv_key.public_key_to_pem()?;

     let enc = EncodingKey::from_rsa_pem(&priv_key_buffer)?;
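Note: the same read-or-generate flow works against any OpenDAL backend. A minimal sketch, assuming an opendal::Operator and a caller-supplied generator; load_or_create is a hypothetical helper, not part of this change:

    async fn load_or_create(
        operator: &opendal::Operator,
        filename: &str,
        generate: impl FnOnce() -> Vec<u8>,
    ) -> Result<Vec<u8>, opendal::Error> {
        match operator.read(filename).await {
            // Already persisted: hand back the stored bytes
            Ok(buffer) => Ok(buffer.to_vec()),
            // First run: generate, persist, then return the fresh bytes
            Err(e) if e.kind() == opendal::ErrorKind::NotFound => {
                let bytes = generate();
                operator.write(filename, bytes.clone()).await?;
                Ok(bytes)
            }
            Err(e) => Err(e),
        }
    }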

View File

@@ -3,7 +3,7 @@ use std::{
     process::exit,
     sync::{
         atomic::{AtomicBool, Ordering},
-        RwLock,
+        LazyLock, RwLock,
     },
 };

@@ -22,10 +22,32 @@ static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
     get_env("CONFIG_FILE").unwrap_or_else(|| format!("{data_folder}/config.json"))
 });

+static CONFIG_FILE_PARENT_DIR: LazyLock<String> = LazyLock::new(|| {
+    let path = std::path::PathBuf::from(&*CONFIG_FILE);
+    path.parent().unwrap_or(std::path::Path::new("data")).to_str().unwrap_or("data").to_string()
+});
+
+static CONFIG_FILENAME: LazyLock<String> = LazyLock::new(|| {
+    let path = std::path::PathBuf::from(&*CONFIG_FILE);
+    path.file_name().unwrap_or(std::ffi::OsStr::new("config.json")).to_str().unwrap_or("config.json").to_string()
+});
+
 pub static SKIP_CONFIG_VALIDATION: AtomicBool = AtomicBool::new(false);
 pub static CONFIG: Lazy<Config> = Lazy::new(|| {
-    Config::load().unwrap_or_else(|e| {
+    std::thread::spawn(|| {
+        let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap_or_else(|e| {
+            println!("Error loading config:\n {e:?}\n");
+            exit(12)
+        });
+        rt.block_on(Config::load()).unwrap_or_else(|e| {
+            println!("Error loading config:\n {e:?}\n");
+            exit(12)
+        })
+    })
+    .join()
+    .unwrap_or_else(|e| {
         println!("Error loading config:\n {e:?}\n");
         exit(12)
     })
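Note: Lazy initializers are synchronous, while Config::load() is now async, and calling block_on on the main Tokio runtime from within it would panic. The change drives the future from a throwaway thread with its own single-threaded runtime. The pattern in isolation, as a sketch assuming tokio and once_cell:

    use once_cell::sync::Lazy;

    async fn load_value() -> String {
        "loaded".to_string() // stand-in for async I/O such as Config::load()
    }

    static VALUE: Lazy<String> = Lazy::new(|| {
        // A dedicated thread avoids "cannot start a runtime from within a runtime"
        std::thread::spawn(|| {
            let rt = tokio::runtime::Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("failed to build runtime");
            rt.block_on(load_value())
        })
        .join()
        .expect("config loader thread panicked")
    });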
@@ -110,10 +132,11 @@ macro_rules! make_config {
             builder
         }

-        fn from_file(path: &str) -> Result<Self, Error> {
-            let config_str = std::fs::read_to_string(path)?;
-            println!("[INFO] Using saved config from `{path}` for configuration.\n");
-            serde_json::from_str(&config_str).map_err(Into::into)
+        async fn from_file() -> Result<Self, Error> {
+            let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
+            let config_bytes = operator.read(&CONFIG_FILENAME).await?;
+            println!("[INFO] Using saved config from `{}` for configuration.\n", *CONFIG_FILE);
+            serde_json::from_slice(&config_bytes.to_vec()).map_err(Into::into)
         }

         fn clear_non_editable(&mut self) {
@@ -833,10 +856,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
         }
     }

-    // Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
-    // Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
-    // Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
-    // iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
+    // Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
+    // Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
+    // Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
+    // iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
     //
     // NOTE: Move deprecated flags to the utils::parse_experimental_client_feature_flags() DEPRECATED_FLAGS const!
     const KNOWN_FLAGS: &[&str] = &[
@@ -1138,11 +1161,103 @@ fn smtp_convert_deprecated_ssl_options(smtp_ssl: Option<bool>, smtp_explicit_tls
     "starttls".to_string()
 }

+fn opendal_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
+    // Cache of previously built operators by path
+    static OPERATORS_BY_PATH: LazyLock<dashmap::DashMap<String, opendal::Operator>> =
+        LazyLock::new(dashmap::DashMap::new);
+
+    if let Some(operator) = OPERATORS_BY_PATH.get(path) {
+        return Ok(operator.clone());
+    }
+
+    let operator = if path.starts_with("s3://") {
+        #[cfg(not(s3))]
+        return Err(opendal::Error::new(opendal::ErrorKind::ConfigInvalid, "S3 support is not enabled").into());
+
+        #[cfg(s3)]
+        opendal_s3_operator_for_path(path)?
+    } else {
+        let builder = opendal::services::Fs::default().root(path);
+        opendal::Operator::new(builder)?.finish()
+    };
+
+    OPERATORS_BY_PATH.insert(path.to_string(), operator.clone());
+
+    Ok(operator)
+}
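Note: hypothetical call sites for the helper above; the first call builds and caches the operator, the second is a cheap clone out of the DashMap, so per-request lookups stay inexpensive:

    let op_a = opendal_operator_for_path("/data/icon_cache")?; // builds and caches
    let op_b = opendal_operator_for_path("/data/icon_cache")?; // cache hit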
+#[cfg(s3)]
+fn opendal_s3_operator_for_path(path: &str) -> Result<opendal::Operator, Error> {
+    use crate::http_client::aws::AwsReqwestConnector;
+    use aws_config::{default_provider::credentials::DefaultCredentialsChain, provider_config::ProviderConfig};
+
+    // This is a custom AWS credential loader that uses the official AWS Rust
+    // SDK config crate to load credentials. This ensures maximum compatibility
+    // with AWS credential configurations. For example, OpenDAL doesn't support
+    // AWS SSO temporary credentials yet.
+    struct OpenDALS3CredentialLoader {}
+
+    #[async_trait]
+    impl reqsign::AwsCredentialLoad for OpenDALS3CredentialLoader {
+        async fn load_credential(&self, _client: reqwest::Client) -> anyhow::Result<Option<reqsign::AwsCredential>> {
+            use aws_credential_types::provider::ProvideCredentials as _;
+            use tokio::sync::OnceCell;
+
+            static DEFAULT_CREDENTIAL_CHAIN: OnceCell<DefaultCredentialsChain> = OnceCell::const_new();
+
+            let chain = DEFAULT_CREDENTIAL_CHAIN
+                .get_or_init(|| {
+                    let reqwest_client = reqwest::Client::builder().build().unwrap();
+                    let connector = AwsReqwestConnector {
+                        client: reqwest_client,
+                    };
+
+                    let conf = ProviderConfig::default().with_http_client(connector);
+
+                    DefaultCredentialsChain::builder().configure(conf).build()
+                })
+                .await;
+
+            let creds = chain.provide_credentials().await?;
+
+            Ok(Some(reqsign::AwsCredential {
+                access_key_id: creds.access_key_id().to_string(),
+                secret_access_key: creds.secret_access_key().to_string(),
+                session_token: creds.session_token().map(|s| s.to_string()),
+                expires_in: creds.expiry().map(|expiration| expiration.into()),
+            }))
+        }
+    }
+
+    const OPEN_DAL_S3_CREDENTIAL_LOADER: OpenDALS3CredentialLoader = OpenDALS3CredentialLoader {};
+
+    let url = Url::parse(path).map_err(|e| format!("Invalid path S3 URL path {path:?}: {e}"))?;
+
+    let bucket = url.host_str().ok_or_else(|| format!("Missing Bucket name in data folder S3 URL {path:?}"))?;
+
+    let builder = opendal::services::S3::default()
+        .customized_credential_load(Box::new(OPEN_DAL_S3_CREDENTIAL_LOADER))
+        .enable_virtual_host_style()
+        .bucket(bucket)
+        .root(url.path())
+        .default_storage_class("INTELLIGENT_TIERING");
+
+    Ok(opendal::Operator::new(builder)?.finish())
+}
+
+pub enum PathType {
+    Data,
+    IconCache,
+    Attachments,
+    Sends,
+    RsaKey,
+}
+
 impl Config {
-    pub fn load() -> Result<Self, Error> {
+    pub async fn load() -> Result<Self, Error> {
         // Loading from env and file
         let _env = ConfigBuilder::from_env();
-        let _usr = ConfigBuilder::from_file(&CONFIG_FILE).unwrap_or_default();
+        let _usr = ConfigBuilder::from_file().await.unwrap_or_default();

         // Create merged config, config file overwrites env
         let mut _overrides = Vec::new();
@@ -1166,7 +1281,7 @@ impl Config {
         })
     }

-    pub fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
+    pub async fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
         // Remove default values
         //let builder = other.remove(&self.inner.read().unwrap()._env);

@@ -1198,20 +1313,19 @@ impl Config {
         }

         //Save to file
-        use std::{fs::File, io::Write};
-        let mut file = File::create(&*CONFIG_FILE)?;
-        file.write_all(config_str.as_bytes())?;
+        let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
+        operator.write(&CONFIG_FILENAME, config_str).await?;

         Ok(())
     }

-    fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
+    async fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
         let builder = {
             let usr = &self.inner.read().unwrap()._usr;
             let mut _overrides = Vec::new();
             usr.merge(&other, false, &mut _overrides)
         };
-        self.update_config(builder, false)
+        self.update_config(builder, false).await
     }

     /// Tests whether an email's domain is allowed. A domain is allowed if it
@@ -1253,8 +1367,9 @@ impl Config {
         }
     }

-    pub fn delete_user_config(&self) -> Result<(), Error> {
-        std::fs::remove_file(&*CONFIG_FILE)?;
+    pub async fn delete_user_config(&self) -> Result<(), Error> {
+        let operator = opendal_operator_for_path(&CONFIG_FILE_PARENT_DIR)?;
+        operator.delete(&CONFIG_FILENAME).await?;

         // Empty user config
         let usr = ConfigBuilder::default();
@@ -1284,7 +1399,7 @@ impl Config {
         inner._enable_smtp && (inner.smtp_host.is_some() || inner.use_sendmail)
     }

-    pub fn get_duo_akey(&self) -> String {
+    pub async fn get_duo_akey(&self) -> String {
         if let Some(akey) = self._duo_akey() {
             akey
         } else {
@@ -1295,7 +1410,7 @@ impl Config {
                 _duo_akey: Some(akey_s.clone()),
                 ..Default::default()
             };
-            self.update_config_partial(builder).ok();
+            self.update_config_partial(builder).await.ok();

             akey_s
         }
@@ -1308,6 +1423,23 @@ impl Config {
         token.is_some() && !token.unwrap().trim().is_empty()
     }

+    pub fn opendal_operator_for_path_type(&self, path_type: PathType) -> Result<opendal::Operator, Error> {
+        let path = match path_type {
+            PathType::Data => self.data_folder(),
+            PathType::IconCache => self.icon_cache_folder(),
+            PathType::Attachments => self.attachments_folder(),
+            PathType::Sends => self.sends_folder(),
+            PathType::RsaKey => std::path::Path::new(&self.rsa_key_filename())
+                .parent()
+                .ok_or_else(|| std::io::Error::other("Failed to get directory of RSA key file"))?
+                .to_str()
+                .ok_or_else(|| std::io::Error::other("Failed to convert RSA key file directory to UTF-8 string"))?
+                .to_string(),
+        };
+
+        opendal_operator_for_path(&path)
+    }
+
     pub fn render_template<T: serde::ser::Serialize>(&self, name: &str, data: &T) -> Result<String, Error> {
         if self.reload_templates() {
             warn!("RELOADING TEMPLATES");
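Note: a hedged usage sketch of the new mapping (file name illustrative): callers name a logical location and receive whichever backend, local filesystem or S3, the configuration resolves it to.

    let operator = CONFIG.opendal_operator_for_path_type(PathType::IconCache)?;
    operator.write("example.com.png", icon_bytes).await?; // icon_bytes: Vec<u8>
    let cached = operator.read("example.com.png").await?.to_vec();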

View File

@@ -1,11 +1,11 @@
-use std::io::ErrorKind;
+use std::time::Duration;

 use bigdecimal::{BigDecimal, ToPrimitive};
 use derive_more::{AsRef, Deref, Display};
 use serde_json::Value;

 use super::{CipherId, OrganizationId, UserId};
-use crate::CONFIG;
+use crate::{config::PathType, util::LowerCase, CONFIG};
 use macros::IdFromParam;

 db_object! {
@@ -41,24 +41,30 @@ impl Attachment {
     }

     pub fn get_file_path(&self) -> String {
-        format!("{}/{}/{}", CONFIG.attachments_folder(), self.cipher_uuid, self.id)
+        format!("{}/{}", self.cipher_uuid, self.id)
     }

-    pub fn get_url(&self, host: &str) -> String {
-        let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
-        format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id)
+    pub async fn get_url(&self, host: &str) -> Result<String, crate::Error> {
+        let operator = CONFIG.opendal_operator_for_path_type(PathType::Attachments)?;
+
+        if operator.info().scheme() == opendal::Scheme::Fs {
+            let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
+            Ok(format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id))
+        } else {
+            Ok(operator.presign_read(&self.get_file_path(), Duration::from_secs(5 * 60)).await?.uri().to_string())
+        }
     }

-    pub fn to_json(&self, host: &str) -> Value {
-        json!({
+    pub async fn to_json(&self, host: &str) -> Result<Value, crate::Error> {
+        Ok(json!({
             "id": self.id,
-            "url": self.get_url(host),
+            "url": self.get_url(host).await?,
             "fileName": self.file_name,
             "size": self.file_size.to_string(),
             "sizeName": crate::util::get_display_size(self.file_size),
             "key": self.akey,
             "object": "attachment"
-        })
+        }))
     }
 }
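Note: for non-filesystem backends the attachment URL is now a time-limited presigned GET instead of a tokenized local route, so clients download straight from object storage. The presign call in isolation, as a sketch assuming an S3-backed opendal::Operator and an illustrative object path:

    use std::time::Duration;

    // Valid for five minutes, matching the change above
    let url = operator
        .presign_read("cipher-uuid/attachment-id", Duration::from_secs(5 * 60))
        .await?
        .uri()
        .to_string();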
@@ -104,26 +110,26 @@ impl Attachment {
     pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
-            let _: () = crate::util::retry(
+            crate::util::retry(
                 || diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
                 10,
             )
-            .map_res("Error deleting attachment")?;
+            .map(|_| ())
+            .map_res("Error deleting attachment")
+        }}?;

-            let file_path = &self.get_file_path();
+        let operator = CONFIG.opendal_operator_for_path_type(PathType::Attachments)?;
+        let file_path = self.get_file_path();

-            match std::fs::remove_file(file_path) {
-                // Ignore "file not found" errors. This can happen when the
-                // upstream caller has already cleaned up the file as part of
-                // its own error handling.
-                Err(e) if e.kind() == ErrorKind::NotFound => {
-                    debug!("File '{file_path}' already deleted.");
-                    Ok(())
-                }
-                Err(e) => Err(e.into()),
-                _ => Ok(()),
+        if let Err(e) = operator.delete(&file_path).await {
+            if e.kind() == opendal::ErrorKind::NotFound {
+                debug!("File '{file_path}' already deleted.");
+            } else {
+                return Err(e.into());
             }
-        }}
+        }
+
+        Ok(())
     }

     pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {

View File

@@ -141,18 +141,28 @@ impl Cipher {
         cipher_sync_data: Option<&CipherSyncData>,
         sync_type: CipherSyncType,
         conn: &mut DbConn,
-    ) -> Value {
+    ) -> Result<Value, crate::Error> {
         use crate::util::{format_date, validate_and_format_date};

         let mut attachments_json: Value = Value::Null;
         if let Some(cipher_sync_data) = cipher_sync_data {
             if let Some(attachments) = cipher_sync_data.cipher_attachments.get(&self.uuid) {
-                attachments_json = attachments.iter().map(|c| c.to_json(host)).collect();
+                if !attachments.is_empty() {
+                    let mut attachments_json_vec = vec![];
+                    for attachment in attachments {
+                        attachments_json_vec.push(attachment.to_json(host).await?);
+                    }
+                    attachments_json = Value::Array(attachments_json_vec);
+                }
             }
         } else {
             let attachments = Attachment::find_by_cipher(&self.uuid, conn).await;
             if !attachments.is_empty() {
-                attachments_json = attachments.iter().map(|c| c.to_json(host)).collect()
+                let mut attachments_json_vec = vec![];
+                for attachment in attachments {
+                    attachments_json_vec.push(attachment.to_json(host).await?);
+                }
+                attachments_json = Value::Array(attachments_json_vec);
             }
         }

@@ -372,6 +382,11 @@ impl Cipher {
             // the "Read Only" or "Hide Passwords" restrictions for the user.
             json_object["edit"] = json!(!read_only);
             json_object["viewPassword"] = json!(!hide_passwords);
+            // The new key used by clients since v2025.6.0
+            json_object["permissions"] = json!({
+                "delete": !read_only,
+                "restore": !read_only,
+            });
         }

         let key = match self.atype {
@@ -384,7 +399,7 @@ impl Cipher {
         };

         json_object[key] = type_data_json;
-        json_object
+        Ok(json_object)
     }

     pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {

View File

@@ -211,7 +211,7 @@ impl OrgPolicy {
     pub async fn find_accepted_and_confirmed_by_user_and_active_policy(
         user_uuid: &UserId,
         policy_type: OrgPolicyType,
-        conn: &mut DbConn,
+        conn: &DbConn,
     ) -> Vec<Self> {
         db_run! { conn: {
             org_policies::table

View File

@@ -1,7 +1,7 @@
 use chrono::{NaiveDateTime, Utc};
 use serde_json::Value;

-use crate::util::LowerCase;
+use crate::{config::PathType, util::LowerCase, CONFIG};

 use super::{OrganizationId, User, UserId};
 use id::SendId;
@@ -226,7 +226,8 @@ impl Send {
         self.update_users_revision(conn).await;

         if self.atype == SendType::File as i32 {
-            std::fs::remove_dir_all(std::path::Path::new(&crate::CONFIG.sends_folder()).join(&self.uuid)).ok();
+            let operator = CONFIG.opendal_operator_for_path_type(PathType::Sends)?;
+            operator.remove_all(&self.uuid).await.ok();
         }

         db_run! { conn: {

View File

@@ -46,6 +46,7 @@ use jsonwebtoken::errors::Error as JwtErr;
 use lettre::address::AddressError as AddrErr;
 use lettre::error::Error as LettreErr;
 use lettre::transport::smtp::Error as SmtpErr;
+use opendal::Error as OpenDALErr;
 use openssl::error::ErrorStack as SSLErr;
 use regex::Error as RegexErr;
 use reqwest::Error as ReqErr;
@@ -98,6 +99,8 @@ make_error! {
     DieselCon(DieselConErr): _has_source, _api_error,
     Webauthn(WebauthnErr): _has_source, _api_error,
+
+    OpenDAL(OpenDALErr): _has_source, _api_error,
 }

 impl std::fmt::Debug for Error {

View File

@@ -244,3 +244,61 @@ impl Resolve for CustomDnsResolver {
         })
     }
 }
+
+#[cfg(s3)]
+pub(crate) mod aws {
+    use aws_smithy_runtime_api::client::{
+        http::{HttpClient, HttpConnector, HttpConnectorFuture, HttpConnectorSettings, SharedHttpConnector},
+        orchestrator::HttpResponse,
+        result::ConnectorError,
+        runtime_components::RuntimeComponents,
+    };
+    use reqwest::Client;
+
+    // Adapter that wraps reqwest to be compatible with the AWS SDK
+    #[derive(Debug)]
+    pub(crate) struct AwsReqwestConnector {
+        pub(crate) client: Client,
+    }
+
+    impl HttpConnector for AwsReqwestConnector {
+        fn call(&self, request: aws_smithy_runtime_api::client::orchestrator::HttpRequest) -> HttpConnectorFuture {
+            // Convert the AWS-style request to a reqwest request
+            let client = self.client.clone();
+            let future = async move {
+                let method = reqwest::Method::from_bytes(request.method().as_bytes())
+                    .map_err(|e| ConnectorError::user(Box::new(e)))?;
+                let mut req_builder = client.request(method, request.uri().to_string());
+
+                for (name, value) in request.headers() {
+                    req_builder = req_builder.header(name, value);
+                }
+
+                if let Some(body_bytes) = request.body().bytes() {
+                    req_builder = req_builder.body(body_bytes.to_vec());
+                }
+
+                let response = req_builder.send().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
+                let status = response.status().into();
+                let bytes = response.bytes().await.map_err(|e| ConnectorError::io(Box::new(e)))?;
+
+                Ok(HttpResponse::new(status, bytes.into()))
+            };
+
+            HttpConnectorFuture::new(Box::pin(future))
+        }
+    }
+
+    impl HttpClient for AwsReqwestConnector {
+        fn http_connector(
+            &self,
+            _settings: &HttpConnectorSettings,
+            _components: &RuntimeComponents,
+        ) -> SharedHttpConnector {
+            SharedHttpConnector::new(AwsReqwestConnector {
+                client: self.client.clone(),
+            })
+        }
+    }
+}
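Note: hypothetical wiring of the adapter, mirroring the credential-chain setup added in config.rs above; the shared reqwest client stands in for the SDK's default HTTP stack when the chain is built:

    use aws_config::{default_provider::credentials::DefaultCredentialsChain, provider_config::ProviderConfig};

    async fn build_chain(client: reqwest::Client) -> DefaultCredentialsChain {
        // AwsReqwestConnector implements the SDK's HttpClient trait (see above)
        let connector = AwsReqwestConnector { client };
        let conf = ProviderConfig::default().with_http_client(connector);
        DefaultCredentialsChain::builder().configure(conf).build().await
    }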

View File

@@ -61,7 +61,7 @@ mod util;
 use crate::api::core::two_factor::duo_oidc::purge_duo_contexts;
 use crate::api::purge_auth_requests;
 use crate::api::{WS_ANONYMOUS_SUBSCRIPTIONS, WS_USERS};
-pub use config::CONFIG;
+pub use config::{PathType, CONFIG};
 pub use error::{Error, MapResult};
 use rocket::data::{Limits, ToByteUnit};
 use std::sync::{atomic::Ordering, Arc};
@@ -75,16 +75,13 @@ async fn main() -> Result<(), Error> {
     let level = init_logging()?;

     check_data_folder().await;
-    auth::initialize_keys().unwrap_or_else(|e| {
+    auth::initialize_keys().await.unwrap_or_else(|e| {
         error!("Error creating private key '{}'\n{e:?}\nExiting Vaultwarden!", CONFIG.private_rsa_key());
         exit(1);
     });
     check_web_vault();

-    create_dir(&CONFIG.icon_cache_folder(), "icon cache");
     create_dir(&CONFIG.tmp_folder(), "tmp folder");
-    create_dir(&CONFIG.sends_folder(), "sends folder");
-    create_dir(&CONFIG.attachments_folder(), "attachments folder");

     let pool = create_db_pool().await;
     schedule_jobs(pool.clone());
@@ -464,6 +461,24 @@ fn create_dir(path: &str, description: &str) {

 async fn check_data_folder() {
     let data_folder = &CONFIG.data_folder();
+
+    if data_folder.starts_with("s3://") {
+        if let Err(e) = CONFIG
+            .opendal_operator_for_path_type(PathType::Data)
+            .unwrap_or_else(|e| {
+                error!("Failed to create S3 operator for data folder '{data_folder}': {e:?}");
+                exit(1);
+            })
+            .check()
+            .await
+        {
+            error!("Could not access S3 data folder '{data_folder}': {e:?}");
+            exit(1);
+        }
+        return;
+    }
+
     let path = Path::new(data_folder);
     if !path.exists() {
         error!("Data folder '{data_folder}' doesn't exist.");

View File

@@ -54,3 +54,7 @@ img {
 .vw-copy-toast {
     width: 15rem;
 }
+
+.abbr-badge {
+    cursor: help;
+}

View File

@@ -208,11 +208,9 @@ function initVersionCheck(dj) {
     }
     checkVersions("server", serverInstalled, serverLatest, serverLatestCommit);

-    if (!dj.running_within_container) {
-        const webInstalled = dj.web_vault_version;
-        const webLatest = dj.latest_web_build;
-        checkVersions("web", webInstalled, webLatest, null, dj.web_vault_pre_release);
-    }
+    const webInstalled = dj.web_vault_version;
+    const webLatest = dj.latest_web_build;
+    checkVersions("web", webInstalled, webLatest, null, dj.web_vault_pre_release);
 }

 function checkDns(dns_resolved) {

View File

@@ -7,36 +7,34 @@
         <div class="col-md">
             <dl class="row">
                 <dt class="col-sm-5">Server Installed
-                    <span class="badge bg-success d-none" id="server-success" title="Latest version is installed.">Ok</span>
-                    <span class="badge bg-warning text-dark d-none" id="server-warning" title="There seems to be an update available.">Update</span>
-                    <span class="badge bg-info text-dark d-none" id="server-branch" title="This is a branched version.">Branched</span>
+                    <span class="badge bg-success d-none abbr-badge" id="server-success" title="Latest version is installed.">Ok</span>
+                    <span class="badge bg-warning text-dark d-none abbr-badge" id="server-warning" title="There seems to be an update available.">Update</span>
+                    <span class="badge bg-info text-dark d-none abbr-badge" id="server-branch" title="This is a branched version.">Branched</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="server-installed">{{page_data.current_release}}</span>
                 </dd>
                 <dt class="col-sm-5">Server Latest
-                    <span class="badge bg-secondary d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
+                    <span class="badge bg-secondary d-none abbr-badge" id="server-failed" title="Unable to determine latest version.">Unknown</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="server-latest">{{page_data.latest_release}}<span id="server-latest-commit" class="d-none">-{{page_data.latest_commit}}</span></span>
                 </dd>
                 {{#if page_data.web_vault_enabled}}
                 <dt class="col-sm-5">Web Installed
-                    <span class="badge bg-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
-                    <span class="badge bg-warning text-dark d-none" id="web-warning" title="There seems to be an update available.">Update</span>
-                    <span class="badge bg-info text-dark d-none" id="web-prerelease" title="You seem to be using a pre-release version.">Pre-Release</span>
+                    <span class="badge bg-success d-none abbr-badge" id="web-success" title="Latest version is installed.">Ok</span>
+                    <span class="badge bg-warning text-dark d-none abbr-badge" id="web-warning" title="There seems to be an update available.">Update</span>
+                    <span class="badge bg-info text-dark d-none abbr-badge" id="web-prerelease" title="You seem to be using a pre-release version.">Pre-Release</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="web-installed">{{page_data.web_vault_version}}</span>
                 </dd>
-                {{#unless page_data.running_within_container}}
                 <dt class="col-sm-5">Web Latest
-                    <span class="badge bg-secondary d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
+                    <span class="badge bg-secondary d-none abbr-badge" id="web-failed" title="Unable to determine latest version.">Unknown</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="web-latest">{{page_data.latest_web_build}}</span>
                 </dd>
-                {{/unless}}
                 {{/if}}
                 {{#unless page_data.web_vault_enabled}}
                 <dt class="col-sm-5">Web Installed</dt>
@@ -69,14 +67,11 @@
                     <span class="d-block"><b>No</b></span>
                     {{/unless}}
                 </dd>
-                <dt class="col-sm-5">Uses config.json
-                    {{#if page_data.overrides}}
-                    <span class="badge bg-info text-dark" title="Environment variables are overwritten by a config.json.">Note</span>
-                    {{/if}}
-                </dt>
+                <dt class="col-sm-5">Uses config.json</dt>
                 <dd class="col-sm-7">
                     {{#if page_data.overrides}}
-                    <abbr class="d-block" title="The following settings are overridden: {{page_data.overrides}}"><b>Yes</b></abbr>
+                    <span class="d-inline"><b>Yes</b></span>
+                    <span class="badge bg-info text-dark abbr-badge" title="Environment variables are overwritten by a config.json.&#013;&#010;{{page_data.overrides}}">Details</span>
                     {{/if}}
                     {{#unless page_data.overrides}}
                     <span class="d-block"><b>No</b></span>
@@ -95,10 +90,10 @@
                 {{#if page_data.ip_header_exists}}
                 <dt class="col-sm-5">IP header
                     {{#if page_data.ip_header_match}}
-                    <span class="badge bg-success" title="IP_HEADER config seems to be valid.">Match</span>
+                    <span class="badge bg-success abbr-badge" title="IP_HEADER config seems to be valid.">Match</span>
                     {{/if}}
                     {{#unless page_data.ip_header_match}}
-                    <span class="badge bg-danger" title="IP_HEADER config seems to be invalid. IP's in the log could be invalid. Please fix.">No Match</span>
+                    <span class="badge bg-danger abbr-badge" title="IP_HEADER config seems to be invalid. IP's in the log could be invalid. Please fix.">No Match</span>
                     {{/unless}}
                 </dt>
                 <dd class="col-sm-7">
@@ -114,10 +109,10 @@
                 {{!-- End if IP Header Exists --}}
                 <dt class="col-sm-5">Internet access
                     {{#if page_data.has_http_access}}
-                    <span class="badge bg-success" title="We have internet access!">Ok</span>
+                    <span class="badge bg-success abbr-badge" title="We have internet access!">Ok</span>
                     {{/if}}
                     {{#unless page_data.has_http_access}}
-                    <span class="badge bg-danger" title="There seems to be no internet access. Please fix.">Error</span>
+                    <span class="badge bg-danger abbr-badge" title="There seems to be no internet access. Please fix.">Error</span>
                     {{/unless}}
                 </dt>
                 <dd class="col-sm-7">
@@ -139,8 +134,8 @@
                 </dd>
                 <dt class="col-sm-5">Websocket enabled
                     {{#if page_data.enable_websocket}}
-                    <span class="badge bg-success d-none" id="websocket-success" title="Websocket connection is working.">Ok</span>
-                    <span class="badge bg-danger d-none" id="websocket-error" title="Websocket connection error, validate your reverse proxy configuration!">Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="websocket-success" title="Websocket connection is working.">Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="websocket-error" title="Websocket connection error, validate your reverse proxy configuration!">Error</span>
                     {{/if}}
                 </dt>
                 <dd class="col-sm-7">
@@ -153,27 +148,27 @@
                 </dd>
                 <dt class="col-sm-5">DNS (github.com)
-                    <span class="badge bg-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
-                    <span class="badge bg-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="dns-success" title="DNS Resolving works!">Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="dns-resolved">{{page_data.dns_resolved}}</span>
                 </dd>
                 <dt class="col-sm-5">Date & Time (Local)
                     {{#if page_data.tz_env}}
-                    <span class="badge bg-success" title="Configured TZ environment variable">{{page_data.tz_env}}</span>
+                    <span class="badge bg-success abbr-badge" title="Configured TZ environment variable">{{page_data.tz_env}}</span>
                     {{/if}}
                 </dt>
                 <dd class="col-sm-7">
                     <span><b>Server:</b> {{page_data.server_time_local}}</span>
                 </dd>
                 <dt class="col-sm-5">Date & Time (UTC)
-                    <span class="badge bg-success d-none" id="time-success" title="Server and browser times are within 15 seconds of each other.">Server/Browser Ok</span>
-                    <span class="badge bg-danger d-none" id="time-warning" title="Server and browser times are more than 15 seconds apart.">Server/Browser Error</span>
-                    <span class="badge bg-success d-none" id="ntp-server-success" title="Server and NTP times are within 15 seconds of each other.">Server NTP Ok</span>
-                    <span class="badge bg-danger d-none" id="ntp-server-warning" title="Server and NTP times are more than 15 seconds apart.">Server NTP Error</span>
-                    <span class="badge bg-success d-none" id="ntp-browser-success" title="Browser and NTP times are within 15 seconds of each other.">Browser NTP Ok</span>
-                    <span class="badge bg-danger d-none" id="ntp-browser-warning" title="Browser and NTP times are more than 15 seconds apart.">Browser NTP Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="time-success" title="Server and browser times are within 15 seconds of each other.">Server/Browser Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="time-warning" title="Server and browser times are more than 15 seconds apart.">Server/Browser Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="ntp-server-success" title="Server and NTP times are within 15 seconds of each other.">Server NTP Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="ntp-server-warning" title="Server and NTP times are more than 15 seconds apart.">Server NTP Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="ntp-browser-success" title="Browser and NTP times are within 15 seconds of each other.">Browser NTP Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="ntp-browser-warning" title="Browser and NTP times are more than 15 seconds apart.">Browser NTP Error</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="ntp-time" class="d-block"><b>NTP:</b> <span id="ntp-server-string">{{page_data.ntp_time}}</span></span>
@@ -182,10 +177,10 @@
                 </dd>
                 <dt class="col-sm-5">Domain configuration
-                    <span class="badge bg-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
-                    <span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not match the browser location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
-                    <span class="badge bg-success d-none" id="https-success" title="Configured to use HTTPS">HTTPS</span>
-                    <span class="badge bg-danger d-none" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
+                    <span class="badge bg-success d-none abbr-badge" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="domain-warning" title="The domain variable does not match the browser location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
+                    <span class="badge bg-success d-none abbr-badge" id="https-success" title="Configured to use HTTPS">HTTPS</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="domain-server" class="d-block"><b>Server:</b> <span id="domain-server-string">{{page_data.admin_url}}</span></span>
@@ -193,8 +188,8 @@
                 </dd>
                 <dt class="col-sm-5">HTTP Response validation
-                    <span class="badge bg-success d-none" id="http-response-success" title="All headers and HTTP request responses seem to be ok.">Ok</span>
-                    <span class="badge bg-danger d-none" id="http-response-warning" title="Some headers or HTTP request responses return invalid data!">Error</span>
+                    <span class="badge bg-success d-none abbr-badge" id="http-response-success" title="All headers and HTTP request responses seem to be ok.">Ok</span>
+                    <span class="badge bg-danger d-none abbr-badge" id="http-response-warning" title="Some headers or HTTP request responses return invalid data!">Error</span>
                 </dt>
                 <dd class="col-sm-7">
                     <span id="http-response-errors" class="d-block"></span>

View File

@@ -21,17 +21,37 @@ a[href$="/settings/sponsored-families"] {
 }

 /* Hide the `Enterprise Single Sign-On` button on the login page */
-app-root form.ng-untouched button.\!tw-text-primary-600:nth-child(4) {
+{{#if (webver ">=2025.5.1")}}
+.vw-sso-login {
     @extend %vw-hide;
 }
+{{else}}
+app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(4) {
+    @extend %vw-hide;
+}
+{{/if}}

 /* Hide Log in with passkey on the login page */
-app-root form.ng-untouched a[routerlink="/login-with-passkey"] {
+{{#if (webver ">=2025.5.1")}}
+.vw-passkey-login {
     @extend %vw-hide;
 }
+{{else}}
+app-root ng-component > form > div:nth-child(1) > div > button[buttontype="secondary"].\!tw-text-primary-600:nth-child(3) {
+    @extend %vw-hide;
+}
+{{/if}}

 /* Hide the or text followed by the two buttons hidden above */
-app-root form.ng-untouched > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
+{{#if (webver ">=2025.5.1")}}
+.vw-or-text {
     @extend %vw-hide;
 }
+{{else}}
+app-root ng-component > form > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
+    @extend %vw-hide;
+}
+{{/if}}

 /* Hide Two-Factor menu in Organization settings */
 bit-nav-item[route="settings/two-factor"],

View File

@@ -16,7 +16,7 @@ use tokio::{
time::{sleep, Duration}, time::{sleep, Duration},
}; };
use crate::CONFIG; use crate::{config::PathType, CONFIG};
pub struct AppHeaders(); pub struct AppHeaders();
@@ -61,9 +61,11 @@ impl Fairing for AppHeaders {
// The `Cross-Origin-Resource-Policy` header should not be set on images or on the `icon_external` route. // The `Cross-Origin-Resource-Policy` header should not be set on images or on the `icon_external` route.
// Otherwise some clients, like the Bitwarden Desktop, will fail to download the icons // Otherwise some clients, like the Bitwarden Desktop, will fail to download the icons
let mut is_image = true;
if !(res.headers().get_one("Content-Type").is_some_and(|v| v.starts_with("image/")) if !(res.headers().get_one("Content-Type").is_some_and(|v| v.starts_with("image/"))
|| req.route().is_some_and(|v| v.name.as_deref() == Some("icon_external"))) || req.route().is_some_and(|v| v.name.as_deref() == Some("icon_external")))
{ {
is_image = false;
res.set_raw_header("Cross-Origin-Resource-Policy", "same-origin"); res.set_raw_header("Cross-Origin-Resource-Policy", "same-origin");
} }
@@ -71,49 +73,56 @@ impl Fairing for AppHeaders {
         // This can cause issues when some MFA requests needs to open a popup or page within the clients like WebAuthn, or Duo.
         // This is the same behavior as upstream Bitwarden.
         if !req_uri_path.ends_with("connector.html") {
-            // # Frame Ancestors:
-            // Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
-            // Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
-            // Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
-            // # img/child/frame src:
-            // Have I Been Pwned to allow those calls to work.
-            // # Connect src:
-            // Leaked Passwords check: api.pwnedpasswords.com
-            // 2FA/MFA Site check: api.2fa.directory
-            // # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
-            // app.simplelogin.io, app.addy.io, api.fastmail.com, quack.duckduckgo.com
-            let csp = format!(
-                "default-src 'none'; \
-                font-src 'self'; \
-                manifest-src 'self'; \
-                base-uri 'self'; \
-                form-action 'self'; \
-                object-src 'self' blob:; \
-                script-src 'self' 'wasm-unsafe-eval'; \
-                style-src 'self' 'unsafe-inline'; \
-                child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
-                frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
-                frame-ancestors 'self' \
-                chrome-extension://nngceckbapebfimnlniiiahkandclblb \
-                chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
-                moz-extension://* \
-                {allowed_iframe_ancestors}; \
-                img-src 'self' data: \
-                https://haveibeenpwned.com \
-                {icon_service_csp}; \
-                connect-src 'self' \
-                https://api.pwnedpasswords.com \
-                https://api.2fa.directory \
-                https://app.simplelogin.io/api/ \
-                https://app.addy.io/api/ \
-                https://api.fastmail.com/ \
-                https://api.forwardemail.net \
-                {allowed_connect_src};\
-                ",
-                icon_service_csp = CONFIG._icon_service_csp(),
-                allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
-                allowed_connect_src = CONFIG.allowed_connect_src(),
-            );
+            let csp = if is_image {
+                // Prevent scripts, frames, objects, etc., from loading with images, mainly for SVG images, since these could contain JavaScript and other unsafe items.
+                // Even though we sanitize SVG images before storing and viewing them, it's better to prevent allowing these elements.
+                String::from("default-src 'none'; img-src 'self' data:; style-src 'unsafe-inline'; script-src 'none'; frame-src 'none'; object-src 'none'")
+            } else {
+                // # Frame Ancestors:
+                // Chrome Web Store: https://chrome.google.com/webstore/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb
+                // Edge Add-ons: https://microsoftedge.microsoft.com/addons/detail/bitwarden-free-password/jbkfoedolllekgbhcbcoahefnbanhhlh?hl=en-US
+                // Firefox Browser Add-ons: https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/
+                // # img/child/frame src:
+                // Have I Been Pwned to allow those calls to work.
+                // # Connect src:
+                // Leaked Passwords check: api.pwnedpasswords.com
+                // 2FA/MFA Site check: api.2fa.directory
+                // # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
+                // app.simplelogin.io, app.addy.io, api.fastmail.com, api.forwardemail.net
+                format!(
+                    "default-src 'none'; \
+                    font-src 'self'; \
+                    manifest-src 'self'; \
+                    base-uri 'self'; \
+                    form-action 'self'; \
+                    object-src 'self' blob:; \
+                    script-src 'self' 'wasm-unsafe-eval'; \
+                    style-src 'self' 'unsafe-inline'; \
+                    child-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
+                    frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; \
+                    frame-ancestors 'self' \
+                    chrome-extension://nngceckbapebfimnlniiiahkandclblb \
+                    chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh \
+                    moz-extension://* \
+                    {allowed_iframe_ancestors}; \
+                    img-src 'self' data: \
+                    https://haveibeenpwned.com \
+                    {icon_service_csp}; \
+                    connect-src 'self' \
+                    https://api.pwnedpasswords.com \
+                    https://api.2fa.directory \
+                    https://app.simplelogin.io/api/ \
+                    https://app.addy.io/api/ \
+                    https://api.fastmail.com/ \
+                    https://api.forwardemail.net \
+                    {allowed_connect_src};\
+                    ",
+                    icon_service_csp = CONFIG._icon_service_csp(),
+                    allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
+                    allowed_connect_src = CONFIG.allowed_connect_src(),
+                )
+            };
             res.set_raw_header("Content-Security-Policy", csp);
             res.set_raw_header("X-Frame-Options", "SAMEORIGIN");
         } else {
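
With `is_image` threaded through, image responses (including sanitized SVGs) get a locked-down CSP that forbids scripts, frames, and objects, while all other responses keep the full policy. A hedged sketch of how this could be checked with Rocket's local test client (the `icon` route and rocket setup are hypothetical, and a real test would also need the fairing's CONFIG available):

#[cfg(test)]
mod csp_header_tests {
    use rocket::http::ContentType;
    use rocket::local::blocking::Client;

    // Hypothetical route serving an image content type.
    #[rocket::get("/icon.svg")]
    fn icon() -> (ContentType, &'static [u8]) {
        (ContentType::SVG, b"<svg xmlns='http://www.w3.org/2000/svg'/>")
    }

    #[test]
    fn image_responses_get_restrictive_csp() {
        let rocket = rocket::build().attach(super::AppHeaders()).mount("/", rocket::routes![icon]);
        let client = Client::tracked(rocket).expect("valid rocket instance");
        let res = client.get("/icon.svg").dispatch();

        let csp = res.headers().get_one("Content-Security-Policy").expect("CSP header set");
        assert!(csp.contains("script-src 'none'"));
        assert!(csp.contains("object-src 'none'"));
        // Images must not be forced to same-origin, or some clients fail to load icons.
        assert!(res.headers().get_one("Cross-Origin-Resource-Policy").is_none());
    }
}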
@@ -827,6 +836,26 @@ pub fn is_global(ip: std::net::IpAddr) -> bool {
     ip.is_global()
 }
 
+/// Saves a Rocket temporary file to the OpenDAL Operator at the given path.
+pub async fn save_temp_file(
+    path_type: PathType,
+    path: &str,
+    temp_file: rocket::fs::TempFile<'_>,
+    overwrite: bool,
+) -> Result<(), crate::Error> {
+    use futures::AsyncWriteExt as _;
+    use tokio_util::compat::TokioAsyncReadCompatExt as _;
+
+    let operator = CONFIG.opendal_operator_for_path_type(path_type)?;
+
+    let mut read_stream = temp_file.open().await?.compat();
+    let mut writer = operator.writer_with(path).if_not_exists(!overwrite).await?.into_futures_async_write();
+    futures::io::copy(&mut read_stream, &mut writer).await?;
+    writer.close().await?;
+
+    Ok(())
+}
+
 /// These are some tests to check that the implementations match
 /// The IPv4 can be all checked in 30 seconds or so and they are correct as of nightly 2023-07-17
 /// The IPV6 can't be checked in a reasonable time, so we check over a hundred billion random ones, so far correct
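
The new helper streams a Rocket `TempFile` into whatever OpenDAL backend is configured for the given `PathType` (local disk, S3, etc.). A hypothetical call site, assuming an attachments route and a `PathType::Attachments` variant (both illustrative, not taken from this diff):

use rocket::form::Form;
use rocket::fs::TempFile;

use crate::{config::PathType, util::save_temp_file};

#[derive(rocket::FromForm)]
struct UploadData<'f> {
    file: TempFile<'f>,
}

// Hypothetical route: persist an uploaded attachment under its cipher id.
#[rocket::post("/ciphers/<uuid>/attachment", data = "<data>")]
async fn upload_attachment(uuid: &str, data: Form<UploadData<'_>>) -> Result<(), crate::Error> {
    let data = data.into_inner();
    let file_path = format!("{uuid}/attachment.bin"); // illustrative object key
    // overwrite = false -> the write fails if the object already exists.
    save_temp_file(PathType::Attachments, &file_path, data.file, false).await
}

Note the inverted flag: `if_not_exists(!overwrite)` makes the OpenDAL writer refuse to clobber an existing object unless the caller explicitly opts in to overwriting.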