Compare commits

..

18 Commits

Author SHA1 Message Date
Mathijs van Veluw f21a3adae2 Update hickory (#7175)
Signed-off-by: BlackDex <black.dex@gmail.com>
2026-05-02 18:56:15 +02:00
Mathijs van Veluw 07aa377af7 Update crates and web-vault (#7171)
- Update crates including fixing a regression of Diesel
- Update web-vault to v2026.4.1
- Adjusted the README to address the secure context requirement and the need for HTTPS

Fixes #7132
Closes #7137

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-30 21:45:45 +02:00
Eldred Habert 14258caec9 Allow SQLite to be linked against dynamically (#7057)
This keeps the default behaviour of building SQLite statically,
so as not to break anyone's workflow, while allowing downstream
packagers to link dynamically against the system SQLite (where that is
fine, since shared libraries are the point of package managers).

Note that SQLite is still *not* enabled by default, thanks to the `?` operator.

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2026-04-29 22:59:18 +02:00
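Based on the `Cargo.toml` hunk later in this compare, the feature split looks roughly like this (feature names taken from the diff; comments are paraphrased):

```toml
[features]
# Dynamically link against the system's SQLite (for distro packagers):
sqlite_system = ["diesel/sqlite", "diesel_migrations/sqlite"]
# The previous default behaviour: same as above, plus the bundled C sources
# compiled in, so the binary carries its own statically linked SQLite.
sqlite = ["sqlite_system", "libsqlite3-sys/bundled"]
```

So `cargo build --features sqlite` keeps the old static build, while `--features sqlite_system` links against whatever SQLite the system provides.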
eason cb46fcb948 fix: return Err instead of panic on unknown cipher atype in to_json() (#7068)
`Cipher::to_json()` returns `Result<Value, Error>` but its match arm for
unknown `atype` values called `panic!("Wrong type")` instead of
propagating an error. This means if a cipher with an invalid/unknown type
ends up in the database (via direct DB edits, data migration issues, or
future type additions in the upstream Bitwarden protocol), the entire
server process would crash on the next sync request.

Replace the `panic!` with `err!()` so callers receive a proper `Err` and
can handle or log it gracefully without taking down the server.

Co-authored-by: easonysliu <easonysliu@tencent.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2026-04-29 22:58:50 +02:00
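The pattern can be sketched with a stand-in error type (the real code goes through `Cipher::to_json()` and vaultwarden's `err!()` macro; the type numbers below follow the Bitwarden cipher types):

```rust
// Hypothetical stand-in for vaultwarden's `Error` type; the real code uses
// the `err!()` macro rather than constructing the error directly.
#[derive(Debug, PartialEq)]
struct Error(String);

// Before: the catch-all arm was `_ => panic!("Wrong type")`, which aborted
// the whole server process. After: unknown variants become an `Err` that the
// sync handler can log and surface as a normal API error.
fn cipher_type_name(atype: i32) -> Result<&'static str, Error> {
    match atype {
        1 => Ok("Login"),
        2 => Ok("SecureNote"),
        3 => Ok("Card"),
        4 => Ok("Identity"),
        unknown => Err(Error(format!("Wrong cipher type: {unknown}"))),
    }
}

fn main() {
    assert_eq!(cipher_type_name(1), Ok("Login"));
    // An unknown type from a corrupted row no longer takes the server down.
    assert!(cipher_type_name(99).is_err());
}
```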
Johny Jiménez b89648a136 Replace organization_uuid unwrap with proper error handling (#6936)
The collection update endpoints (post_collections_update and
post_collections_admin) call .unwrap() on cipher.organization_uuid
in four places. If a user-owned cipher without an organization
somehow reaches these code paths, the server would panic.

Extract the organization UUID early with a descriptive error message
instead of relying on .unwrap(), preventing potential panics and
providing a clear API error response.

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2026-04-29 22:58:39 +02:00
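The "extract early instead of `.unwrap()`" shape can be sketched like this (`Cipher` and the message text are simplified stand-ins for the real types):

```rust
// A cipher owned by a user rather than an organization has no org UUID.
struct Cipher {
    organization_uuid: Option<String>,
}

// Extract the UUID once, up front, with a descriptive error instead of
// calling `.unwrap()` in four separate places further down the handler.
fn org_uuid(cipher: &Cipher) -> Result<&str, &'static str> {
    cipher
        .organization_uuid
        .as_deref()
        .ok_or("Cipher is not owned by an organization")
}

fn main() {
    let owned = Cipher { organization_uuid: Some("some-org-uuid".into()) };
    let personal = Cipher { organization_uuid: None };
    assert_eq!(org_uuid(&owned), Ok("some-org-uuid"));
    // A user-owned cipher now yields a clear API error, not a panic.
    assert!(org_uuid(&personal).is_err());
}
```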
Daniel García c3bd1eb565 Fix merge conflict (#7164) 2026-04-29 22:47:42 +02:00
Shocker 8c3c969938 Fix favicon fetching to check all icon links instead of just the first one (#6880)
* Fix favicon fetching to check all icon links instead of just the first one

* revert max icons limit removal

* optimize code

* code formatting
2026-04-29 22:32:48 +02:00
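The idea, sketched with a stand-in `fetch` closure in place of the real HTTP download: instead of taking only the first `<link rel="icon">` candidate, walk all of them (up to the retained max-icons cap) until one actually resolves.

```rust
// Try each discovered icon URL in order, stopping at the first one the
// fetcher accepts. `fetch` stands in for the real download + validation.
fn first_working_icon<'a, F>(links: &[&'a str], max_icons: usize, fetch: F) -> Option<&'a str>
where
    F: Fn(&str) -> bool,
{
    links.iter().take(max_icons).copied().find(|&url| fetch(url))
}

fn main() {
    let links = ["https://example.com/broken.ico", "https://example.com/good.png"];
    // Pretend only the second link resolves to a valid image.
    let icon = first_working_icon(&links, 10, |url| url.ends_with("good.png"));
    assert_eq!(icon, Some("https://example.com/good.png"));
}
```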
Matt Aaron 38a6850b8d Add support for archiving items (#6916)
* Add archiving

* Update Diesel macros and remove unnecessary SUPPORTED_FEATURE_FLAG

* Add IF EXISTS to down.sql migrations

* Rename migration folders, separate logic based on PR threads
2026-04-29 22:29:42 +02:00
Mathijs van Veluw d297e274a3 Several SSO Fixes (#7163)
* Ensure SSO token is only usable on the same client

This commit adds an extra check via cookies to ensure the same browser/client is used to request and provide the SSO token.
Previously, an attacker could provide a custom link and use it to steal data.
While an attacker would still need the Master Password to decrypt anything or execute specific actions, they were able to fetch the encrypted data.

Solved with some help of Claude Code.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Check email-verified on SSO login/create

This commit prevents a possible account takeover via SSO when the provider does not check/validate the email or provide its validated status.
This was checked at other locations, but was skipped here.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Prevent data disclosure via SSO endpoints

This commit prevents some data disclosure and user enumeration by only returning the fake SSO identifier.
Since we do not check the identifier anywhere useful, returning the fake one is just fine.

During an invite to an org, that link contains the correct UUID and will be used for the master password requirements.
For anything else, server admins should set the `SSO_MASTER_PASSWORD_POLICY` env variable.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust admin layout to fix issues when SSO is enabled

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-29 22:25:36 +02:00
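The client-binding check can be sketched as follows. This is an illustration only: the real change persists a `binding_hash` column on `sso_auth` (visible in the migrations later in this compare) and would use a proper cryptographic hash, not `DefaultHasher`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Store a hash of a per-client secret next to the pending SSO token; only
// accept the token when the same secret comes back in the client's cookie.
fn binding_hash(client_secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    client_secret.hash(&mut h);
    h.finish()
}

fn token_valid_for_client(stored_hash: u64, cookie_secret: Option<&str>) -> bool {
    cookie_secret.is_some_and(|s| binding_hash(s) == stored_hash)
}

fn main() {
    let stored = binding_hash("secret-set-when-login-started");
    // Same browser, same cookie: accepted.
    assert!(token_valid_for_client(stored, Some("secret-set-when-login-started")));
    // Token redeemed from a different browser (no cookie): rejected.
    assert!(!token_valid_for_client(stored, None));
    assert!(!token_valid_for_client(stored, Some("attacker-guess")));
}
```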
Mathijs van Veluw a354e57659 Fix Host/IP resolving (#7162)
IPv4 addresses can also be in decimal or hex formats.
These were not checked during the Global IP check, and could bypass it.

We now convert everything to a canonical format before running this check, so these formats are caught as well.

Also updated the `is_global()` function to match Rust's still-unstable version.
And updated the image magic-byte checks to be more precise and filter out any possibly broken or invalid formats.

While at it, also added several checks to ensure these specially formatted IPv4 addresses are still blocked and punycode domains are correctly resolved.

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-29 22:20:59 +02:00
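The normalization idea can be sketched like this: hosts such as `2130706433` (decimal) or `0x7f000001` (hex) are single-number spellings of an IPv4 address that a naive dotted-quad check misses. Converting them to a canonical `Ipv4Addr` first lets the loopback/private/global checks apply uniformly. (This is a simplified illustration of the approach, not vaultwarden's actual resolver code; octal forms are omitted.)

```rust
use std::net::Ipv4Addr;

// Parse dotted-quad, plain decimal, or 0x-prefixed hex spellings of an
// IPv4 address into a single canonical form before any blocklist check.
fn normalize_ipv4(host: &str) -> Option<Ipv4Addr> {
    if let Ok(ip) = host.parse::<Ipv4Addr>() {
        return Some(ip);
    }
    let value = if let Some(hex) = host.strip_prefix("0x") {
        u32::from_str_radix(hex, 16).ok()?
    } else {
        host.parse::<u32>().ok()?
    };
    Some(Ipv4Addr::from(value))
}

fn main() {
    // 0x7f000001 == 127.0.0.1: no longer slips past the non-global check.
    assert!(normalize_ipv4("0x7f000001").unwrap().is_loopback());
    assert_eq!(normalize_ipv4("2130706433"), Some(Ipv4Addr::new(127, 0, 0, 1)));
    assert_eq!(normalize_ipv4("not-an-ip"), None);
}
```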
Mathijs van Veluw 5cc7360816 Update crates and fix a nightly lint (#7161)
Updated all the crates, including two for which a possible CVE was reported.
Updated Typos.

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-29 22:10:26 +02:00
Timshel 62748100f0 Fix hardcoded sso identifier (#7157)
Co-authored-by: Timshel <timshel@users.noreply.github.com>
2026-04-28 19:09:47 +02:00
Daniel fcbdebd6d7 Apply ref_option lint findings (#7143)
Quote from the lint description:
"More flexibility, better memory optimization, and more idiomatic Rust code.

&Option<T> in a function signature breaks encapsulation because the caller must own T and move it into an Option to call with it. When returned, the owner must internally store it as Option<T> in order to return it. At a lower level, &Option<T> points to memory with the presence bit flag plus the T value, whereas Option<&T> is usually optimized to a single pointer, so it may be more optimal."
2026-04-28 18:34:40 +02:00
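The lint in practice, as a minimal sketch: take `Option<&T>` instead of `&Option<T>`, and let callers convert an owned `Option<String>` with `as_deref()` at the call site.

```rust
// Preferred by the `ref_option` lint: `Option<&str>` is usually a single
// pointer, and callers are not forced to own an `Option` just to call this.
fn display_name(name: Option<&str>) -> &str {
    name.unwrap_or("<anonymous>")
}

fn main() {
    let stored: Option<String> = Some("BlackDex".to_string());
    // `as_deref()` turns `&Option<String>` territory into `Option<&str>`.
    assert_eq!(display_name(stored.as_deref()), "BlackDex");
    assert_eq!(display_name(None), "<anonymous>");
}
```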
Daniel 454b8e2a35 Apply duration_suboptimal_units lint findings (#7144)
Quote from lint description:
"Using a smaller unit for a duration that is evenly divisible by a larger unit reduces readability. Readers have to mentally convert values, which can be error-prone and makes the code less clear."
2026-04-28 18:34:15 +02:00
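A minimal before/after of what this lint flags:

```rust
use std::time::Duration;

fn main() {
    // Flagged: a duration evenly divisible by a larger unit, written in the
    // smaller one. The reader has to mentally divide by 1000.
    let suboptimal = Duration::from_millis(120_000);
    // Preferred: same value, immediately readable as two minutes.
    let preferred = Duration::from_secs(120);
    assert_eq!(suboptimal, preferred);
}
```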
Daniel 7883da554e Add DuckDuckGo browser device type (#7147)
- sync with upstream
2026-04-28 18:34:03 +02:00
Stefan Melmuk fd2b6528a9 add new /identity/accounts/prelogin/password (#7156) 2026-04-28 18:33:52 +02:00
Timshel cc57e60886 Dummy identifier need to pass for a guid (#7154)
Co-authored-by: Timshel <timshel@users.noreply.github.com>
2026-04-28 18:33:49 +02:00
Timshel e5681258f0 SSO fallback to UserInfo preferred_username (#7128)
Co-authored-by: Timshel <timshel@users.noreply.github.com>
2026-04-28 18:33:45 +02:00
54 changed files with 988 additions and 332 deletions
+1 -1
@@ -23,4 +23,4 @@ jobs:
# When this version is updated, do not forget to update this in `.pre-commit-config.yaml` too
- name: Spell Check Repo
uses: crate-ci/typos@cf5f1c29a8ac336af8568821ec41919923b05a83 # v1.45.1
uses: crate-ci/typos@7c572958218557a3272c2d6719629443b5cc26fd # v1.45.2
+1 -1
@@ -18,7 +18,7 @@ repos:
# When this version is updated, do not forget to update this in `.github/workflows/typos.yaml` too
- repo: https://github.com/crate-ci/typos
rev: cf5f1c29a8ac336af8568821ec41919923b05a83 # v1.45.1
rev: 7c572958218557a3272c2d6719629443b5cc26fd # v1.45.2
hooks:
- id: typos
+2
@@ -23,4 +23,6 @@ extend-ignore-re = [
# https://github.com/bitwarden/server/blob/dff9f1cf538198819911cf2c20f8cda3307701c5/src/Notifications/HubHelpers.cs#L86
# https://github.com/bitwarden/clients/blob/9612a4ac45063e372a6fbe87eb253c7cb3c588fb/libs/common/src/auth/services/anonymous-hub.service.ts#L45
"AuthRequestResponseRecieved",
# Ignore Punycode/IDN tests
"xn--.+"
]
Generated
+74 -74
@@ -152,9 +152,9 @@ dependencies = [
[[package]]
name = "async-compression"
version = "0.4.41"
version = "0.4.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d0f9ee0f6e02ffd7ad5816e9464499fba7b3effd01123b515c41d1697c43dad1"
checksum = "e79b3f8a79cccc2898f31920fc69f304859b3bd567490f75ebf51ae1c792a9ac"
dependencies = [
"compression-codecs",
"compression-core",
@@ -717,9 +717,9 @@ checksum = "2af50177e190e07a26ab74f8b1efbfe2ef87da2116221318cb1c2e82baf7de06"
[[package]]
name = "base64urlsafedata"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f7f6be94fa637132933fd0a68b9140bcb60e3d46164cb68e82a2bb8d102b3a"
checksum = "b08e33815c87d8cadcddb1e74ac307368a3751fbe40c961538afa21a1899f21c"
dependencies = [
"base64 0.21.7",
"pastey 0.1.1",
@@ -903,9 +903,9 @@ dependencies = [
[[package]]
name = "cc"
version = "1.2.60"
version = "1.2.61"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43c5703da9466b66a946814e1adf53ea2c90f10063b86290cc9eb67ce3478a20"
checksum = "d16d90359e986641506914ba71350897565610e87ce0ad9e6f28569db3dd5c6d"
dependencies = [
"find-msvc-tools",
"jobserver",
@@ -994,9 +994,9 @@ dependencies = [
[[package]]
name = "compression-codecs"
version = "0.4.37"
version = "0.4.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb7b51a7d9c967fc26773061ba86150f19c50c0d65c887cb1fbe295fd16619b7"
checksum = "ce2548391e9c1929c21bf6aa2680af86fe4c1b33e6cea9ac1cfeec0bd11218cf"
dependencies = [
"brotli",
"compression-core",
@@ -1008,9 +1008,9 @@ dependencies = [
[[package]]
name = "compression-core"
version = "0.4.31"
version = "0.4.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75984efb6ed102a0d42db99afb6c1948f0380d1d91808d5529916e6c08b49d8d"
checksum = "cc14f565cf027a105f7a44ccf9e5b424348421a1d8952a8fc9d499d313107789"
[[package]]
name = "concurrent-queue"
@@ -1387,9 +1387,9 @@ dependencies = [
[[package]]
name = "data-encoding"
version = "2.10.0"
version = "2.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7a1e2f27636f116493b8b860f5546edb47c8d8f8ea73e1d2a20be88e28d1fea"
checksum = "a4ae5f15dda3c708c0ade84bfee31ccab44a3da4f88015ed22f63732abe300c8"
[[package]]
name = "data-url"
@@ -1521,9 +1521,9 @@ dependencies = [
[[package]]
name = "diesel"
version = "2.3.7"
version = "2.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4ae09a41a4b89f94ec1e053623da8340d996bc32c6517d325a9daad9b239358"
checksum = "9940fb8467a0a06312218ed384185cb8536aa10d8ec017d0ce7fad2c1bd882d5"
dependencies = [
"bigdecimal",
"bitflags",
@@ -1558,9 +1558,9 @@ dependencies = [
[[package]]
name = "diesel_derives"
version = "2.3.7"
version = "2.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "47618bf0fac06bb670c036e48404c26a865e6a71af4114dfd97dfe89936e404e"
checksum = "d1817b7f4279b947fc4cafddec12b0e5f8727141706561ce3ac94a60bddd1cf5"
dependencies = [
"diesel_table_macro_syntax",
"dsl_auto_type",
@@ -1571,9 +1571,9 @@ dependencies = [
[[package]]
name = "diesel_migrations"
version = "2.3.1"
version = "2.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "745fd255645f0f1135f9ec55c7b00e0882192af9683ab4731e4bba3da82b8f9c"
checksum = "28d0f4a98124ba6d4ca75da535f65984badec16a003b6e2f94a01e31a79490b8"
dependencies = [
"diesel",
"migrations_internals",
@@ -2262,9 +2262,9 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "hickory-net"
version = "0.26.0"
version = "0.26.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c61c8db47fae51ba9f8f2a2748bd87542acfbe22f2ec9cf9c8ec72d1ee6e9a6"
checksum = "e2295ed2f9c31e471e1428a8f88a3f0e1f4b27c15049592138d1eebe9c35b183"
dependencies = [
"async-trait",
"cfg-if",
@@ -2286,9 +2286,9 @@ dependencies = [
[[package]]
name = "hickory-proto"
version = "0.26.0"
version = "0.26.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a916d0494600d99ecb15aadfab677ad97c4de559e8f1af0c129353a733ac1fcc"
checksum = "0bab31817bfb44672a252e97fe81cd0c18d1b2cf892108922f6818820df8c643"
dependencies = [
"data-encoding",
"idna",
@@ -2306,9 +2306,9 @@ dependencies = [
[[package]]
name = "hickory-resolver"
version = "0.26.0"
version = "0.26.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a10bd64d950b4d38ca21e25c8ae230712e4955fb8290cfcb29a5e5dc6017e544"
checksum = "f0d58d28879ceecde6607729660c2667a081ccdc082e082675042793960f178c"
dependencies = [
"cfg-if",
"futures-util",
@@ -2455,9 +2455,9 @@ checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "hybrid-array"
version = "0.4.10"
version = "0.4.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3944cf8cf766b40e2a1a333ee5e9b563f854d5fa49d6a8ca2764e97c6eddb214"
checksum = "08d46837a0ed51fe95bd3b05de33cd64a1ee88fc797477ca48446872504507c5"
dependencies = [
"typenum",
]
@@ -2515,7 +2515,7 @@ dependencies = [
"http 1.4.0",
"hyper 1.9.0",
"hyper-util",
"rustls 0.23.38",
"rustls 0.23.40",
"rustls-native-certs",
"tokio",
"tokio-rustls 0.26.4",
@@ -2679,9 +2679,9 @@ dependencies = [
[[package]]
name = "idna_adapter"
version = "1.2.1"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344"
checksum = "cb68373c0d6620ef8105e855e7745e18b0d00d3bdb07fb532e434244cdb9a714"
dependencies = [
"icu_normalizer",
"icu_properties",
@@ -2792,9 +2792,9 @@ checksum = "47f142fe24a9c9944451e8349de0a56af5f3e7226dc46f3ed4d4ecc0b85af75e"
[[package]]
name = "jiff"
version = "0.2.23"
version = "0.2.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a3546dc96b6d42c5f24902af9e2538e82e39ad350b0c766eb3fbf2d8f3d8359"
checksum = "f00b5dbd620d61dfdcb6007c9c1f6054ebd75319f163d886a9055cec1155073d"
dependencies = [
"jiff-static",
"jiff-tzdb-platform",
@@ -2807,9 +2807,9 @@ dependencies = [
[[package]]
name = "jiff-static"
version = "0.2.23"
version = "0.2.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a8c8b344124222efd714b73bb41f8b5120b27a7cc1c75593a6ff768d9d05aa4"
checksum = "e000de030ff8022ea1da3f466fbb0f3a809f5e51ed31f6dd931c35181ad8e6d7"
dependencies = [
"proc-macro2",
"quote",
@@ -2903,9 +2903,9 @@ dependencies = [
[[package]]
name = "js-sys"
version = "0.3.95"
version = "0.3.97"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2964e92d1d9dc3364cae4d718d93f227e3abb088e747d92e0395bfdedf1c12ca"
checksum = "a1840c94c045fbcf8ba2812c95db44499f7c64910a912551aaaa541decebcacf"
dependencies = [
"cfg-if",
"futures-util",
@@ -3005,7 +3005,7 @@ dependencies = [
"nom 8.0.0",
"percent-encoding",
"quoted_printable",
"rustls 0.23.38",
"rustls 0.23.40",
"rustls-native-certs",
"serde",
"socket2 0.6.3",
@@ -3017,9 +3017,9 @@ dependencies = [
[[package]]
name = "libc"
version = "0.2.185"
version = "0.2.186"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "52ff2c0fe9bc6cb6b14a0592c2ff4fa9ceb83eea9db979b0487cd054946a2b8f"
checksum = "68ab91017fe16c622486840e4c83c9a37afeff978bd239b5293d61ece587de66"
[[package]]
name = "libm"
@@ -3038,9 +3038,9 @@ dependencies = [
[[package]]
name = "libsqlite3-sys"
version = "0.36.0"
version = "0.37.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95b4103cffefa72eb8428cb6b47d6627161e51c2739fc5e3b734584157bc642a"
checksum = "b1f111c8c41e7c61a49cd34e44c7619462967221a6443b0ec299e0ac30cfb9b1"
dependencies = [
"cc",
"pkg-config",
@@ -3647,9 +3647,9 @@ checksum = "35fb2e5f958ec131621fdd531e9fc186ed768cbe395337403ae56c17a74c68ec"
[[package]]
name = "pastey"
version = "0.2.1"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b867cad97c0791bbd3aaa6472142568c6c9e8f71937e98379f584cfb0cf35bec"
checksum = "c5a797f0e07bdf071d15742978fc3128ec6c22891c31a3a931513263904c982a"
[[package]]
name = "pbkdf2"
@@ -4070,7 +4070,7 @@ dependencies = [
"quinn-proto",
"quinn-udp",
"rustc-hash",
"rustls 0.23.38",
"rustls 0.23.40",
"socket2 0.6.3",
"thiserror 2.0.18",
"tokio",
@@ -4090,7 +4090,7 @@ dependencies = [
"rand 0.9.4",
"ring",
"rustc-hash",
"rustls 0.23.38",
"rustls 0.23.40",
"rustls-pki-types",
"slab",
"thiserror 2.0.18",
@@ -4371,7 +4371,7 @@ dependencies = [
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls 0.23.38",
"rustls 0.23.40",
"rustls-native-certs",
"rustls-pki-types",
"serde",
@@ -4537,13 +4537,13 @@ dependencies = [
[[package]]
name = "rpassword"
version = "7.4.0"
version = "7.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66d4c8b64f049c6721ec8ccec37ddfc3d641c4a7fca57e8f2a89de509c73df39"
checksum = "2501c67132bd19c3005b0111fba298907ef002c8c1cf68e25634707e38bf66fe"
dependencies = [
"libc",
"rtoolbox",
"windows-sys 0.59.0",
"windows-sys 0.61.2",
]
[[package]]
@@ -4648,9 +4648,9 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.38"
version = "0.23.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69f9466fb2c14ea04357e91413efb882e2a6d4a406e625449bc0a5d360d53a21"
checksum = "ef86cd5876211988985292b91c96a8f2d298df24e75989a43a3c73f2d4d8168b"
dependencies = [
"log",
"once_cell",
@@ -4684,9 +4684,9 @@ dependencies = [
[[package]]
name = "rustls-pki-types"
version = "1.14.0"
version = "1.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd"
checksum = "30a7197ae7eb376e574fe940d068c30fe0462554a3ddbe4eca7838e049c937a9"
dependencies = [
"web-time",
"zeroize",
@@ -5493,7 +5493,7 @@ version = "0.26.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61"
dependencies = [
"rustls 0.23.38",
"rustls 0.23.40",
"tokio",
]
@@ -5911,7 +5911,7 @@ dependencies = [
"opendal",
"openidconnect",
"openssl",
"pastey 0.2.1",
"pastey 0.2.2",
"percent-encoding",
"pico-args",
"rand 0.10.1",
@@ -6006,9 +6006,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen"
version = "0.2.118"
version = "0.2.120"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf938a0bacb0469e83c1e148908bd7d5a6010354cf4fb73279b7447422e3a89"
checksum = "df52b6d9b87e0c74c9edfa1eb2d9bf85e5d63515474513aa50fa181b3c4f5db1"
dependencies = [
"cfg-if",
"once_cell",
@@ -6019,9 +6019,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.68"
version = "0.4.70"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f371d383f2fb139252e0bfac3b81b265689bf45b6874af544ffa4c975ac1ebf8"
checksum = "af934872acec734c2d80e6617bbb5ff4f12b052dd8e6332b0817bce889516084"
dependencies = [
"js-sys",
"wasm-bindgen",
@@ -6029,9 +6029,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.118"
version = "0.2.120"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eeff24f84126c0ec2db7a449f0c2ec963c6a49efe0698c4242929da037ca28ed"
checksum = "78b1041f495fb322e64aca85f5756b2172e35cd459376e67f2a6c9dffcedb103"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -6039,9 +6039,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.118"
version = "0.2.120"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d08065faf983b2b80a79fd87d8254c409281cf7de75fc4b773019824196c904"
checksum = "9dcd0ff20416988a18ac686d4d4d0f6aae9ebf08a389ff5d29012b05af2a1b41"
dependencies = [
"bumpalo",
"proc-macro2",
@@ -6052,9 +6052,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.118"
version = "0.2.120"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fd04d9e306f1907bd13c6361b5c6bfc7b3b3c095ed3f8a9246390f8dbdee129"
checksum = "49757b3c82ebf16c57d69365a142940b384176c24df52a087fb748e2085359ea"
dependencies = [
"unicode-ident",
]
@@ -6108,9 +6108,9 @@ dependencies = [
[[package]]
name = "web-sys"
version = "0.3.95"
version = "0.3.97"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f2dfbb17949fa2088e5d39408c48368947b86f7834484e87b73de55bc14d97d"
checksum = "2eadbac71025cd7b0834f20d1fe8472e8495821b4e9801eb0a60bd1f19827602"
dependencies = [
"js-sys",
"wasm-bindgen",
@@ -6128,9 +6128,9 @@ dependencies = [
[[package]]
name = "webauthn-attestation-ca"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fafcf13f7dc1fb292ed4aea22cdd3757c285d7559e9748950ee390249da4da6b"
checksum = "6475c0bbd1a3f04afaa3e98880408c5be61680c5e6bd3c6f8c250990d5d3e18e"
dependencies = [
"base64urlsafedata",
"openssl",
@@ -6142,9 +6142,9 @@ dependencies = [
[[package]]
name = "webauthn-rs"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b24d082d3360258fefb6ffe56123beef7d6868c765c779f97b7a2fcf06727f8"
checksum = "6c548915e0e92ee946bbf2aecf01ea21bef53d974b0793cc6732ba81a03fc422"
dependencies = [
"base64urlsafedata",
"serde",
@@ -6156,9 +6156,9 @@ dependencies = [
[[package]]
name = "webauthn-rs-core"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "15784340a24c170ce60567282fb956a0938742dbfbf9eff5df793a686a009b8b"
checksum = "296d2d501feb715d80b8e186fb88bab1073bca17f460303a1013d17b673bea6a"
dependencies = [
"base64 0.21.7",
"base64urlsafedata",
@@ -6183,9 +6183,9 @@ dependencies = [
[[package]]
name = "webauthn-rs-proto"
version = "0.5.4"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "16a1fb2580ce73baa42d3011a24de2ceab0d428de1879ece06e02e8c416e497c"
checksum = "c37393beac9c1ed1ca6dbb30b1e01783fb316ab3a45d90ecd48c99052dd7ef1e"
dependencies = [
"base64 0.21.7",
"base64urlsafedata",
+17 -13
@@ -23,15 +23,17 @@ publish.workspace = true
[features]
default = [
# "sqlite",
# "sqlite" or "sqlite_system",
# "mysql",
# "postgresql",
]
# Empty to keep compatibility, prefer to set USE_SYSLOG=true
enable_syslog = []
# Please enable at least one of these DB backends.
mysql = ["diesel/mysql", "diesel_migrations/mysql"]
postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "dep:libsqlite3-sys"]
sqlite_system = ["diesel/sqlite", "diesel_migrations/sqlite"]
sqlite = ["sqlite_system", "libsqlite3-sys/bundled"] # Alternative to the above, statically linked SQLite into the binary instead of dynamically.
# Enable to use a vendored and statically linked openssl
vendored_openssl = ["openssl/vendored"]
# Enable MiMalloc memory allocator to replace the default malloc
@@ -88,14 +90,14 @@ serde_json = "1.0.149"
# A safe, extensible ORM and Query builder
# Currently pinned diesel to v2.3.3 as newer version break MySQL/MariaDB compatibility
diesel = { version = "2.3.7", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.3.1"
diesel = { version = "2.3.9", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.3.2"
derive_more = { version = "2.1.1", features = ["from", "into", "as_ref", "deref", "display"] }
diesel-derive-newtype = "2.1.2"
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.36.0", features = ["bundled"], optional = true }
# SQLite, statically bundled unless the `sqlite_system` feature is enabled
libsqlite3-sys = { version = "0.37.0", optional = true }
# Crypto-related libraries
rand = "0.10.1"
@@ -114,7 +116,7 @@ time = "0.3.47"
job_scheduler_ng = "2.4.0"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.10.0"
data-encoding = "2.11.0"
# JWT library
jsonwebtoken = { version = "10.3.0", features = ["use_pem", "rust_crypto"], default-features = false }
@@ -128,9 +130,9 @@ yubico = { package = "yubico_ng", version = "0.14.1", features = ["online-tokio"
# WebAuthn libraries
# danger-allow-state-serialisation is needed to save the state in the db
# danger-credential-internals is needed to support U2F to Webauthn migration
webauthn-rs = { version = "0.5.4", features = ["danger-allow-state-serialisation", "danger-credential-internals"] }
webauthn-rs-proto = "0.5.4"
webauthn-rs-core = "0.5.4"
webauthn-rs = { version = "0.5.5", features = ["danger-allow-state-serialisation", "danger-credential-internals"] }
webauthn-rs-proto = "0.5.5"
webauthn-rs-core = "0.5.5"
# Handling of URL's for WebAuthn and favicons
url = "2.5.8"
@@ -145,7 +147,7 @@ handlebars = { version = "6.4.0", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.28", features = ["rustls-tls", "rustls-tls-native-roots", "stream", "json", "deflate", "gzip", "brotli", "zstd", "socks", "cookies", "charset", "http2", "system-proxy"], default-features = false}
hickory-resolver = "0.26.0"
hickory-resolver = "0.26.1"
# Favicon extraction libraries
html5gum = "0.8.3"
@@ -168,7 +170,7 @@ openssl = "0.10.78"
pico-args = "0.5.0"
# Macro ident concatenation
pastey = "0.2.1"
pastey = "0.2.2"
governor = "0.10.4"
# OIDC for SSO
@@ -188,7 +190,7 @@ which = "8.0.2"
argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.4.0"
rpassword = "7.5.1"
# Loading a dynamic CSS Stylesheet
grass_compiler = { version = "0.13.4", default-features = false }
@@ -301,6 +303,7 @@ branches_sharing_code = "deny"
case_sensitive_file_extension_comparisons = "deny"
cast_lossless = "deny"
clone_on_ref_ptr = "deny"
duration_suboptimal_units = "deny"
equatable_if_let = "deny"
excessive_precision = "deny"
filter_map_next = "deny"
@@ -322,6 +325,7 @@ needless_continue = "deny"
needless_lifetimes = "deny"
option_option = "deny"
redundant_clone = "deny"
ref_option = "deny"
string_add_assign = "deny"
unnecessary_join = "deny"
unnecessary_self_imports = "deny"
+3 -2
@@ -59,8 +59,9 @@ A nearly complete implementation of the Bitwarden Client API is provided, includ
## Usage
> [!IMPORTANT]
> The web-vault requires the use a secure context for the [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API).
> That means it will only work via `http://localhost:8000` (using the port from the example below) or if you [enable HTTPS](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS).
> The web-vault requires the use of HTTPS and a secure context for the [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API). <br>
> That means it will only work if you [enable HTTPS](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS). <br>
> We also suggest to use a [reverse proxy](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples).
The recommended way to install and use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
See [which container image to use](https://github.com/dani-garcia/vaultwarden/wiki/Which-container-image-to-use) for an explanation of the provided tags.
+6 -6
@@ -2,21 +2,21 @@ use std::env;
use std::process::Command;
fn main() {
// This allow using #[cfg(sqlite)] instead of #[cfg(feature = "sqlite")], which helps when trying to add them through macros
#[cfg(feature = "sqlite")]
// These allow using e.g. #[cfg(mysql)] instead of #[cfg(feature = "mysql")], which helps when trying to add them through macros
#[cfg(feature = "sqlite_system")] // The `sqlite` feature implies this one.
println!("cargo:rustc-cfg=sqlite");
#[cfg(feature = "mysql")]
println!("cargo:rustc-cfg=mysql");
#[cfg(feature = "postgresql")]
println!("cargo:rustc-cfg=postgresql");
#[cfg(feature = "s3")]
println!("cargo:rustc-cfg=s3");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
#[cfg(not(any(feature = "sqlite_system", feature = "mysql", feature = "postgresql")))]
compile_error!(
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
#[cfg(feature = "s3")]
println!("cargo:rustc-cfg=s3");
// Use check-cfg to let cargo know which cfg's we define,
// and avoid warnings when they are used in the code.
println!("cargo::rustc-check-cfg=cfg(sqlite)");
+2 -2
@@ -1,6 +1,6 @@
---
vault_version: "v2026.3.1"
vault_image_digest: "sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767"
vault_version: "v2026.4.1"
vault_image_digest: "sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe"
# Cross Compile Docker Helper Scripts v1.9.0
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
+6 -6
@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2026.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.3.1
# [docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767]
# $ docker pull docker.io/vaultwarden/web-vault:v2026.4.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.4.1
# [docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767
# [docker.io/vaultwarden/web-vault:v2026.3.1]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe
# [docker.io/vaultwarden/web-vault:v2026.4.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767 AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe AS vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64 and linux/arm64
+6 -6
@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2026.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.3.1
# [docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767]
# $ docker pull docker.io/vaultwarden/web-vault:v2026.4.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.4.1
# [docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767
# [docker.io/vaultwarden/web-vault:v2026.3.1]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe
# [docker.io/vaultwarden/web-vault:v2026.4.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:c1b1f212333f95bff4ef8d00e8e3589c4ae8eda018691f28f8bddc7e971dd767 AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ca2a4251c4e63c9ad428262b4dd452789a1b9f6fce71da351e93dceed0d2edbe AS vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -0,0 +1 @@
DROP TABLE IF EXISTS archives;
@@ -0,0 +1,10 @@
DROP TABLE IF EXISTS archives;
CREATE TABLE archives (
user_uuid CHAR(36) NOT NULL,
cipher_uuid CHAR(36) NOT NULL,
archived_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (user_uuid, cipher_uuid),
FOREIGN KEY (user_uuid) REFERENCES users (uuid) ON DELETE CASCADE,
FOREIGN KEY (cipher_uuid) REFERENCES ciphers (uuid) ON DELETE CASCADE
);
@@ -0,0 +1 @@
ALTER TABLE sso_auth DROP COLUMN binding_hash;
@@ -0,0 +1 @@
ALTER TABLE sso_auth ADD COLUMN binding_hash TEXT;
@@ -0,0 +1 @@
DROP TABLE IF EXISTS archives;
@@ -0,0 +1,8 @@
DROP TABLE IF EXISTS archives;
CREATE TABLE archives (
user_uuid CHAR(36) NOT NULL REFERENCES users (uuid) ON DELETE CASCADE,
cipher_uuid CHAR(36) NOT NULL REFERENCES ciphers (uuid) ON DELETE CASCADE,
archived_at TIMESTAMP NOT NULL DEFAULT now(),
PRIMARY KEY (user_uuid, cipher_uuid)
);
@@ -0,0 +1 @@
ALTER TABLE sso_auth DROP COLUMN binding_hash;
@@ -0,0 +1 @@
ALTER TABLE sso_auth ADD COLUMN binding_hash TEXT;
@@ -0,0 +1 @@
DROP TABLE IF EXISTS archives;
@@ -0,0 +1,8 @@
DROP TABLE IF EXISTS archives;
CREATE TABLE archives (
user_uuid CHAR(36) NOT NULL REFERENCES users (uuid) ON DELETE CASCADE,
cipher_uuid CHAR(36) NOT NULL REFERENCES ciphers (uuid) ON DELETE CASCADE,
archived_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (user_uuid, cipher_uuid)
);
@@ -0,0 +1 @@
ALTER TABLE sso_auth DROP COLUMN binding_hash;
@@ -0,0 +1 @@
ALTER TABLE sso_auth ADD COLUMN binding_hash TEXT;
+1 -1
@@ -469,7 +469,7 @@ async fn deauth_user(user_id: UserId, _token: AdminToken, conn: DbConn, nt: Noti
if CONFIG.push_enabled() {
for device in Device::find_push_devices_by_user(&user.uuid, &conn).await {
match unregister_push_device(&device.push_uuid).await {
match unregister_push_device(device.push_uuid.as_ref()).await {
Ok(r) => r,
Err(e) => error!("Unable to unregister devices from Bitwarden server: {e}"),
};
+9 -9
@@ -137,7 +137,7 @@ struct KeysData {
}
/// Trims whitespace from password hints, and converts blank password hints to `None`.
fn clean_password_hint(password_hint: &Option<String>) -> Option<String> {
fn clean_password_hint(password_hint: Option<&String>) -> Option<String> {
match password_hint {
None => None,
Some(h) => match h.trim() {
@@ -147,7 +147,7 @@ fn clean_password_hint(password_hint: &Option<String>) -> Option<String> {
}
}
fn enforce_password_hint_setting(password_hint: &Option<String>) -> EmptyResult {
fn enforce_password_hint_setting(password_hint: Option<&String>) -> EmptyResult {
if password_hint.is_some() && !CONFIG.password_hints_allowed() {
err!("Password hints have been disabled by the administrator. Remove the hint and try again.");
}
@@ -245,8 +245,8 @@ pub async fn _register(data: Json<RegisterData>, email_verification: bool, conn:
// Check against the password hint setting here so if it fails, the user
// can retry without losing their invitation below.
let password_hint = clean_password_hint(&data.master_password_hint);
enforce_password_hint_setting(&password_hint)?;
let password_hint = clean_password_hint(data.master_password_hint.as_ref());
enforce_password_hint_setting(password_hint.as_ref())?;
let mut user = match User::find_by_mail(&email, &conn).await {
Some(user) => {
@@ -353,8 +353,8 @@ async fn post_set_password(data: Json<SetPasswordData>, headers: Headers, conn:
// Check against the password hint setting here so if it fails,
// the user can retry without losing their invitation below.
let password_hint = clean_password_hint(&data.master_password_hint);
enforce_password_hint_setting(&password_hint)?;
let password_hint = clean_password_hint(data.master_password_hint.as_ref());
enforce_password_hint_setting(password_hint.as_ref())?;
set_kdf_data(&mut user, &data.kdf)?;
@@ -515,8 +515,8 @@ async fn post_password(data: Json<ChangePassData>, headers: Headers, conn: DbCon
err!("Invalid password")
}
user.password_hint = clean_password_hint(&data.master_password_hint);
enforce_password_hint_setting(&user.password_hint)?;
user.password_hint = clean_password_hint(data.master_password_hint.as_ref());
enforce_password_hint_setting(user.password_hint.as_ref())?;
log_user_event(EventType::UserChangedPassword as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &conn)
.await;
@@ -1438,7 +1438,7 @@ async fn put_clear_device_token(device_id: DeviceId, conn: DbConn) -> EmptyResul
if let Some(device) = Device::find_by_uuid(&device_id, &conn).await {
Device::clear_push_token_by_uuid(&device_id, &conn).await?;
unregister_push_device(&device.push_uuid).await?;
unregister_push_device(device.push_uuid.as_ref()).await?;
}
Ok(())
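The `&Option<String>` to `Option<&String>` refactor above means callers now pass `data.master_password_hint.as_ref()` instead of borrowing the whole `Option`. A minimal std-only sketch of the new signature (the trim logic for the elided match arm is assumed from context, not copied verbatim):

```rust
fn clean_password_hint(password_hint: Option<&String>) -> Option<String> {
    match password_hint {
        None => None,
        Some(h) => match h.trim() {
            "" => None, // blank hints become None
            ht => Some(ht.to_string()),
        },
    }
}

fn main() {
    let master_password_hint: Option<String> = Some("  my hint  ".to_string());
    // Callers use `as_ref()` to go from `&Option<String>` to `Option<&String>`:
    assert_eq!(
        clean_password_hint(master_password_hint.as_ref()),
        Some("my hint".to_string())
    );
    assert_eq!(clean_password_hint(None), None);
}
```

The upside of `Option<&String>` (or `Option<&str>`) is that callers with an owned value, a reference, or no value at all can all produce the argument without cloning.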
+189 -16
@@ -19,9 +19,9 @@ use crate::{
crypto,
db::{
models::{
Attachment, AttachmentId, Cipher, CipherId, Collection, CollectionCipher, CollectionGroup, CollectionId,
CollectionUser, EventType, Favorite, Folder, FolderCipher, FolderId, Group, Membership, MembershipType,
OrgPolicy, OrgPolicyType, OrganizationId, RepromptType, Send, UserId,
Archive, Attachment, AttachmentId, Cipher, CipherId, Collection, CollectionCipher, CollectionGroup,
CollectionId, CollectionUser, EventType, Favorite, Folder, FolderCipher, FolderId, Group, Membership,
MembershipType, OrgPolicy, OrgPolicyType, OrganizationId, RepromptType, Send, UserId,
},
DbConn, DbPool,
},
@@ -96,6 +96,10 @@ pub fn routes() -> Vec<Route> {
post_collections_update,
post_collections_admin,
put_collections_admin,
archive_cipher_put,
archive_cipher_selected,
unarchive_cipher_put,
unarchive_cipher_selected,
]
}
@@ -293,6 +297,7 @@ pub struct CipherData {
// when using older client versions, or if the operation doesn't involve
// updating an existing cipher.
last_known_revision_date: Option<String>,
archived_date: Option<String>,
}
#[derive(Debug, Deserialize)]
@@ -534,6 +539,13 @@ pub async fn update_cipher_from_data(
cipher.move_to_folder(data.folder_id, &headers.user.uuid, conn).await?;
cipher.set_favorite(data.favorite, &headers.user.uuid, conn).await?;
if let Some(dt_str) = data.archived_date {
match NaiveDateTime::parse_from_str(&dt_str, "%+") {
Ok(dt) => cipher.set_archived_at(dt, &headers.user.uuid, conn).await?,
Err(err) => warn!("Error parsing ArchivedDate '{dt_str}': {err}"),
}
}
if ut != UpdateType::None {
// Only log events for organizational ciphers
if let Some(org_id) = &cipher.organization_uuid {
@@ -630,7 +642,7 @@ async fn post_ciphers_import(data: Json<ImportData>, headers: Headers, conn: DbC
let mut user = headers.user;
user.update_revision(&conn).await?;
nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncVault, &user, headers.device.push_uuid.as_ref(), &conn).await;
Ok(())
}
@@ -802,12 +814,16 @@ async fn post_collections_update(
err!("Collection cannot be changed")
}
let Some(ref org_uuid) = cipher.organization_uuid else {
err!("Cipher is not owned by an organization")
};
let posted_collections = HashSet::<CollectionId>::from_iter(data.collection_ids);
let current_collections =
HashSet::<CollectionId>::from_iter(cipher.get_collections(headers.user.uuid.clone(), &conn).await);
for collection in posted_collections.symmetric_difference(&current_collections) {
match Collection::find_by_uuid_and_org(collection, cipher.organization_uuid.as_ref().unwrap(), &conn).await {
match Collection::find_by_uuid_and_org(collection, org_uuid, &conn).await {
None => err!("Invalid collection ID provided"),
Some(collection) => {
if collection.is_writable_by_user(&headers.user.uuid, &conn).await {
@@ -838,7 +854,7 @@ async fn post_collections_update(
log_event(
EventType::CipherUpdatedCollections as i32,
&cipher.uuid,
&cipher.organization_uuid.clone().unwrap(),
org_uuid,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
@@ -878,12 +894,16 @@ async fn post_collections_admin(
err!("Collection cannot be changed")
}
let Some(ref org_uuid) = cipher.organization_uuid else {
err!("Cipher is not owned by an organization")
};
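The `let Some(ref org_uuid) = … else { err!(…) }` binding replaces the later `unwrap()` calls, so a cipher without an organization now yields an error response instead of a panic. A std-only sketch of the same shape (this `Cipher` struct is a stand-in, not the real model):

```rust
struct Cipher {
    organization_uuid: Option<String>,
}

fn org_uuid(cipher: &Cipher) -> Result<&str, &'static str> {
    // let-else: bind on Some, or bail out early on None.
    let Some(ref org_uuid) = cipher.organization_uuid else {
        return Err("Cipher is not owned by an organization");
    };
    Ok(org_uuid)
}

fn main() {
    let owned = Cipher { organization_uuid: Some("org-1".to_string()) };
    let personal = Cipher { organization_uuid: None };
    assert_eq!(org_uuid(&owned), Ok("org-1"));
    assert!(org_uuid(&personal).is_err());
}
```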
let posted_collections = HashSet::<CollectionId>::from_iter(data.collection_ids);
let current_collections =
HashSet::<CollectionId>::from_iter(cipher.get_admin_collections(headers.user.uuid.clone(), &conn).await);
for collection in posted_collections.symmetric_difference(&current_collections) {
match Collection::find_by_uuid_and_org(collection, cipher.organization_uuid.as_ref().unwrap(), &conn).await {
match Collection::find_by_uuid_and_org(collection, org_uuid, &conn).await {
None => err!("Invalid collection ID provided"),
Some(collection) => {
if collection.is_writable_by_user(&headers.user.uuid, &conn).await {
@@ -914,7 +934,7 @@ async fn post_collections_admin(
log_event(
EventType::CipherUpdatedCollections as i32,
&cipher.uuid,
&cipher.organization_uuid.unwrap(),
org_uuid,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
@@ -1005,7 +1025,7 @@ async fn put_cipher_share_selected(
}
// Multi share actions do not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), &conn).await;
Ok(())
}
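`symmetric_difference` is what lets the collection-update handlers visit only the collections whose membership actually changed: those posted but not current (to link) plus those current but no longer posted (to unlink). A std-only illustration:

```rust
use std::collections::HashSet;

/// Collections present in exactly one of the two sets:
/// those to link plus those to unlink.
fn changed_collections<'a>(
    posted: &HashSet<&'a str>,
    current: &HashSet<&'a str>,
) -> HashSet<&'a str> {
    posted.symmetric_difference(current).copied().collect()
}

fn main() {
    let posted = HashSet::from(["col-a", "col-b"]);
    let current = HashSet::from(["col-b", "col-c"]);
    let changed = changed_collections(&posted, &current);
    // "col-a" must be linked, "col-c" unlinked; "col-b" is untouched.
    assert!(changed.contains("col-a") && changed.contains("col-c"));
    assert!(!changed.contains("col-b"));
}
```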
@@ -1618,7 +1638,7 @@ async fn move_cipher_selected(
.await;
} else {
// Multi move actions do not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), &conn).await;
}
if cipher_count != accessible_ciphers_count {
@@ -1670,7 +1690,7 @@ async fn purge_org_vault(
match Membership::find_confirmed_by_user_and_org(&user.uuid, &organization.org_id, &conn).await {
Some(member) if member.atype == MembershipType::Owner => {
Cipher::delete_all_by_organization(&organization.org_id, &conn).await?;
nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncVault, &user, headers.device.push_uuid.as_ref(), &conn).await;
log_event(
EventType::OrganizationPurgedVault as i32,
@@ -1710,11 +1730,41 @@ async fn purge_personal_vault(
}
user.update_revision(&conn).await?;
nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncVault, &user, headers.device.push_uuid.as_ref(), &conn).await;
Ok(())
}
#[put("/ciphers/<cipher_id>/archive")]
async fn archive_cipher_put(cipher_id: CipherId, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
archive_cipher(&cipher_id, &headers, false, &conn, &nt).await
}
#[put("/ciphers/archive", data = "<data>")]
async fn archive_cipher_selected(
data: Json<CipherIdsData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
archive_multiple_ciphers(data, &headers, &conn, &nt).await
}
#[put("/ciphers/<cipher_id>/unarchive")]
async fn unarchive_cipher_put(cipher_id: CipherId, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
unarchive_cipher(&cipher_id, &headers, false, &conn, &nt).await
}
#[put("/ciphers/unarchive", data = "<data>")]
async fn unarchive_cipher_selected(
data: Json<CipherIdsData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
unarchive_multiple_ciphers(data, &headers, &conn, &nt).await
}
#[derive(PartialEq)]
pub enum CipherDeleteOptions {
SoftSingle,
@@ -1805,7 +1855,7 @@ async fn _delete_multiple_ciphers(
}
// Multi delete actions do not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), &conn).await;
Ok(())
}
@@ -1873,7 +1923,7 @@ async fn _restore_multiple_ciphers(
}
// Multi restore actions do not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, &headers.device.push_uuid, conn).await;
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), conn).await;
Ok(Json(json!({
"data": ciphers,
@@ -1933,6 +1983,122 @@ async fn _delete_cipher_attachment_by_id(
Ok(Json(json!({"cipher":cipher_json})))
}
async fn archive_cipher(
cipher_id: &CipherId,
headers: &Headers,
multi_archive: bool,
conn: &DbConn,
nt: &Notify<'_>,
) -> JsonResult {
let Some(cipher) = Cipher::find_by_uuid(cipher_id, conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_accessible_to_user(&headers.user.uuid, conn).await {
err!("Cipher is not accessible for the current user")
}
cipher.set_archived_at(Utc::now().naive_utc(), &headers.user.uuid, conn).await?;
if !multi_archive {
nt.send_cipher_update(
UpdateType::SyncCipherUpdate,
&cipher,
&cipher.update_users_revision(conn).await,
&headers.device,
None,
conn,
)
.await;
}
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
}
async fn unarchive_cipher(
cipher_id: &CipherId,
headers: &Headers,
multi_unarchive: bool,
conn: &DbConn,
nt: &Notify<'_>,
) -> JsonResult {
let Some(cipher) = Cipher::find_by_uuid(cipher_id, conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_accessible_to_user(&headers.user.uuid, conn).await {
err!("Cipher is not accessible for the current user")
}
cipher.unarchive(&headers.user.uuid, conn).await?;
if !multi_unarchive {
nt.send_cipher_update(
UpdateType::SyncCipherUpdate,
&cipher,
&cipher.update_users_revision(conn).await,
&headers.device,
None,
conn,
)
.await;
}
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await?))
}
async fn archive_multiple_ciphers(
data: Json<CipherIdsData>,
headers: &Headers,
conn: &DbConn,
nt: &Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let mut ciphers: Vec<Value> = Vec::new();
for cipher_id in data.ids {
match archive_cipher(&cipher_id, headers, true, conn, nt).await {
Ok(json) => ciphers.push(json.into_inner()),
err => return err,
}
}
// Multi archive does not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), conn).await;
Ok(Json(json!({
"data": ciphers,
"object": "list",
"continuationToken": null
})))
}
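`archive_multiple_ciphers` short-circuits on the first failing cipher (`err => return err` type-checks because both arms share the `JsonResult` type). The same batch shape with a plain `Result`, as a hedged std-only sketch:

```rust
fn halve(n: i32) -> Result<i32, String> {
    if n % 2 == 0 {
        Ok(n / 2)
    } else {
        Err(format!("{n} is odd"))
    }
}

fn halve_all(ids: &[i32]) -> Result<Vec<i32>, String> {
    let mut out = Vec::new();
    for &id in ids {
        match halve(id) {
            Ok(v) => out.push(v),
            // First failure aborts the whole batch, like `err => return err`.
            Err(e) => return Err(e),
        }
    }
    Ok(out)
}

fn main() {
    assert_eq!(halve_all(&[2, 4, 8]), Ok(vec![1, 2, 4]));
    assert!(halve_all(&[2, 3, 8]).is_err());
}
```

Note the trade-off this implies for the endpoint: ciphers archived before the failing one stay archived, which is why the code still sends a general `SyncCiphers` update afterwards on success.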
async fn unarchive_multiple_ciphers(
data: Json<CipherIdsData>,
headers: &Headers,
conn: &DbConn,
nt: &Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let mut ciphers: Vec<Value> = Vec::new();
for cipher_id in data.ids {
match unarchive_cipher(&cipher_id, headers, true, conn, nt).await {
Ok(json) => ciphers.push(json.into_inner()),
err => return err,
}
}
// Multi unarchive does not send out a push for each cipher, we need to send a general sync here
nt.send_user_update(UpdateType::SyncCiphers, &headers.user, headers.device.push_uuid.as_ref(), conn).await;
Ok(Json(json!({
"data": ciphers,
"object": "list",
"continuationToken": null
})))
}
/// This will hold all the necessary data to improve a full sync of all the ciphers
/// It can be used during the `Cipher::to_json()` call.
/// It will prevent the so-called N+1 SQL issue by running just a few queries which hold all the data needed.
@@ -1942,6 +2108,7 @@ pub struct CipherSyncData {
pub cipher_folders: HashMap<CipherId, FolderId>,
pub cipher_favorites: HashSet<CipherId>,
pub cipher_collections: HashMap<CipherId, Vec<CollectionId>>,
pub cipher_archives: HashMap<CipherId, NaiveDateTime>,
pub members: HashMap<OrganizationId, Membership>,
pub user_collections: HashMap<CollectionId, CollectionUser>,
pub user_collections_groups: HashMap<CollectionId, CollectionGroup>,
@@ -1958,20 +2125,25 @@ impl CipherSyncData {
pub async fn new(user_id: &UserId, sync_type: CipherSyncType, conn: &DbConn) -> Self {
let cipher_folders: HashMap<CipherId, FolderId>;
let cipher_favorites: HashSet<CipherId>;
let cipher_archives: HashMap<CipherId, NaiveDateTime>;
match sync_type {
// User Sync supports Folders and Favorites
// User Sync supports Folders, Favorites, and Archives
CipherSyncType::User => {
// Generate a HashMap with the Cipher UUID as key and the Folder UUID as value
cipher_folders = FolderCipher::find_by_user(user_id, conn).await.into_iter().collect();
// Generate a HashSet of all the Cipher UUID's which are marked as favorite
cipher_favorites = Favorite::get_all_cipher_uuid_by_user(user_id, conn).await.into_iter().collect();
// Generate a HashMap with the Cipher UUID as key and the archived date time as value
cipher_archives = Archive::find_by_user(user_id, conn).await.into_iter().collect();
}
// Organization Sync does not support Folders and Favorites.
// Organization Sync does not support Folders, Favorites, or Archives.
// If these are set, it will cause issues in the web-vault.
CipherSyncType::Organization => {
cipher_folders = HashMap::with_capacity(0);
cipher_favorites = HashSet::with_capacity(0);
cipher_archives = HashMap::with_capacity(0);
}
}
@@ -2034,6 +2206,7 @@ impl CipherSyncData {
};
Self {
cipher_archives,
cipher_attachments,
cipher_folders,
cipher_favorites,
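The point of `CipherSyncData` is to turn per-cipher queries into a few bulk queries whose rows are collected into lookup maps. A std-only sketch of that collect step (row and field names are illustrative, not the real schema):

```rust
use std::collections::{HashMap, HashSet};

/// Illustrative prefetch: collect bulk query rows into lookup structures.
fn prefetch<'a>(
    folder_rows: Vec<(&'a str, &'a str)>,
    favorite_rows: Vec<&'a str>,
) -> (HashMap<&'a str, &'a str>, HashSet<&'a str>) {
    (folder_rows.into_iter().collect(), favorite_rows.into_iter().collect())
}

fn main() {
    let (cipher_folders, cipher_favorites) = prefetch(
        vec![("cipher-1", "folder-a"), ("cipher-2", "folder-b")],
        vec!["cipher-2"],
    );
    // Per-cipher lookups are now O(1) map hits instead of one query each
    // (the N+1 issue the doc comment mentions).
    assert_eq!(cipher_folders.get("cipher-1"), Some(&"folder-a"));
    assert!(cipher_favorites.contains("cipher-2"));
}
```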
+3 -3
@@ -124,7 +124,7 @@ async fn post_eq_domains(data: Json<EquivDomainData>, headers: Headers, conn: Db
user.save(&conn).await?;
nt.send_user_update(UpdateType::SyncSettings, &user, &headers.device.push_uuid, &conn).await;
nt.send_user_update(UpdateType::SyncSettings, &user, headers.device.push_uuid.as_ref(), &conn).await;
Ok(Json(json!({})))
}
@@ -204,11 +204,11 @@ fn config() -> Json<Value> {
// Client (v2026.2.1): https://github.com/bitwarden/clients/blob/f96380c3138291a028bdd2c7a5fee540d5c98ba5/libs/common/src/enums/feature-flag.enum.ts#L12
// Android (v2026.2.1): https://github.com/bitwarden/android/blob/6902c19c0093fa476bbf74ccaa70c9f14afbb82f/core/src/main/kotlin/com/bitwarden/core/data/manager/model/FlagKey.kt#L31
// iOS (v2026.2.1): https://github.com/bitwarden/ios/blob/cdd9ba1770ca2ffc098d02d12cc3208e3a830454/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
let feature_states = parse_experimental_client_feature_flags(
let mut feature_states = parse_experimental_client_feature_flags(
&CONFIG.experimental_client_feature_flags(),
FeatureFlagFilter::ValidOnly,
);
// Add default feature_states here if needed, currently no features are needed by default.
feature_states.insert("pm-19148-innovation-archive".to_string(), true);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
+15 -33
@@ -907,36 +907,21 @@ async fn _get_org_details(
Ok(json!(ciphers_json))
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct OrgDomainDetails {
email: String,
}
// Returning a Domain/Organization here allows prefilling it and prevents prompting the user
// So we either return an Org name associated to the user or a dummy value.
// So we return a dummy value, since we only support a single SSO integration, and do not use the response anywhere
// In use since `v2025.6.0`, appears to use only the first `organizationIdentifier`
#[post("/organizations/domain/sso/verified", data = "<data>")]
async fn get_org_domain_sso_verified(data: Json<OrgDomainDetails>, conn: DbConn) -> JsonResult {
let data: OrgDomainDetails = data.into_inner();
let identifiers = match Organization::find_org_user_email(&data.email, &conn)
.await
.into_iter()
.map(|o| (o.name, o.uuid.to_string()))
.collect::<Vec<(String, String)>>()
{
v if !v.is_empty() => v,
_ => vec![(FAKE_SSO_IDENTIFIER.to_string(), FAKE_SSO_IDENTIFIER.to_string())],
};
#[post("/organizations/domain/sso/verified")]
fn get_org_domain_sso_verified() -> JsonResult {
// Always return a dummy value, no matter if SSO is enabled or not
Ok(Json(json!({
"object": "list",
"data": identifiers.into_iter().map(|(name, identifier)| json!({
"organizationName": name, // appears unused
"organizationIdentifier": identifier,
"domainName": CONFIG.domain(), // appears unused
})).collect::<Vec<Value>>()
"data": [{
"organizationIdentifier": FAKE_SSO_IDENTIFIER,
// These appear to be unused
"organizationName": FAKE_SSO_IDENTIFIER,
"domainName": CONFIG.domain()
}],
"continuationToken": null
})))
}
@@ -1463,7 +1448,7 @@ async fn _confirm_invite(
let save_result = member_to_confirm.save(conn).await;
if let Some(user) = User::find_by_uuid(&member_to_confirm.user_uuid, conn).await {
nt.send_user_update(UpdateType::SyncOrgKeys, &user, &headers.device.push_uuid, conn).await;
nt.send_user_update(UpdateType::SyncOrgKeys, &user, headers.device.push_uuid.as_ref(), conn).await;
}
save_result
@@ -1721,7 +1706,7 @@ async fn _delete_member(
.await;
if let Some(user) = User::find_by_uuid(&member_to_delete.user_uuid, conn).await {
nt.send_user_update(UpdateType::SyncOrgKeys, &user, &headers.device.push_uuid, conn).await;
nt.send_user_update(UpdateType::SyncOrgKeys, &user, headers.device.push_uuid.as_ref(), conn).await;
}
member_to_delete.delete(conn).await
@@ -1979,7 +1964,7 @@ async fn list_policies_token(org_id: OrganizationId, token: &str, conn: DbConn)
}
// Called during the SSO enrollment return the default policy
#[get("/organizations/vaultwarden-dummy-oidc-identifier/policies/master-password", rank = 1)]
#[get("/organizations/00000000-01DC-01DC-01DC-000000000000/policies/master-password", rank = 1)]
fn get_dummy_master_password_policy() -> JsonResult {
let (enabled, data) = match CONFIG.sso_master_password_policy_value() {
Some(policy) if CONFIG.sso_enabled() => (true, policy.to_string()),
@@ -3049,10 +3034,7 @@ async fn put_reset_password_enrollment(
err!("User to enroll isn't member of required organization", "The user_id and acting user do not match");
}
let Some(mut membership) = Membership::find_confirmed_by_user_and_org(&headers.user.uuid, &org_id, &conn).await
else {
err!("User to enroll isn't member of required organization")
};
let mut membership = headers.membership;
check_reset_password_applicable(&org_id, &conn).await?;
+1 -1
@@ -574,7 +574,7 @@ async fn download_url(host: &Host, send_id: &SendId, file_id: &SendFileId) -> Re
Ok(format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host))
} else {
Ok(operator.presign_read(&format!("{send_id}/{file_id}"), Duration::from_secs(5 * 60)).await?.uri().to_string())
Ok(operator.presign_read(&format!("{send_id}/{file_id}"), Duration::from_mins(5)).await?.uri().to_string())
}
}
+1 -1
@@ -38,7 +38,7 @@ static WEBAUTHN: LazyLock<Webauthn> = LazyLock::new(|| {
let webauthn = WebauthnBuilder::new(&rp_id, &rp_origin)
.expect("Creating WebauthnBuilder failed")
.rp_name(&domain)
.timeout(Duration::from_millis(60000));
.timeout(Duration::from_mins(1));
webauthn.build().expect("Building Webauthn failed")
});
+53 -73
@@ -19,7 +19,7 @@ use svg_hush::{data_url_filter, Filter};
use crate::{
config::PathType,
error::Error,
http_client::{get_reqwest_client_builder, should_block_address, CustomHttpClientError},
http_client::{get_reqwest_client_builder, get_valid_host, should_block_host, CustomHttpClientError},
util::Cached,
CONFIG,
};
@@ -81,19 +81,19 @@ static ICON_SIZE_REGEX: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"(?x)(\d+
// The function name `icon_external` is checked in the `on_response` function in `AppHeaders`
// It is used to prevent sending a specific header which breaks icon downloads.
// If this function needs to be renamed, also adjust the code in `util.rs`
#[get("/<domain>/icon.png")]
fn icon_external(domain: &str) -> Cached<Option<Redirect>> {
if !is_valid_domain(domain) {
warn!("Invalid domain: {domain}");
#[get("/<host>/icon.png")]
fn icon_external(host: &str) -> Cached<Option<Redirect>> {
let Ok(host) = get_valid_host(host) else {
warn!("Invalid host: {host}");
return Cached::ttl(None, CONFIG.icon_cache_negttl(), true);
};
if should_block_host(&host).is_err() {
warn!("Blocked address: {host}");
return Cached::ttl(None, CONFIG.icon_cache_negttl(), true);
}
if should_block_address(domain) {
warn!("Blocked address: {domain}");
return Cached::ttl(None, CONFIG.icon_cache_negttl(), true);
}
let url = CONFIG._icon_service_url().replace("{}", domain);
let url = CONFIG._icon_service_url().replace("{}", &host.to_string());
let redir = match CONFIG.icon_redirect_code() {
301 => Some(Redirect::moved(url)), // legacy permanent redirect
302 => Some(Redirect::found(url)), // legacy temporary redirect
@@ -107,12 +107,21 @@ fn icon_external(domain: &str) -> Cached<Option<Redirect>> {
Cached::ttl(redir, CONFIG.icon_cache_ttl(), true)
}
#[get("/<domain>/icon.png")]
async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
#[get("/<host>/icon.png")]
async fn icon_internal(host: &str) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
if !is_valid_domain(domain) {
warn!("Invalid domain: {domain}");
let Ok(host) = get_valid_host(host) else {
warn!("Invalid host: {host}");
return Cached::ttl(
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
);
};
if should_block_host(&host).is_err() {
warn!("Blocked address: {host}");
return Cached::ttl(
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
@@ -120,16 +129,7 @@ async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
);
}
if should_block_address(domain) {
warn!("Blocked address: {domain}");
return Cached::ttl(
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
);
}
match get_icon(domain).await {
match get_icon(&host.to_string()).await {
Some((icon, icon_type)) => {
Cached::ttl((ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
}
@@ -137,42 +137,6 @@ async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
}
}
/// Returns if the domain provided is valid or not.
///
/// This does some manual checks and makes use of Url to do some basic checking.
/// domains can't be longer than 63 characters (not counting multiple subdomains) according to the RFCs, but we limit the total size to 255.
fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "-.";
// If parsing the domain fails using Url, it will not work with reqwest.
if let Err(parse_error) = url::Url::parse(format!("https://{domain}").as_str()) {
debug!("Domain parse error: '{domain}' - {parse_error:?}");
return false;
} else if domain.is_empty()
|| domain.contains("..")
|| domain.starts_with('.')
|| domain.starts_with('-')
|| domain.ends_with('-')
{
debug!(
"Domain validation error: '{domain}' is either empty, contains '..', starts with an '.', starts or ends with a '-'"
);
return false;
} else if domain.len() > 255 {
debug!("Domain validation error: '{domain}' exceeds 255 characters");
return false;
}
for c in domain.chars() {
if !c.is_alphanumeric() && !ALLOWED_CHARS.contains(c) {
debug!("Domain validation error: '{domain}' contains an invalid character '{c}'");
return false;
}
}
true
}
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{domain}.png");
@@ -367,7 +331,7 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
if get_valid_host(&base_domain).is_ok() {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
@@ -378,7 +342,7 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// When the domain is not an IP, and has fewer than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
if get_valid_host(&www_domain).is_ok() {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
@@ -532,7 +496,8 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
use data_url::DataUrl;
for icon in icon_result.iconlist.iter().take(5) {
let mut icons = icon_result.iconlist.iter().take(5).peekable();
while let Some(icon) = icons.next() {
if icon.href.starts_with("data:image") {
let Ok(datauri) = DataUrl::process(&icon.href) else {
continue;
@@ -560,11 +525,23 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
let res = get_page_with_referer(&icon.href, &icon_result.referer).await?;
debug!("Trying {}", icon.href);
// Make sure all icons are checked before returning an error
let res = match get_page_with_referer(&icon.href, &icon_result.referer).await {
Ok(r) => r,
Err(e) if icons.peek().is_none() => return Err(e),
Err(e) if CustomHttpClientError::downcast_ref(&e).is_some() => return Err(e), // If blacklisted stop immediately instead of checking the rest of the icons. see explanation and actual handling inside get_icon()
Err(e) => {
warn!("Unable to download icon: {e:?}");
// Continue to next icon
continue;
}
};
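The `icons.peek().is_none()` guard means a download error is only fatal for the last remaining candidate; earlier failures just log and fall through to the next icon. A std-only sketch of that control flow (`fetch` is a stand-in for `get_page_with_referer`):

```rust
fn fetch(url: &str) -> Result<String, String> {
    if url.starts_with("https://") {
        Ok(format!("body of {url}"))
    } else {
        Err(format!("unsupported scheme: {url}"))
    }
}

fn first_fetchable(urls: &[&str]) -> Result<String, String> {
    let mut urls = urls.iter().peekable();
    while let Some(url) = urls.next() {
        match fetch(url) {
            Ok(body) => return Ok(body),
            // Last candidate failed: propagate the error.
            Err(e) if urls.peek().is_none() => return Err(e),
            // Otherwise log and try the next candidate.
            Err(e) => eprintln!("skipping {url}: {e}"),
        }
    }
    Err("no candidates given".to_string())
}

fn main() {
    assert_eq!(
        first_fetchable(&["http://a", "https://b"]),
        Ok("body of https://b".to_string())
    );
    assert!(first_fetchable(&["http://a", "http://b"]).is_err());
}
```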
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
// Check if the icon type is allowed, else try an icon from the list.
// Check if the icon type is allowed, else try another icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
@@ -618,14 +595,17 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
None
}
// Some details can be found here:
// - https://www.garykessler.net/library/file_sigs_GCK_latest.html
// - https://en.wikipedia.org/wiki/List_of_file_signatures
match bytes {
[137, 80, 78, 71, ..] => Some("png"),
[0, 0, 1, 0, ..] => Some("x-icon"),
[82, 73, 70, 70, ..] => Some("webp"),
[255, 216, 255, ..] => Some("jpeg"),
[71, 73, 70, 56, ..] => Some("gif"),
[66, 77, ..] => Some("bmp"),
[60, 115, 118, 103, ..] => Some("svg+xml"), // Normal svg
[137, 80, 78, 71, 13, 10, 26, 10, ..] => Some("png"),
[0, 0, 1, 0, n1, n2, ..] if u16::from_le_bytes([*n1, *n2]) > 0 => Some("x-icon"), // https://en.wikipedia.org/wiki/ICO_(file_format)
[82, 73, 70, 70, _, _, _, _, 87, 69, 66, 80, ..] => Some("webp"), // Only match WebP Images
[255, 216, 255, b, ..] if *b >= 0xC0 => Some("jpeg"),
[71, 73, 70, 56, 55 | 57, 97, ..] => Some("gif"),
[66, 77, _, _, _, _, 0, 0, 0, 0, ..] => Some("bmp"), // https://en.wikipedia.org/wiki/BMP_file_format
[60, 115, 118, 103, ..] => Some("svg+xml"), // Normal svg
[60, 63, 120, 109, 108, ..] => check_svg_after_xml_declaration(bytes), // An svg starting with <?xml
_ => None,
}
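The tightened signatures rely on slice patterns with sub-patterns and guards, e.g. the full 8-byte PNG header and a WebP check that requires both `RIFF` and `WEBP`. A reduced, std-only version of the same matching style (only three of the formats, for illustration):

```rust
fn sniff(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        // Full 8-byte PNG signature instead of just the first 4 bytes.
        [137, 80, 78, 71, 13, 10, 26, 10, ..] => Some("png"),
        // A RIFF container is only WebP if bytes 8..12 spell "WEBP".
        [82, 73, 70, 70, _, _, _, _, 87, 69, 66, 80, ..] => Some("webp"),
        // GIF87a or GIF89a, via an or-pattern on the version byte.
        [71, 73, 70, 56, 55 | 57, 97, ..] => Some("gif"),
        _ => None,
    }
}

fn main() {
    assert_eq!(sniff(b"\x89PNG\r\n\x1a\nrest"), Some("png"));
    assert_eq!(sniff(b"RIFF\x10\x00\x00\x00WEBP"), Some("webp"));
    assert_eq!(sniff(b"GIF89a"), Some("gif"));
    // A RIFF that is not WebP (e.g. a .wav) no longer matches.
    assert_eq!(sniff(b"RIFF\x10\x00\x00\x00WAVE"), None);
}
```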
+99 -32
@@ -2,6 +2,7 @@ use chrono::Utc;
use num_traits::FromPrimitive;
use rocket::{
form::{Form, FromForm},
http::{Cookie, CookieJar, SameSite},
response::Redirect,
serde::json::Json,
Route,
@@ -23,7 +24,8 @@ use crate::{
ApiResult, EmptyResult, JsonResult,
},
auth,
auth::{generate_organization_api_key_login_claims, AuthMethod, ClientHeaders, ClientIp, ClientVersion},
auth::{generate_organization_api_key_login_claims, AuthMethod, ClientHeaders, ClientIp, ClientVersion, Secure},
crypto,
db::{
models::{
AuthRequest, AuthRequestId, Device, DeviceId, EventType, Invitation, OIDCCodeWrapper, OrganizationApiKey,
@@ -41,6 +43,7 @@ pub fn routes() -> Vec<Route> {
routes![
login,
prelogin,
prelogin_password,
identity_register,
register_verification_email,
register_finish,
@@ -64,43 +67,43 @@ async fn login(
let login_result = match data.grant_type.as_ref() {
"refresh_token" => {
_check_is_some(&data.refresh_token, "refresh_token cannot be blank")?;
_check_is_some(data.refresh_token.as_ref(), "refresh_token cannot be blank")?;
_refresh_login(data, &conn, &client_header.ip).await
}
"password" if CONFIG.sso_enabled() && CONFIG.sso_only() => err!("SSO sign-in is required"),
"password" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.password, "password cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_check_is_some(&data.username, "username cannot be blank")?;
_check_is_some(data.client_id.as_ref(), "client_id cannot be blank")?;
_check_is_some(data.password.as_ref(), "password cannot be blank")?;
_check_is_some(data.scope.as_ref(), "scope cannot be blank")?;
_check_is_some(data.username.as_ref(), "username cannot be blank")?;
_check_is_some(&data.device_identifier, "device_identifier cannot be blank")?;
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_check_is_some(data.device_identifier.as_ref(), "device_identifier cannot be blank")?;
_check_is_some(data.device_name.as_ref(), "device_name cannot be blank")?;
_check_is_some(data.device_type.as_ref(), "device_type cannot be blank")?;
_password_login(data, &mut user_id, &conn, &client_header.ip, &client_version).await
_password_login(data, &mut user_id, &conn, &client_header.ip, client_version.as_ref()).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.client_secret, "client_secret cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_check_is_some(data.client_id.as_ref(), "client_id cannot be blank")?;
_check_is_some(data.client_secret.as_ref(), "client_secret cannot be blank")?;
_check_is_some(data.scope.as_ref(), "scope cannot be blank")?;
_check_is_some(&data.device_identifier, "device_identifier cannot be blank")?;
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_check_is_some(data.device_identifier.as_ref(), "device_identifier cannot be blank")?;
_check_is_some(data.device_name.as_ref(), "device_name cannot be blank")?;
_check_is_some(data.device_type.as_ref(), "device_type cannot be blank")?;
_api_key_login(data, &mut user_id, &conn, &client_header.ip).await
}
"authorization_code" if CONFIG.sso_enabled() => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.code, "code cannot be blank")?;
_check_is_some(&data.code_verifier, "code verifier cannot be blank")?;
_check_is_some(data.client_id.as_ref(), "client_id cannot be blank")?;
_check_is_some(data.code.as_ref(), "code cannot be blank")?;
_check_is_some(data.code_verifier.as_ref(), "code verifier cannot be blank")?;
_check_is_some(&data.device_identifier, "device_identifier cannot be blank")?;
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_check_is_some(data.device_identifier.as_ref(), "device_identifier cannot be blank")?;
_check_is_some(data.device_name.as_ref(), "device_name cannot be blank")?;
_check_is_some(data.device_type.as_ref(), "device_type cannot be blank")?;
_sso_login(data, &mut user_id, &conn, &client_header.ip, &client_version).await
_sso_login(data, &mut user_id, &conn, &client_header.ip, client_version.as_ref()).await
}
"authorization_code" => err!("SSO sign-in is not available"),
t => err!("Invalid type", t),
@@ -176,7 +179,7 @@ async fn _sso_login(
user_id: &mut Option<UserId>,
conn: &DbConn,
ip: &ClientIp,
client_version: &Option<ClientVersion>,
client_version: Option<&ClientVersion>,
) -> JsonResult {
AuthMethod::Sso.check_scope(data.scope.as_ref())?;
@@ -227,7 +230,33 @@ async fn _sso_login(
}
)
}
Some((user, None)) => Some((user, None)),
Some((user, None)) => match user_infos.email_verified {
None if !CONFIG.sso_allow_unknown_email_verification() => {
error!(
"Login failure ({}), existing non SSO user ({}) with same email ({}) and email verification status is unknown",
user_infos.identifier, user.uuid, user.email
);
err_silent!(
"Email verification status is unknown",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
Some(false) => {
error!(
"Login failure ({}), existing non SSO user ({}) with same email ({}) and email is not verified",
user_infos.identifier, user.uuid, user.email
);
err_silent!(
"Email is not verified by the SSO provider",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
_ => Some((user, None)),
},
},
Some((user, sso_user)) => Some((user, Some(sso_user))),
};
@@ -319,7 +348,7 @@ async fn _password_login(
user_id: &mut Option<UserId>,
conn: &DbConn,
ip: &ClientIp,
client_version: &Option<ClientVersion>,
client_version: Option<&ClientVersion>,
) -> JsonResult {
// Validate scope
AuthMethod::Password.check_scope(data.scope.as_ref())?;
@@ -733,7 +762,7 @@ async fn twofactor_auth(
data: &ConnectData,
device: &mut Device,
ip: &ClientIp,
client_version: &Option<ClientVersion>,
client_version: Option<&ClientVersion>,
conn: &DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(&user.uuid, conn).await;
@@ -878,7 +907,7 @@ async fn _json_err_twofactor(
providers: &[i32],
user_id: &UserId,
data: &ConnectData,
client_version: &Option<ClientVersion>,
client_version: Option<&ClientVersion>,
conn: &DbConn,
) -> ApiResult<Value> {
let mut result = json!({
@@ -982,6 +1011,11 @@ async fn prelogin(data: Json<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
#[post("/accounts/prelogin/password", data = "<data>")]
async fn prelogin_password(data: Json<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
#[post("/accounts/register", data = "<data>")]
async fn identity_register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, false, conn).await
@@ -1108,7 +1142,7 @@ struct ConnectData {
#[field(name = uncased("code_verifier"))]
code_verifier: Option<OIDCCodeVerifier>,
}
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {
fn _check_is_some<T>(value: Option<&T>, msg: &str) -> EmptyResult {
if value.is_none() {
err!(msg)
}
@@ -1127,13 +1161,16 @@ fn prevalidate() -> JsonResult {
}
}
const SSO_BINDING_COOKIE: &str = "VW_SSO_BINDING";
#[get("/connect/oidc-signin?<code>&<state>", rank = 1)]
async fn oidcsignin(code: OIDCCode, state: String, mut conn: DbConn) -> ApiResult<Redirect> {
async fn oidcsignin(code: OIDCCode, state: String, cookies: &CookieJar<'_>, mut conn: DbConn) -> ApiResult<Redirect> {
_oidcsignin_redirect(
state,
OIDCCodeWrapper::Ok {
code,
},
cookies,
&mut conn,
)
.await
@@ -1146,6 +1183,7 @@ async fn oidcsignin_error(
state: String,
error: String,
error_description: Option<String>,
cookies: &CookieJar<'_>,
mut conn: DbConn,
) -> ApiResult<Redirect> {
_oidcsignin_redirect(
@@ -1154,6 +1192,7 @@ async fn oidcsignin_error(
error,
error_description,
},
cookies,
&mut conn,
)
.await
@@ -1165,6 +1204,7 @@ async fn oidcsignin_error(
async fn _oidcsignin_redirect(
base64_state: String,
code_response: OIDCCodeWrapper,
cookies: &CookieJar<'_>,
conn: &mut DbConn,
) -> ApiResult<Redirect> {
let state = sso::decode_state(&base64_state)?;
@@ -1173,6 +1213,17 @@ async fn _oidcsignin_redirect(
None => err!(format!("Cannot retrieve sso_auth for {state}")),
Some(sso_auth) => sso_auth,
};
// Browser-binding check
// The cookie was set on /connect/authorize and must come from the same browser that initiated the flow.
let cookie_value = cookies.get(SSO_BINDING_COOKIE).map(|c| c.value().to_string());
let provided_hash = cookie_value.as_deref().map(|v| crypto::sha256_hex(v.as_bytes()));
match (sso_auth.binding_hash.as_deref(), provided_hash.as_deref()) {
(Some(expected), Some(actual)) if crypto::ct_eq(expected, actual) => {}
_ => err!(format!("SSO session binding mismatch for {state}")),
}
cookies.remove(Cookie::build(SSO_BINDING_COOKIE).path("/identity/connect/").build());
sso_auth.code_response = Some(code_response);
sso_auth.updated_at = Utc::now().naive_utc();
sso_auth.save(conn).await?;
@@ -1219,7 +1270,7 @@ struct AuthorizeData {
// The `redirect_uri` will change depending on the client (web, android, ios ..)
#[get("/connect/authorize?<data..>")]
async fn authorize(data: AuthorizeData, conn: DbConn) -> ApiResult<Redirect> {
async fn authorize(data: AuthorizeData, cookies: &CookieJar<'_>, secure: Secure, conn: DbConn) -> ApiResult<Redirect> {
let AuthorizeData {
client_id,
redirect_uri,
@@ -1233,7 +1284,23 @@ async fn authorize(data: AuthorizeData, conn: DbConn) -> ApiResult<Redirect> {
err!("Unsupported code challenge method");
}
let auth_url = sso::authorize_url(state, code_challenge, &client_id, &redirect_uri, conn).await?;
// Generate browser-binding token. Stored hashed in DB; raw value handed to the browser as a cookie.
// Validated on /connect/oidc-signin
let binding_token = data_encoding::BASE64URL_NOPAD.encode(&crypto::get_random_bytes::<32>());
let binding_hash = crypto::sha256_hex(binding_token.as_bytes());
let auth_url =
sso::authorize_url(state, code_challenge, &client_id, &redirect_uri, Some(binding_hash), conn).await?;
cookies.add(
Cookie::build((SSO_BINDING_COOKIE, binding_token))
.path("/identity/connect/")
.max_age(time::Duration::seconds(sso::SSO_AUTH_EXPIRATION.num_seconds()))
.same_site(SameSite::Lax) // Lax is needed because the IdP runs on a different FQDN
.http_only(true)
.secure(secure.https)
.build(),
);
Ok(Redirect::temporary(String::from(auth_url)))
}
+1 -1
@@ -338,7 +338,7 @@ impl WebSocketUsers {
}
// NOTE: The last modified date needs to be updated before calling these methods
pub async fn send_user_update(&self, ut: UpdateType, user: &User, push_uuid: &Option<PushId>, conn: &DbConn) {
pub async fn send_user_update(&self, ut: UpdateType, user: &User, push_uuid: Option<&PushId>, conn: &DbConn) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
+2 -2
@@ -135,7 +135,7 @@ pub async fn register_push_device(device: &mut Device, conn: &DbConn) -> EmptyRe
Ok(())
}
pub async fn unregister_push_device(push_id: &Option<PushId>) -> EmptyResult {
pub async fn unregister_push_device(push_id: Option<&PushId>) -> EmptyResult {
if !CONFIG.push_enabled() || push_id.is_none() {
return Ok(());
}
@@ -206,7 +206,7 @@ pub async fn push_logout(user: &User, acting_device: Option<&Device>, conn: &DbC
}
}
pub async fn push_user_update(ut: UpdateType, user: &User, push_uuid: &Option<PushId>, conn: &DbConn) {
pub async fn push_user_update(ut: UpdateType, user: &User, push_uuid: Option<&PushId>, conn: &DbConn) {
if Device::check_user_has_push_device(&user.uuid, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user.uuid,
+3 -3
@@ -1076,7 +1076,7 @@ fn validate_config(cfg: &ConfigItems, on_update: bool) -> Result<(), Error> {
validate_internal_sso_issuer_url(&cfg.sso_authority)?;
validate_internal_sso_redirect_url(&cfg.sso_callback_path)?;
validate_sso_master_password_policy(&cfg.sso_master_password_policy)?;
validate_sso_master_password_policy(cfg.sso_master_password_policy.as_ref())?;
}
if cfg._enable_yubico {
@@ -1271,7 +1271,7 @@ fn validate_internal_sso_redirect_url(sso_callback_path: &String) -> Result<open
}
fn validate_sso_master_password_policy(
sso_master_password_policy: &Option<String>,
sso_master_password_policy: Option<&String>,
) -> Result<Option<serde_json::Value>, Error> {
let policy = sso_master_password_policy.as_ref().map(|mpp| serde_json::from_str::<serde_json::Value>(mpp));
@@ -1725,7 +1725,7 @@ impl Config {
}
pub fn sso_master_password_policy_value(&self) -> Option<serde_json::Value> {
validate_sso_master_password_policy(&self.sso_master_password_policy()).ok().flatten()
validate_sso_master_password_policy(self.sso_master_password_policy().as_ref()).ok().flatten()
}
pub fn sso_scopes_vec(&self) -> Vec<String> {
+7
@@ -113,3 +113,10 @@ pub fn ct_eq<T: AsRef<[u8]>, U: AsRef<[u8]>>(a: T, b: U) -> bool {
use subtle::ConstantTimeEq;
a.as_ref().ct_eq(b.as_ref()).into()
}
//
// SHA256
//
pub fn sha256_hex(data: &[u8]) -> String {
HEXLOWER.encode(digest::digest(&digest::SHA256, data).as_ref())
}
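The new `sha256_hex` pairs with the `ct_eq` helper above for the binding-hash comparison elsewhere in this change. As a hedged, std-only illustration of what constant-time equality does (this is *not* the project's `subtle`-based `ct_eq`, just a sketch of the primitive):

```rust
// Hedged, std-only sketch of constant-time equality: XOR every byte pair and
// OR the differences together, so the time taken does not depend on where
// (or whether) the inputs first differ.
fn ct_eq_sketch(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Leaking the length is acceptable for fixed-size hex digests.
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq_sketch(b"abc", b"abc"));
    assert!(!ct_eq_sketch(b"abc", b"abd"));
    assert!(!ct_eq_sketch(b"abc", b"ab"));
}
```

A short-circuiting `==` on the raw cookie value would return early at the first mismatching byte, which is exactly the timing side channel the binding check avoids by comparing hashes in constant time.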
+91
@@ -0,0 +1,91 @@
use chrono::NaiveDateTime;
use diesel::prelude::*;
use super::{CipherId, User, UserId};
use crate::api::EmptyResult;
use crate::db::schema::archives;
use crate::db::DbConn;
use crate::error::MapResult;
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = archives)]
#[diesel(primary_key(user_uuid, cipher_uuid))]
pub struct Archive {
pub user_uuid: UserId,
pub cipher_uuid: CipherId,
pub archived_at: NaiveDateTime,
}
impl Archive {
// Returns the date the specified cipher was archived
pub async fn get_archived_at(cipher_uuid: &CipherId, user_uuid: &UserId, conn: &DbConn) -> Option<NaiveDateTime> {
db_run! { conn: {
archives::table
.filter(archives::cipher_uuid.eq(cipher_uuid))
.filter(archives::user_uuid.eq(user_uuid))
.select(archives::archived_at)
.first::<NaiveDateTime>(conn).ok()
}}
}
// Saves (inserts or updates) an archive record with the provided timestamp
pub async fn save(
user_uuid: &UserId,
cipher_uuid: &CipherId,
archived_at: NaiveDateTime,
conn: &DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
db_run! { conn:
sqlite, mysql {
diesel::replace_into(archives::table)
.values((
archives::user_uuid.eq(user_uuid),
archives::cipher_uuid.eq(cipher_uuid),
archives::archived_at.eq(archived_at),
))
.execute(conn)
.map_res("Error saving archive")
}
postgresql {
diesel::insert_into(archives::table)
.values((
archives::user_uuid.eq(user_uuid),
archives::cipher_uuid.eq(cipher_uuid),
archives::archived_at.eq(archived_at),
))
.on_conflict((archives::user_uuid, archives::cipher_uuid))
.do_update()
.set(archives::archived_at.eq(archived_at))
.execute(conn)
.map_res("Error saving archive")
}
}
}
// Deletes an archive record for a specific cipher
pub async fn delete_by_cipher(user_uuid: &UserId, cipher_uuid: &CipherId, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
db_run! { conn: {
diesel::delete(
archives::table
.filter(archives::user_uuid.eq(user_uuid))
.filter(archives::cipher_uuid.eq(cipher_uuid))
)
.execute(conn)
.map_res("Error deleting archive")
}}
}
/// Return a vec with (cipher_uuid, archived_at)
/// This is used during a full sync so we only need one query for all archive matches
pub async fn find_by_user(user_uuid: &UserId, conn: &DbConn) -> Vec<(CipherId, NaiveDateTime)> {
db_run! { conn: {
archives::table
.filter(archives::user_uuid.eq(user_uuid))
.select((archives::cipher_uuid, archives::archived_at))
.load::<(CipherId, NaiveDateTime)>(conn)
.unwrap_or_default()
}}
}
}
+1 -1
@@ -50,7 +50,7 @@ impl Attachment {
let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
Ok(format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id))
} else {
Ok(operator.presign_read(&self.get_file_path(), Duration::from_secs(5 * 60)).await?.uri().to_string())
Ok(operator.presign_read(&self.get_file_path(), Duration::from_mins(5)).await?.uri().to_string())
}
}
+20 -3
@@ -10,8 +10,8 @@ use diesel::prelude::*;
use serde_json::Value;
use super::{
Attachment, CollectionCipher, CollectionId, Favorite, FolderCipher, FolderId, Group, Membership, MembershipStatus,
MembershipType, OrganizationId, User, UserId,
Archive, Attachment, CollectionCipher, CollectionId, Favorite, FolderCipher, FolderId, Group, Membership,
MembershipStatus, MembershipType, OrganizationId, User, UserId,
};
use crate::api::core::{CipherData, CipherSyncData, CipherSyncType};
use macros::UuidFromParam;
@@ -380,6 +380,11 @@ impl Cipher {
} else {
self.is_favorite(user_uuid, conn).await
});
json_object["archivedDate"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
cipher_sync_data.cipher_archives.get(&self.uuid).map_or(Value::Null, |d| Value::String(format_date(d)))
} else {
self.get_archived_at(user_uuid, conn).await.map_or(Value::Null, |d| Value::String(format_date(&d)))
});
// These values are true by default, but can be false if the
// cipher belongs to a collection or group where the org owner has enabled
// the "Read Only" or "Hide Passwords" restrictions for the user.
@@ -398,7 +403,7 @@ impl Cipher {
3 => "card",
4 => "identity",
5 => "sshKey",
_ => panic!("Wrong type"),
_ => err!(format!("Cipher {} has an invalid type {}", self.uuid, self.atype)),
};
json_object[key] = type_data_json;
@@ -742,6 +747,18 @@ impl Cipher {
}
}
pub async fn get_archived_at(&self, user_uuid: &UserId, conn: &DbConn) -> Option<NaiveDateTime> {
Archive::get_archived_at(&self.uuid, user_uuid, conn).await
}
pub async fn set_archived_at(&self, archived_at: NaiveDateTime, user_uuid: &UserId, conn: &DbConn) -> EmptyResult {
Archive::save(user_uuid, &self.uuid, archived_at, conn).await
}
pub async fn unarchive(&self, user_uuid: &UserId, conn: &DbConn) -> EmptyResult {
Archive::delete_by_cipher(user_uuid, &self.uuid, conn).await
}
pub async fn get_folder_uuid(&self, user_uuid: &UserId, conn: &DbConn) -> Option<FolderId> {
db_run! { conn: {
folders_ciphers::table
+4 -1
@@ -25,7 +25,7 @@ pub struct Device {
pub user_uuid: UserId,
pub name: String,
pub atype: i32, // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/Enums/DeviceType.cs
pub atype: i32, // https://github.com/bitwarden/server/blob/8d547dcc280babab70dd4a3c94ced6a34b12dfbf/src/Core/Enums/DeviceType.cs
pub push_uuid: Option<PushId>,
pub push_token: Option<String>,
@@ -332,6 +332,8 @@ pub enum DeviceType {
MacOsCLI = 24,
#[display("Linux CLI")]
LinuxCLI = 25,
#[display("DuckDuckGo")]
DuckDuckGoBrowser = 26,
}
impl DeviceType {
@@ -363,6 +365,7 @@ impl DeviceType {
23 => DeviceType::WindowsCLI,
24 => DeviceType::MacOsCLI,
25 => DeviceType::LinuxCLI,
26 => DeviceType::DuckDuckGoBrowser,
_ => DeviceType::UnknownBrowser,
}
}
+2 -3
@@ -85,7 +85,8 @@ impl EmergencyAccess {
pub async fn to_json_grantee_details(&self, conn: &DbConn) -> Option<Value> {
let grantee_user = if let Some(grantee_uuid) = &self.grantee_uuid {
User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found.")
} else if let Some(email) = self.email.as_deref() {
} else {
let email = self.email.as_deref()?;
match User::find_by_mail(email, conn).await {
Some(user) => user,
None => {
@@ -94,8 +95,6 @@ impl EmergencyAccess {
return None;
}
}
} else {
return None;
};
Some(json!({
+2
@@ -1,3 +1,4 @@
mod archive;
mod attachment;
mod auth_request;
mod cipher;
@@ -17,6 +18,7 @@ mod two_factor_duo_context;
mod two_factor_incomplete;
mod user;
pub use self::archive::Archive;
pub use self::attachment::{Attachment, AttachmentId};
pub use self::auth_request::{AuthRequest, AuthRequestId};
pub use self::cipher::{Cipher, CipherId, RepromptType};
+9 -1
@@ -54,11 +54,18 @@ pub struct SsoAuth {
pub auth_response: Option<OIDCAuthenticatedUser>,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub binding_hash: Option<String>,
}
/// Local methods
impl SsoAuth {
pub fn new(state: OIDCState, client_challenge: OIDCCodeChallenge, nonce: String, redirect_uri: String) -> Self {
pub fn new(
state: OIDCState,
client_challenge: OIDCCodeChallenge,
nonce: String,
redirect_uri: String,
binding_hash: Option<String>,
) -> Self {
let now = Utc::now().naive_utc();
SsoAuth {
@@ -70,6 +77,7 @@ impl SsoAuth {
updated_at: now,
code_response: None,
auth_response: None,
binding_hash,
}
}
}
+12
@@ -265,6 +265,7 @@ table! {
auth_response -> Nullable<Text>,
created_at -> Timestamp,
updated_at -> Timestamp,
binding_hash -> Nullable<Text>,
}
}
@@ -341,6 +342,16 @@ table! {
}
}
table! {
archives (user_uuid, cipher_uuid) {
user_uuid -> Text,
cipher_uuid -> Text,
archived_at -> Timestamp,
}
}
joinable!(archives -> users (user_uuid));
joinable!(archives -> ciphers (cipher_uuid));
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -372,6 +383,7 @@ joinable!(auth_requests -> users (user_uuid));
joinable!(sso_users -> users (user_uuid));
allow_tables_to_appear_in_same_query!(
archives,
attachments,
ciphers,
ciphers_collections,
+266 -16
@@ -1,7 +1,6 @@
use std::{
fmt,
net::{IpAddr, SocketAddr},
str::FromStr,
sync::{Arc, LazyLock, Mutex},
time::Duration,
};
@@ -59,16 +58,6 @@ pub fn get_reqwest_client_builder() -> ClientBuilder {
.timeout(Duration::from_secs(10))
}
pub fn should_block_address(domain_or_ip: &str) -> bool {
if let Ok(ip) = IpAddr::from_str(domain_or_ip) {
if should_block_ip(ip) {
return true;
}
}
should_block_address_regex(domain_or_ip)
}
fn should_block_ip(ip: IpAddr) -> bool {
if !CONFIG.http_request_block_non_global_ips() {
return false;
@@ -100,11 +89,54 @@ fn should_block_address_regex(domain_or_ip: &str) -> bool {
is_match
}
fn should_block_host(host: &Host<&str>) -> Result<(), CustomHttpClientError> {
pub fn get_valid_host(host: &str) -> Result<Host, CustomHttpClientError> {
let Ok(host) = Host::parse(host) else {
return Err(CustomHttpClientError::Invalid {
domain: host.to_string(),
});
};
// Some extra checks to validate hosts
match host {
Host::Domain(ref domain) => {
// Host::parse() does not verify length or all possible invalid characters
// We do some extra checks here to prevent issues
if domain.len() > 253 {
debug!("Domain validation error: '{domain}' exceeds 253 characters");
return Err(CustomHttpClientError::Invalid {
domain: host.to_string(),
});
}
if !domain.split('.').all(|label| {
!label.is_empty()
// Labels can't be longer than 63 chars
&& label.len() <= 63
// Labels are not allowed to start or end with a hyphen `-`
&& !label.starts_with('-')
&& !label.ends_with('-')
// Only ASCII Alphanumeric characters are allowed
// We already received a punycoded domain back, so no unicode should exist here
&& label.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
}) {
debug!(
"Domain validation error: '{domain}' labels contain invalid characters or exceed the maximum length"
);
return Err(CustomHttpClientError::Invalid {
domain: host.to_string(),
});
}
}
Host::Ipv4(_) | Host::Ipv6(_) => {}
}
Ok(host)
}
pub fn should_block_host<S: AsRef<str>>(host: &Host<S>) -> Result<(), CustomHttpClientError> {
let (ip, host_str): (Option<IpAddr>, String) = match host {
Host::Ipv4(ip) => (Some(IpAddr::V4(*ip)), ip.to_string()),
Host::Ipv6(ip) => (Some(IpAddr::V6(*ip)), ip.to_string()),
Host::Domain(d) => (None, (*d).to_string()),
Host::Domain(d) => (None, d.as_ref().to_string()),
};
if let Some(ip) = ip {
@@ -134,6 +166,9 @@ pub enum CustomHttpClientError {
domain: Option<String>,
ip: IpAddr,
},
Invalid {
domain: String,
},
}
impl CustomHttpClientError {
@@ -155,7 +190,7 @@ impl fmt::Display for CustomHttpClientError {
match self {
Self::Blocked {
domain,
} => write!(f, "Blocked domain: {domain} matched HTTP_REQUEST_BLOCK_REGEX"),
} => write!(f, "Blocked domain: '{domain}' matched HTTP_REQUEST_BLOCK_REGEX"),
Self::NonGlobalIp {
domain: Some(domain),
ip,
@@ -163,7 +198,10 @@ impl fmt::Display for CustomHttpClientError {
Self::NonGlobalIp {
domain: None,
ip,
} => write!(f, "IP {ip} is not a global IP!"),
} => write!(f, "IP '{ip}' is not a global IP!"),
Self::Invalid {
domain,
} => write!(f, "Invalid host: '{domain}' contains invalid characters or exceeds the maximum length"),
}
}
}
@@ -217,7 +255,13 @@ impl CustomDnsResolver {
}
fn pre_resolve(name: &str) -> Result<(), CustomHttpClientError> {
if should_block_address(name) {
let Ok(host) = get_valid_host(name) else {
return Err(CustomHttpClientError::Invalid {
domain: name.to_string(),
});
};
if should_block_host(&host).is_err() {
return Err(CustomHttpClientError::Blocked {
domain: name.to_string(),
});
@@ -308,3 +352,209 @@ pub(crate) mod aws {
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::util::is_global_hardcoded;
use std::net::Ipv4Addr;
use url::Host;
// ===
// IPv4 numeric-format normalization
fn parse_to_ip(s: &str) -> Option<IpAddr> {
match Host::parse(s).ok()? {
Host::Ipv4(v4) => Some(IpAddr::V4(v4)),
Host::Ipv6(v6) => Some(IpAddr::V6(v6)),
Host::Domain(_) => None,
}
}
#[test]
fn dotted_decimal_loopback_normalizes() {
let ip = parse_to_ip("127.0.0.1").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn single_decimal_loopback_normalizes() {
// 127.0.0.1 == 2130706433
let ip = parse_to_ip("2130706433").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn hex_loopback_normalizes() {
let ip = parse_to_ip("0x7f000001").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn dotted_hex_loopback_normalizes() {
let ip = parse_to_ip("0x7f.0.0.1").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn octal_loopback_normalizes() {
// 017700000001 == 127.0.0.1
let ip = parse_to_ip("017700000001").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn dotted_octal_loopback_normalizes() {
let ip = parse_to_ip("0177.0.0.01").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn aws_metadata_decimal_blocked() {
// 169.254.169.254 == 2852039166 (link-local, AWS IMDS)
let ip = parse_to_ip("2852039166").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(169, 254, 169, 254)));
assert!(!is_global_hardcoded(ip));
}
#[test]
fn rfc1918_hex_blocked() {
// 10.0.0.1
let ip = parse_to_ip("0x0a000001").unwrap();
assert!(!is_global_hardcoded(ip));
}
#[test]
fn public_ip_decimal_allowed() {
// 8.8.8.8 == 134744072
let ip = parse_to_ip("134744072").unwrap();
assert_eq!(ip, IpAddr::V4(Ipv4Addr::new(8, 8, 8, 8)));
assert!(is_global_hardcoded(ip));
}
// ===
// get_valid_host integration: numeric forms become Host::Ipv4
#[test]
fn get_valid_host_normalizes_decimal_int() {
let h = get_valid_host("2130706433").expect("valid");
assert!(matches!(h, Host::Ipv4(ip) if ip == Ipv4Addr::new(127, 0, 0, 1)));
}
#[test]
fn get_valid_host_normalizes_hex() {
let h = get_valid_host("0x7f000001").expect("valid");
assert!(matches!(h, Host::Ipv4(ip) if ip == Ipv4Addr::new(127, 0, 0, 1)));
}
#[test]
fn get_valid_host_normalizes_octal() {
let h = get_valid_host("017700000001").expect("valid");
assert!(matches!(h, Host::Ipv4(ip) if ip == Ipv4Addr::new(127, 0, 0, 1)));
}
// ===
// IPv6 formats
#[test]
fn ipv6_loopback_blocked() {
let h = get_valid_host("[::1]").expect("valid");
let Host::Ipv6(ip) = h else {
panic!("expected v6")
};
assert!(!is_global_hardcoded(IpAddr::V6(ip)));
}
#[test]
fn ipv4_mapped_in_ipv6_loopback_blocked() {
// ::ffff:127.0.0.1 — v4-mapped form; is_global_hardcoded blocks via ::ffff:0:0/96
let h = get_valid_host("[::ffff:127.0.0.1]").expect("valid");
let Host::Ipv6(ip) = h else {
panic!("expected v6")
};
assert!(!is_global_hardcoded(IpAddr::V6(ip)));
}
#[test]
fn ipv6_unique_local_blocked() {
let h = get_valid_host("[fc00::1]").expect("valid");
let Host::Ipv6(ip) = h else {
panic!("expected v6")
};
assert!(!is_global_hardcoded(IpAddr::V6(ip)));
}
// ===
// Punycode / IDN
#[test]
fn punycode_passthrough() {
let h = get_valid_host("xn--deadbeafcaf-lbb.test").expect("valid");
match h {
Host::Domain(d) => assert_eq!(d, "xn--deadbeafcaf-lbb.test"),
_ => panic!("expected domain"),
}
}
#[test]
fn idn_unicode_gets_punycoded() {
let h = get_valid_host("deadbeafcafé.test").expect("valid");
match h {
Host::Domain(d) => assert_eq!(d, "xn--deadbeafcaf-lbb.test"),
_ => panic!("expected domain"),
}
}
#[test]
fn idn_unicode_gets_punycoded_tld() {
let h = get_valid_host("deadbeaf.café").expect("valid");
match h {
Host::Domain(d) => assert_eq!(d, "deadbeaf.xn--caf-dma"),
_ => panic!("expected domain"),
}
}
#[test]
fn idn_emoji_gets_punycoded() {
let h = get_valid_host("xn--t88h.test").expect("valid"); // 🛡️.test
match h {
Host::Domain(d) => assert_eq!(d, "xn--t88h.test"),
_ => panic!("expected domain"),
}
}
#[test]
fn idn_unicode_to_punycode_roundtrip() {
let from_unicode = get_valid_host("🛡️.test").expect("valid");
let from_puny = get_valid_host("xn--t88h.test").expect("valid");
match (from_unicode, from_puny) {
(Host::Domain(a), Host::Domain(b)) => assert_eq!(a, b),
_ => panic!("expected domains"),
}
}
#[test]
fn invalid_punycode_rejected() {
// bare invalid punycode
assert!(get_valid_host("xn--").is_err());
}
#[test]
fn underscore_in_label_rejected() {
assert!(get_valid_host("dead_beaf.cafe").is_err());
}
#[test]
fn label_too_long_rejected() {
let label = "a".repeat(64);
assert!(get_valid_host(&format!("{label}.test")).is_err());
}
#[test]
fn domain_too_long_rejected() {
let big = "a.".repeat(130) + "test"; // > 253
assert!(get_valid_host(&big).is_err());
}
}
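Several of the tests above rely on `is_global_hardcoded` rejecting non-global ranges. One of the less obvious ones is the RFC 6598 shared address space 100.64.0.0/10, which the codebase checks with a two-bit mask on the second octet. A minimal std-only sketch of just that mask (hypothetical `is_shared` helper):

```rust
// RFC 6598 shared address space is 100.64.0.0/10: first octet 100, and the
// top two bits of the second octet equal to 01 (i.e. values 64..=127).
fn is_shared(octets: [u8; 4]) -> bool {
    octets[0] == 100 && (octets[1] & 0b1100_0000) == 0b0100_0000
}

fn main() {
    assert!(is_shared([100, 64, 0, 1]));      // lower bound of the /10
    assert!(is_shared([100, 127, 255, 255])); // upper bound of the /10
    assert!(!is_shared([100, 128, 0, 1]));    // just past the range
    assert!(!is_shared([100, 63, 255, 255])); // just before the range
}
```

Masking with `0b1100_0000` keeps only the top two bits of the octet, so the single comparison covers the whole 64..=127 span without a range check.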
+4 -3
@@ -17,7 +17,7 @@ use crate::{
CONFIG,
};
pub static FAKE_SSO_IDENTIFIER: &str = "vaultwarden-dummy-oidc-identifier";
pub static FAKE_SSO_IDENTIFIER: &str = "00000000-01DC-01DC-01DC-000000000000";
static SSO_JWT_ISSUER: LazyLock<String> = LazyLock::new(|| format!("{}|sso", CONFIG.domain_origin()));
@@ -188,6 +188,7 @@ pub async fn authorize_url(
client_challenge: OIDCCodeChallenge,
client_id: &str,
raw_redirect_uri: &str,
binding_hash: Option<String>,
conn: DbConn,
) -> ApiResult<Url> {
let redirect_uri = match client_id {
@@ -203,7 +204,7 @@ pub async fn authorize_url(
_ => err!(format!("Unsupported client {client_id}")),
};
let (auth_url, sso_auth) = Client::authorize_url(state, client_challenge, redirect_uri).await?;
let (auth_url, sso_auth) = Client::authorize_url(state, client_challenge, redirect_uri, binding_hash).await?;
sso_auth.save(&conn).await?;
Ok(auth_url)
}
@@ -283,7 +284,7 @@ pub async fn exchange_code(
let email_verified = id_claims.email_verified().or(user_info.email_verified());
let user_name = id_claims.preferred_username().map(|un| un.to_string());
let user_name = id_claims.preferred_username().or(user_info.preferred_username()).map(|un| un.to_string());
let refresh_token = token_response.refresh_token().map(|t| t.secret());
if refresh_token.is_none() && CONFIG.sso_scopes_vec().contains(&"offline_access".to_string()) {
+2 -1
@@ -117,6 +117,7 @@ impl Client {
state: OIDCState,
client_challenge: OIDCCodeChallenge,
redirect_uri: String,
binding_hash: Option<String>,
) -> ApiResult<(Url, SsoAuth)> {
let scopes = CONFIG.sso_scopes_vec().into_iter().map(Scope::new);
let base64_state = data_encoding::BASE64.encode(state.to_string().as_bytes());
@@ -139,7 +140,7 @@ impl Client {
}
let (auth_url, _, nonce) = auth_req.url();
Ok((auth_url, SsoAuth::new(state, client_challenge, nonce.secret().clone(), redirect_uri)))
Ok((auth_url, SsoAuth::new(state, client_challenge, nonce.secret().clone(), redirect_uri, binding_hash)))
}
pub async fn exchange_code(
+13 -2
@@ -1,6 +1,17 @@
body {
padding-top: 75px;
}
/* Some extra widths for the main layout */
@media (min-width: 1600px) {
.container-xxl {
max-width: 1520px;
}
}
@media (min-width: 1800px) {
.container-xxl {
max-width: 1720px;
}
}
img {
width: 48px;
height: 48px;
@@ -38,8 +49,8 @@ img {
max-width: 130px;
}
#users-table .vw-actions, #orgs-table .vw-actions {
min-width: 155px;
max-width: 160px;
min-width: 170px;
max-width: 180px;
}
#users-table .vw-org-cell {
max-height: 120px;
+1 -1
@@ -27,7 +27,7 @@
</symbol>
</svg>
<nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top">
<div class="container-xl">
<div class="container-xxl">
<a class="navbar-brand" href="{{urlpath}}/admin"><img class="vaultwarden-icon" src="{{urlpath}}/vw_static/vaultwarden-icon.png" alt="V">aultwarden Admin</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarCollapse"
aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation">
+1 -1
@@ -1,4 +1,4 @@
-<main class="container-xl">
+<main class="container-xxl">
<div id="diagnostics-block" class="my-3 p-3 rounded shadow">
<h6 class="border-bottom pb-2 mb-2">Diagnostics</h6>
+1 -1
@@ -1,4 +1,4 @@
-<main class="container-xl">
+<main class="container-xxl">
{{#if error}}
<div class="align-items-center p-3 mb-3 text-opacity-50 text-dark bg-warning rounded shadow">
<div>
+1 -1
@@ -1,4 +1,4 @@
-<main class="container-xl">
+<main class="container-xxl">
<div id="organizations-block" class="my-3 p-3 rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Organizations</h6>
<div class="table-responsive-xl small">
+1 -1
@@ -1,4 +1,4 @@
-<main class="container-xl">
+<main class="container-xxl">
<div id="admin_token_warning" class="alert alert-warning alert-dismissible fade show d-none">
<button type="button" class="btn-close" data-bs-target="admin_token_warning" data-bs-dismiss="alert" aria-label="Close"></button>
You are using a plain text `ADMIN_TOKEN` which is insecure.<br>
+2 -2
@@ -1,4 +1,4 @@
-<main class="container-xl">
+<main class="container-xxl">
<div id="users-block" class="my-3 p-3 rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Registered Users</h6>
<div class="table-responsive-xl small">
@@ -43,7 +43,7 @@
</td>
{{#if ../sso_enabled}}
<td>
-<span class="d-block">{{sso_identifier}}</span>
+<span class="d-block text-break text-wrap">{{sso_identifier}}</span>
</td>
{{/if}}
<td>
+18 -8
@@ -734,7 +734,7 @@ where
warn!("Can't connect to database, retrying: {e:?}");
-sleep(Duration::from_millis(1_000)).await;
+sleep(Duration::from_secs(1)).await;
}
}
}
@@ -818,14 +818,18 @@ pub fn is_global_hardcoded(ip: std::net::IpAddr) -> bool {
std::net::IpAddr::V4(ip) => {
!(ip.octets()[0] == 0 // "This network"
|| ip.is_private()
-|| (ip.octets()[0] == 100 && (ip.octets()[1] & 0b1100_0000 == 0b0100_0000)) //ip.is_shared()
+|| (ip.octets()[0] == 100 && (ip.octets()[1] & 0b1100_0000 == 0b0100_0000)) // ip.is_shared()
|| ip.is_loopback()
|| ip.is_link_local()
// addresses reserved for future protocols (`192.0.0.0/24`)
-||(ip.octets()[0] == 192 && ip.octets()[1] == 0 && ip.octets()[2] == 0)
+// .9 and .10 are documented as globally reachable so they're excluded
+|| (
+    ip.octets()[0] == 192 && ip.octets()[1] == 0 && ip.octets()[2] == 0
+    && ip.octets()[3] != 9 && ip.octets()[3] != 10
+)
|| ip.is_documentation()
|| (ip.octets()[0] == 198 && (ip.octets()[1] & 0xfe) == 18) // ip.is_benchmarking()
-|| (ip.octets()[0] & 240 == 240 && !ip.is_broadcast()) //ip.is_reserved()
+|| (ip.octets()[0] & 240 == 240 && !ip.is_broadcast()) // ip.is_reserved()
|| ip.is_broadcast())
}
std::net::IpAddr::V6(ip) => {
@@ -849,11 +853,17 @@ pub fn is_global_hardcoded(ip: std::net::IpAddr) -> bool {
// AS112-v6 (`2001:4:112::/48`)
|| matches!(ip.segments(), [0x2001, 4, 0x112, _, _, _, _, _])
// ORCHIDv2 (`2001:20::/28`)
-|| matches!(ip.segments(), [0x2001, b, _, _, _, _, _, _] if (0x20..=0x2F).contains(&b))
+// Drone Remote ID Protocol Entity Tags (DETs) Prefix (`2001:30::/28`)
+|| matches!(ip.segments(), [0x2001, b, _, _, _, _, _, _] if (0x20..=0x3F).contains(&b))
))
-|| ((ip.segments()[0] == 0x2001) && (ip.segments()[1] == 0xdb8)) // ip.is_documentation()
-|| ((ip.segments()[0] & 0xfe00) == 0xfc00) //ip.is_unique_local()
-|| ((ip.segments()[0] & 0xffc0) == 0xfe80)) //ip.is_unicast_link_local()
+// 6to4 (`2002::/16`) is not explicitly documented as globally reachable,
+// IANA says N/A.
+|| matches!(ip.segments(), [0x2002, _, _, _, _, _, _, _])
+|| matches!(ip.segments(), [0x2001, 0xdb8, ..] | [0x3fff, 0..=0x0fff, ..]) // ip.is_documentation()
+// Segment Routing (SRv6) SIDs (`5f00::/16`)
+|| matches!(ip.segments(), [0x5f00, ..])
+|| ip.is_unique_local()
+|| ip.is_unicast_link_local())
}
}
}
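The range checks adjusted in the `is_global_hardcoded` hunks can be sketched in isolation (hypothetical helper names, not the crate's actual API): the IPv4 shared address space 100.64.0.0/10 (CGNAT), the 192.0.0.0/24 future-protocols block minus the globally reachable .9/.10 hosts, and the widened IPv6 `2001:20::/28` through `2001:30::/28` span (ORCHIDv2 plus DETs).

```rust
use std::net::Ipv6Addr;

// 100.64.0.0/10: first octet 100, top two bits of the second octet are `01`.
fn is_cgnat(octets: [u8; 4]) -> bool {
    octets[0] == 100 && (octets[1] & 0b1100_0000) == 0b0100_0000
}

// 192.0.0.0/24 reserved for future protocols, excluding the .9 and .10
// hosts that IANA documents as globally reachable.
fn is_future_proto(octets: [u8; 4]) -> bool {
    octets[0] == 192 && octets[1] == 0 && octets[2] == 0
        && octets[3] != 9 && octets[3] != 10
}

// `2001:20::/28`..`2001:30::/28`: second segment in 0x20..=0x3F.
fn is_orchid_or_det(ip: Ipv6Addr) -> bool {
    matches!(ip.segments(), [0x2001, b, ..] if (0x20..=0x3F).contains(&b))
}

fn main() {
    assert!(is_cgnat([100, 64, 0, 1]));
    assert!(is_cgnat([100, 127, 255, 255]));
    assert!(!is_cgnat([100, 128, 0, 0])); // outside the /10

    assert!(is_future_proto([192, 0, 0, 8]));
    assert!(!is_future_proto([192, 0, 0, 9])); // globally reachable, excluded

    assert!(is_orchid_or_det("2001:30::1".parse().unwrap())); // DETs prefix
    assert!(!is_orchid_or_det("2001:40::1".parse().unwrap()));
}
```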