Compare commits

...

34 Commits

Author SHA1 Message Date
Mathijs van Veluw
a523c82f5f Use updated fern instead of patch (#5298)
Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-15 23:13:29 +01:00
Mathijs van Veluw
4d6d3443ae Allow adding connect-src entries (#5293)
Bitwarden allows the use of self-hosted email-forwarding services.
For this to work you need to add custom URLs to the `connect-src` CSP entry.

This commit makes that configurable and checks that each URL starts with `https://`; otherwise loading is aborted.
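A minimal sketch of such a check (not Vaultwarden's actual code; the function name, error handling and the example origins are illustrative only):

```rust
/// Illustrative sketch: accept only `https://` origins for the extra
/// `connect-src` CSP entries and abort otherwise.
fn validate_connect_src(allowed_connect_src: &str) -> Result<Vec<String>, String> {
    let mut origins = Vec::new();
    for entry in allowed_connect_src.split_whitespace() {
        if !entry.starts_with("https://") {
            return Err(format!("ALLOWED_CONNECT_SRC entry '{entry}' must start with https://"));
        }
        origins.push(entry.to_string());
    }
    Ok(origins)
}

fn main() {
    // Example: two forwarded-email providers (made-up domains).
    let cfg = "https://my-addy-io.domain.tld https://my-simplelogin.domain.tld";
    match validate_connect_src(cfg) {
        Ok(origins) => println!("connect-src additions: {origins:?}"),
        Err(e) => eprintln!("aborting: {e}"),
    }
}
```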

Fixes #5290

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-15 00:27:20 +01:00
Mathijs van Veluw
9cd400db6c Some refactoring and optimizations (#5291)
- Refactored several pieces of code to use more modern syntax
- Made some checks a bit more strict
- Updated crates

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-14 00:55:34 +01:00
Helmut K. C. Tessarek
fd51230044 feat: mask _smtp_img_src in support string (#5281) 2024-12-12 14:35:07 +01:00
Mathijs van Veluw
45e5f06b86 Some Backend Admin fixes and updates (#5272)
* Some Backend Admin fixes and updates

- Updated datatables
- Added an `X-Robots-Tags` header to prevent indexing
- Modified some layout settings
- Added Websocket check to diagnostics
- Added Security Header checks to diagnostics
- Added Error page response checks to diagnostics
- Modified the support string layout a bit

Signed-off-by: BlackDex <black.dex@gmail.com>

* Some small fixes

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-10 21:52:12 +01:00
Daniel
620ad92331 Update crates (#5268)
- fixes CVE-2024-12224
2024-12-10 17:59:28 +01:00
Mathijs van Veluw
c9860af11c Fix another sync issue with native clients (#5259)
The `reprompt` value somehow sometimes ends up as `4`.
This isn't a valid value; it doesn't cause issues with other clients, but the native clients are stricter.

This commit fixes this by validating the value before storing and returning.
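A minimal sketch of the idea, assuming `0` (none) and `1` (password) are the only valid reprompt values and that invalid values are simply dropped; the actual fix may normalize differently:

```rust
/// Illustrative sketch: keep only known reprompt values (0 = None, 1 = Password);
/// anything else (e.g. the stray `4`) is dropped so strict clients don't choke.
fn normalize_reprompt(value: Option<i32>) -> Option<i32> {
    match value {
        Some(0) | Some(1) => value,
        _ => None,
    }
}

fn main() {
    assert_eq!(normalize_reprompt(Some(4)), None); // the problematic value from the issue
    assert_eq!(normalize_reprompt(Some(1)), Some(1));
    assert_eq!(normalize_reprompt(None), None);
    println!("reprompt normalization ok");
}
```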

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-08 21:48:19 +01:00
Daniel
d7adce97df Update Alpine to version 3.21 (#5256) 2024-12-06 11:05:52 +01:00
Mathijs van Veluw
71b3d3c818 Update Rust and crates (#5248)
* Update Rust and crates

- Updated Rust to v1.83.0
- Updated MSRV to v1.82.0 (Needed for html5gum crate)
- Updated icon fetching code to match new html5gum version
- Updated workflows
- Enabled edition 2024 clippy lints
  Nightly reports some clippy hints, but that would be too much to change in this PR, I think.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Some additional updates

- Patch fern to allow syslog-7 feature
- Fixed diesel logger which was broken because of the sqlite backup feature
  Refactored the sqlite backup because of this
- Added a build workflow test to include the query_logger feature

Signed-off-by: BlackDex <black.dex@gmail.com>

* Also patch yubico-rs and latest updates

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-05 22:10:59 +01:00
chuangjinglu
da3701c0cf chore: fix some comments (#5224)
Signed-off-by: chuangjinglu <chuangjinglu@outlook.com>
2024-11-25 18:35:00 +01:00
Mathijs van Veluw
96813b1317 Fix editing members which have access-all rights (#5213)
With web-vault v2024.6.2 and lower, if a user has access-all rights, either as an org member or via a group, individual collections shouldn't be returned.

This will probably need to change with newer versions, which no longer support the `access-all` feature and work with `manage` instead.
With the current version this should solve access-rights issues.
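A hedged sketch of that decision only, using made-up types; the real code works on the organization member and group models:

```rust
/// Illustrative sketch: when a member has access-all rights (directly or via a
/// group), return an empty collection list instead of the individual collections.
struct Member {
    access_all: bool,
    group_access_all: bool,
    collection_ids: Vec<String>,
}

fn collections_for_edit(member: &Member) -> Vec<String> {
    if member.access_all || member.group_access_all {
        Vec::new() // web-vault <= v2024.6.2 expects no per-collection entries here
    } else {
        member.collection_ids.clone()
    }
}

fn main() {
    let m = Member { access_all: true, group_access_all: false, collection_ids: vec!["c1".into()] };
    assert!(collections_for_edit(&m).is_empty());
}
```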

Fixes #5212

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-20 17:38:16 +01:00
Mathijs van Veluw
b0b953f348 Fix push not working (#5214)
The new native mobile clients seem to use PascalCase for the push payload.
The date/time format could also cause issues.

This PR fixes that by formatting the date/time correctly and using PascalCase for the payload keys.
I now receive cipher updates and login-with-device requests again.
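A small sketch of the two ingredients, using serde's `rename_all = "PascalCase"` and an RFC 3339 timestamp; the field set and the exact date format are assumptions, not Vaultwarden's real payload:

```rust
use chrono::{SecondsFormat, Utc};
use serde::Serialize;

/// Illustrative push payload with PascalCase keys; the real payload carries more fields.
#[derive(Serialize)]
#[serde(rename_all = "PascalCase")]
struct PushData {
    id: String,
    user_id: String,
    revision_date: String,
}

fn main() {
    let data = PushData {
        id: "cipher-uuid".into(),
        user_id: "user-uuid".into(),
        // RFC 3339 with fixed sub-second precision; the exact format used may differ.
        revision_date: Utc::now().to_rfc3339_opts(SecondsFormat::Micros, true),
    };
    // Keys come out as "Id", "UserId", "RevisionDate".
    println!("{}", serde_json::to_string_pretty(&data).unwrap());
}
```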

Fixes #5182

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-20 17:32:44 +01:00
Mathijs van Veluw
cdfdc6ff4f Fix Org Import duplicate collections (#5200)
This fixes an issue where collections were duplicated, the same issue folders had before.
Also made some optimizations by using HashSet where possible and pre-sizing the Vec/Hash capacity.
Instead of passing whole objects, only the UUID is used, as that was the only value we needed.

Also found an issue with importing a personal export via the Org import when folders are used.
Since orgs do not use folders we need to clear those out, the same as Bitwarden does.
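A simplified sketch of both normalizations (deduplicating with a `HashSet` and clearing folder references); the item types below are made up for illustration:

```rust
use std::collections::HashSet;

/// Simplified import item: a cipher with an optional folder reference.
struct ImportCipher {
    name: String,
    folder_id: Option<String>,
}

fn main() {
    // Deduplicate collection names with a HashSet, pre-sizing it to the input length.
    let incoming = vec!["Work", "Private", "Work", "Finance", "Private"];
    let mut seen = HashSet::with_capacity(incoming.len());
    let collections: Vec<&str> = incoming.into_iter().filter(|c| seen.insert(*c)).collect();
    assert_eq!(collections, ["Work", "Private", "Finance"]);

    // Organizations don't use folders, so strip folder references from a
    // personal export before importing it into an org (mirroring Bitwarden).
    let mut ciphers = vec![ImportCipher { name: "Login".into(), folder_id: Some("f1".into()) }];
    for cipher in &mut ciphers {
        cipher.folder_id = None;
    }
    assert!(ciphers.iter().all(|c| c.folder_id.is_none()));
    println!("import normalization ok");
}
```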

Fixes #5193

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-17 21:33:23 +01:00
Daniel García
2393c3f3c0 Support SSH keys on desktop 2024.12 (#5187)
* Support SSH keys on desktop 2024.12

* Document flags in .env.template

* Validate key rotation contents
2024-11-15 18:38:16 +01:00
Daniel García
0d16b38a68 Some more authrequest changes (#5188) 2024-11-15 11:25:51 +01:00
Stefan Melmuk
ff33534c07 don't infer manage permission for groups (#5190)
the web-vault v2024.6.2 currently cannot deal with the manage permission, so
instead of relying on the org user type this should just default to false
2024-11-13 19:19:19 +01:00
Stefan Melmuk
adb21d5c1a fix password hint check (#5189)
* fix password hint check

don't show password hints if hints have been disabled with
PASSWORD_HINTS_ALLOWED=false, or if mail is not configured and showing
password hints has not been opted into (see the sketch below)

* update descriptions for pw hints options
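A hedged sketch of that gate; the config field names below are stand-ins for the real settings (`PASSWORD_HINTS_ALLOWED`, the mail configuration and `SHOW_PASSWORD_HINT`):

```rust
/// Illustrative hint gate: the accessor names are made up, but the logic
/// follows the description above.
struct Config {
    password_hints_allowed: bool,
    mail_enabled: bool,
    show_password_hint: bool,
}

fn may_reveal_hint(cfg: &Config) -> bool {
    // Hints are off globally, or mail is not configured and the admin has not
    // explicitly opted into showing hints on the web page.
    if !cfg.password_hints_allowed {
        return false;
    }
    cfg.mail_enabled || cfg.show_password_hint
}

fn main() {
    let cfg = Config { password_hints_allowed: false, mail_enabled: true, show_password_hint: true };
    assert!(!may_reveal_hint(&cfg));
    let cfg = Config { password_hints_allowed: true, mail_enabled: false, show_password_hint: false };
    assert!(!may_reveal_hint(&cfg));
    println!("hint checks ok");
}
```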
2024-11-12 21:22:25 +01:00
Mathijs van Veluw
e927b8aa5e Remove auth-request deletion (#5184)
2FA is needed to log in even when using login-with-device.
If the user didn't save the 2FA token, they still need to provide it.
We deleted the auth-request after validating the request, but before 2FA was triggered.

The deletion of this record is removed from that point, as it will get cleaned up automatically anyway.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-12 15:48:39 +01:00
Mathijs van Veluw
ba48ca68fc fix hibp username encoding and pw hint check (#5180)
* fix hibp username encoding

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix password-hint check

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-12 11:09:28 +01:00
Mathijs van Veluw
294b429436 Add dynamic CSS support (#4940)
* Add dynamic CSS support

Together with https://github.com/dani-garcia/bw_web_builds/pull/180 this PR will add support for dynamic CSS changes.

For example, we could hide the register link if signups are not allowed.
In the future, we could show or hide the SSO button depending on whether it is enabled.

There is also a special `user.vaultwarden.scss` file so that users can add custom CSS without modifying the default (static) templates.
This prevents future template updates from being skipped while still applying the user's custom changes.

Also added a special redirect for requests that go directly to `/index.html`, as that might cause issues with loading other scripts and files.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add versions and fallback to built-in

- Add both Vaultwarden and web-vault versions to the css_options.
- Fall back to the built-in templates if rendering or compiling the SCSS fails.
  This ensures the basics are always working even if someone breaks the templates.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix fallback code to actually work

The fallback now works by using an alternative `reg!` macro.
This registers an extra copy of each template, prefixed with `fallback_`.
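A hedged sketch of the fallback idea only; the template registry and the SCSS compile step below are stand-ins, not the real `reg!` macro or Vaultwarden's rendering code:

```rust
use std::collections::HashMap;

/// Illustrative fallback: try the (possibly user-modified) template first,
/// then the built-in copy registered under a `fallback_` prefix.
fn render(templates: &HashMap<String, String>, name: &str) -> Option<String> {
    templates
        .get(name)
        .and_then(|scss| compile_scss(scss))
        .or_else(|| templates.get(&format!("fallback_{name}")).and_then(|scss| compile_scss(scss)))
}

/// Stand-in for the real SCSS compilation; pretend anything containing "broken" fails.
fn compile_scss(scss: &str) -> Option<String> {
    if scss.contains("broken") {
        None
    } else {
        Some(scss.to_string())
    }
}

fn main() {
    let mut templates = HashMap::new();
    templates.insert("vaultwarden.scss".to_string(), "broken { css".to_string());
    templates.insert("fallback_vaultwarden.scss".to_string(), "body { color: black; }".to_string());
    // The user template fails to compile, so the built-in fallback is served.
    assert_eq!(render(&templates, "vaultwarden.scss").as_deref(), Some("body { color: black; }"));
}
```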

Signed-off-by: BlackDex <black.dex@gmail.com>

* Updated the wiki link in the user template

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-11 20:14:04 +01:00
Daniel García
37c14c3c69 More authrequest fixes (#5176) 2024-11-11 20:13:02 +01:00
Mathijs van Veluw
d0581da638 Fix if logic error (#5171)
Fixes a logic error in an if statement where we used `&&` but should have used `||`.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-11 11:50:33 +01:00
Daniel García
38aad4f7be Limit HIBP to authed users 2024-11-10 23:59:06 +01:00
BlackDex
20d9e885bf Update crates and fix several issues
Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-10 23:56:19 +01:00
Mathijs van Veluw
2f20ad86f9 Update README (#5153)
Updating the README to be more modern and clearer.
Added and moved several shields/badges and changed some default colors for better contrast.
Added a Disclaimer section.

Closes #4901
Closes #4930
Closes #4931
Closes #5024

Co-authored-by: ipitio <21136719+ipitio@users.noreply.github.com>
Co-authored-by: Robert Schütz <github@dotlambda.de>
Co-authored-by: Yonas Yanfa <yonas.y@gmail.com>
Co-authored-by: KUSUMA RUSHIKESH <141169227+rushi-k12@users.noreply.github.com>
2024-11-02 22:20:10 +01:00
Mathijs van Veluw
33bae5fbe9 Update crates and fix Mail issue (#5125)
- Updated all the crates
  Included in this update is an update to lettre, which solves an issue with some specific SMTP mail providers.
2024-10-24 19:13:20 +02:00
Daniel
f60502a17e Add documentation for the extension-refresh feature flag (#5112) 2024-10-21 00:05:11 +02:00
Mathijs van Veluw
13f4b66e62 Hide user name on invite status (#5110)
This fixes a possible user disclosure when you invite a user who already has an account on the same instance into an organization.
This happened because we always returned the user's name.
To prevent this, this PR only returns the user's name if the status is accepted or higher; otherwise null is returned.
This is the same as Bitwarden does.
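A minimal sketch, assuming the usual Bitwarden member status values (0 = invited, 1 = accepted, 2 = confirmed); the function is illustrative, not the actual endpoint code:

```rust
use serde_json::{json, Value};

/// Simplified member status threshold (0 = invited, 1 = accepted, 2 = confirmed).
const STATUS_ACCEPTED: i32 = 1;

/// Illustrative sketch: only expose the display name once the invite was accepted.
fn member_name(status: i32, name: &str) -> Value {
    if status >= STATUS_ACCEPTED {
        json!(name)
    } else {
        Value::Null
    }
}

fn main() {
    assert_eq!(member_name(0, "Alice"), Value::Null);     // still invited: name hidden
    assert_eq!(member_name(2, "Alice"), json!("Alice"));  // confirmed: name shown
}
```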

Resolves a reported issue.

Also resolved a new clippy lint reported by `nightly` regarding a regex within a loop.
2024-10-19 18:22:21 +02:00
Daniel
c967d0ddc1 Add extension-refresh feature flag (#5106)
- in case people want to try out the new extension design
2024-10-19 18:21:00 +02:00
Mathijs van Veluw
ae6ed0ece8 Fix collection management and match some json output (#5095)
- Fixed collection management to be usable from the Password Manager UI
- Checked several JSON responses and brought them in sync with upstream
- Fixed a small issue with the `fields` response when it was empty

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-18 20:37:32 +02:00
Daniel
b7c254eb30 Update Rust to 1.82.0 (#5099)
- raise MSRV to 1.80.0
- also update the crates
2024-10-18 20:34:31 +02:00
Mathijs van Veluw
a47b484172 Fix org invite url being html encoded (#5100)
Ever since we changed to passing the full URL as a template value, handlebars HTML-encodes it.
This causes issues with the text/plain mails, and could potentially also cause issues with the text/html templates.

This PR encloses the template values inside triple braces `{{{ }}}` which prevents html-encoding.
Since the URL is generated via the `url` crate the values are percent-encoded anyway.
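A small, self-contained example with the `handlebars` crate showing the difference between `{{ }}` and `{{{ }}}`; the template string and URL are made up:

```rust
use handlebars::Handlebars;
use serde_json::json;

fn main() {
    let hb = Handlebars::new();
    let data = json!({ "url": "https://example.com/accept?email=user%40example.com&token=abc" });

    // `{{url}}` HTML-escapes the value, which mangles plain-text emails...
    let escaped = hb.render_template("Click: {{url}}", &data).unwrap();
    // ...while `{{{url}}}` emits it verbatim; percent-encoding already makes it safe.
    let raw = hb.render_template("Click: {{{url}}}", &data).unwrap();

    assert!(escaped.contains("&amp;"));
    assert!(raw.contains("&token=abc"));
    println!("{escaped}\n{raw}");
}
```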

Fixes #5097

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-18 20:34:11 +02:00
Mathijs van Veluw
65629a99f0 Fix field type to actually be hidden (#5082)
In an oversight I forgot to set the type to the hidden type if converting to an int was not possible.
This fixes that.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-13 20:32:15 +02:00
Mathijs van Veluw
49c5dec9b6 Fix iOS sync by converting field types to int (#5081)
It seems the iOS clients are not able to handle the `type` key within the `fields` array when it is of type string.
All other clients seem to handle this just fine though.

This PR fixes this by validating that the value is a number; if it is not, it tries to convert the string to a number, falling back to the default of `1`.
`1` is used because that is the `hidden` type, which should prevent accidental data disclosure.
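A minimal sketch of that normalization on a JSON value; the helper name is illustrative:

```rust
use serde_json::{json, Value};

/// Illustrative sketch: force the `type` of a custom field to a number,
/// falling back to 1 (hidden) when it cannot be parsed, so nothing leaks.
fn normalize_field_type(field_type: &Value) -> Value {
    match field_type {
        Value::Number(_) => field_type.clone(),
        Value::String(s) => json!(s.parse::<i32>().unwrap_or(1)),
        _ => json!(1),
    }
}

fn main() {
    assert_eq!(normalize_field_type(&json!(0)), json!(0));         // already numeric
    assert_eq!(normalize_field_type(&json!("2")), json!(2));       // numeric string: converted
    assert_eq!(normalize_field_type(&json!("boolean")), json!(1)); // not parseable: hidden
}
```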

Fixes #5069

Possibly Fixes #5016
Possibly Fixes #5002

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-13 20:25:09 +02:00
63 changed files with 2973 additions and 1977 deletions

View File

@@ -280,12 +280,13 @@
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=600000
## Controls whether users can set password hints. This setting applies globally to all users.
## Controls whether users can set or show password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
## Controls whether a password hint should be shown directly in the web page if
## SMTP service is not configured. Not recommended for publicly-accessible instances
## as this provides unauthenticated access to potentially sensitive data.
## SMTP service is not configured and password hints are allowed.
## Not recommended for publicly-accessible instances because this provides
## unauthenticated access to potentially sensitive data.
# SHOW_PASSWORD_HINT=false
#########################
@@ -347,7 +348,10 @@
## - "autofill-overlay": Add an overlay menu to form fields for quick access to credentials.
## - "autofill-v2": Use the new autofill implementation.
## - "browser-fileless-import": Directly import credentials from other providers without a file.
## - "extension-refresh": Temporarily enable the new extension design until general availability (should be used with the beta Chrome extension)
## - "fido2-vault-credentials": Enable the use of FIDO2 security keys as second factor.
## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
## - "ssh-agent": Enable SSH agent support on Desktop. (Needs desktop >=2024.12.0)
# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
## Require new device emails. When a user logs in an email is required to be sent.
@@ -406,6 +410,14 @@
## Multiple values must be separated with a whitespace.
# ALLOWED_IFRAME_ANCESTORS=
## Allowed connect-src (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/connect-src
## Allows other domains to URLs which can be loaded using script interfaces like the Forwarded email alias feature
## This adds the configured value to the 'Content-Security-Policy' headers 'connect-src' value.
## Multiple values must be separated with a whitespace. And only HTTPS values are allowed.
## Example: "https://my-addy-io.domain.tld https://my-simplelogin.domain.tld"
# ALLOWED_CONNECT_SRC=""
## Number of seconds, on average, between login requests from the same IP address before rate limiting kicks in.
# LOGIN_RATELIMIT_SECONDS=60
## Allow a burst of requests of up to this size, while maintaining the average indicated by `LOGIN_RATELIMIT_SECONDS`.

View File

@@ -47,7 +47,7 @@ jobs:
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 #v4.2.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
# End Checkout the repo
@@ -75,7 +75,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@7b1c307e0dcbda6122208f10795a713336a9b35a # master @ Aug 8, 2024, 7:36 PM GMT+2
uses: dtolnay/rust-toolchain@315e265cd78dad1e1dcf3a5074f6d6c47029d5aa # master @ Nov 18, 2024, 5:36 AM GMT+1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -85,7 +85,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@7b1c307e0dcbda6122208f10795a713336a9b35a # master @ Aug 8, 2024, 7:36 PM GMT+2
uses: dtolnay/rust-toolchain@315e265cd78dad1e1dcf3a5074f6d6c47029d5aa # master @ Nov 18, 2024, 5:36 AM GMT+1
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -107,7 +107,7 @@ jobs:
# End Show environment
# Enable Rust Caching
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84 # v2.7.3
- uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2.7.5
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.
@@ -117,6 +117,12 @@ jobs:
# Run cargo tests
# First test all features together, afterwards test them separately.
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc,query_logger"
id: test_sqlite_mysql_postgresql_mimalloc_logger
if: $${{ always() }}
run: |
cargo test --features sqlite,mysql,postgresql,enable_mimalloc,query_logger
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc"
id: test_sqlite_mysql_postgresql_mimalloc
if: $${{ always() }}
@@ -176,6 +182,7 @@ jobs:
echo "" >> $GITHUB_STEP_SUMMARY
echo "|Job|Status|" >> $GITHUB_STEP_SUMMARY
echo "|---|------|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc,query_logger)|${{ steps.test_sqlite_mysql_postgresql_mimalloc_logger.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql)|${{ steps.test_sqlite_mysql_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite)|${{ steps.test_sqlite.outcome }}|" >> $GITHUB_STEP_SUMMARY

View File

@@ -13,7 +13,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 #v4.2.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
# End Checkout the repo
# Start Docker Buildx

View File

@@ -58,7 +58,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 #v4.2.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
with:
fetch-depth: 0

View File

@@ -28,10 +28,13 @@ jobs:
actions: read
steps:
- name: Checkout code
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 #v4.2.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@5681af892cd0f4997658e2bacc62bd0a894cf564 # v0.27.0
uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0 # v0.29.0
env:
TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
with:
scan-type: repo
ignore-unfixed: true
@@ -40,6 +43,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@2bbafcdd7fbf96243689e764c2f15d9735164f33 # v3.26.6
uses: github/codeql-action/upload-sarif@86b04fb0e47484f7282357688f21d5d0e32175fe # v3.27.5
with:
sarif_file: 'trivy-results.sarif'

Cargo.lock (generated, 814 lines changed) — file diff suppressed because it is too large.

View File

@@ -3,7 +3,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.79.0"
rust-version = "1.82.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -36,13 +36,13 @@ unstable = []
[target."cfg(unix)".dependencies]
# Logging
syslog = "6.1.1"
syslog = "7.0.0"
[dependencies]
# Logging
log = "0.4.22"
fern = { version = "0.6.2", features = ["syslog-6", "reopen-1"] }
tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
fern = { version = "0.7.1", features = ["syslog-7", "reopen-1"] }
tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.7", default-features = false }
@@ -53,7 +53,7 @@ once_cell = "1.20.2"
# Numerical libraries
num-traits = "0.2.19"
num-derive = "0.4.2"
bigdecimal = "0.4.5"
bigdecimal = "0.4.7"
# Web framework
rocket = { version = "0.5.1", features = ["tls", "json"], default-features = false }
@@ -67,16 +67,16 @@ dashmap = "6.1.0"
# Async futures
futures = "0.3.31"
tokio = { version = "1.40.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio = { version = "1.42.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.210", features = ["derive"] }
serde_json = "1.0.128"
serde = { version = "1.0.216", features = ["derive"] }
serde_json = "1.0.133"
# A safe, extensible ORM and Query builder
diesel = { version = "2.2.4", features = ["chrono", "r2d2", "numeric"] }
diesel = { version = "2.2.6", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0"
diesel_logger = { version = "0.3.0", optional = true }
diesel_logger = { version = "0.4.0", optional = true }
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.30.1", features = ["bundled"], optional = true }
@@ -86,12 +86,12 @@ rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.17.8"
# UUID generation
uuid = { version = "1.10.0", features = ["v4"] }
uuid = { version = "1.11.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.38", features = ["clock", "serde"], default-features = false }
chrono = { version = "0.4.39", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.0"
time = "0.3.36"
time = "0.3.37"
# Job scheduler
job_scheduler_ng = "2.0.5"
@@ -106,56 +106,56 @@ jsonwebtoken = "9.3.0"
totp-lite = "2.0.1"
# Yubico Library
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
yubico = { version = "0.12.0", features = ["online-tokio"], default-features = false }
# WebAuthn libraries
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn and favicons
url = "2.5.2"
url = "2.5.4"
# Email libraries
lettre = { version = "0.11.9", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
lettre = { version = "0.11.11", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.9"
# HTML Template library
handlebars = { version = "6.1.0", features = ["dir_source"] }
handlebars = { version = "6.2.0", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.8", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.24.1"
reqwest = { version = "0.12.9", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.24.2"
# Favicon extraction libraries
html5gum = "0.5.7"
regex = { version = "1.11.0", features = ["std", "perf", "unicode-perl"], default-features = false }
html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.7.2"
bytes = "1.9.0"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.53.1", features = ["async"] }
cached = { version = "0.54.0", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.18.1"
cookie_store = "0.21.0"
cookie_store = "0.21.1"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.66"
openssl = "0.10.68"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.15"
governor = "0.6.3"
governor = "0.8.0"
# Check client versions for specific features.
semver = "1.0.23"
semver = "1.0.24"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.43", features = ["secure"], default-features = false, optional = true }
which = "6.0.3"
which = "7.0.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.3"
@@ -163,6 +163,13 @@ argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.3.1"
# Loading a dynamic CSS Stylesheet
grass_compiler = { version = "0.13.4", default-features = false }
[patch.crates-io]
# Patch yubico to remove duplicate crates of older versions
yubico = { git = "https://github.com/BlackDex/yubico-rs", rev = "00df14811f58155c0f02e3ab10f1570ed3e115c6" }
# Strip debuginfo from the release builds
# The symbols are the provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations
@@ -213,7 +220,8 @@ noop_method_call = "deny"
refining_impl_trait = { level = "deny", priority = -1 }
rust_2018_idioms = { level = "deny", priority = -1 }
rust_2021_compatibility = { level = "deny", priority = -1 }
# rust_2024_compatibility = { level = "deny", priority = -1 } # Enable once we are at MSRV 1.81.0
rust_2024_compatibility = { level = "deny", priority = -1 }
edition_2024_expr_fragment_specifier = "allow" # Once changed to Rust 2024 this should be removed and macro's should be validated again
single_use_lifetimes = "deny"
trivial_casts = "deny"
trivial_numeric_casts = "deny"
@@ -222,9 +230,6 @@ unused_import_braces = "deny"
unused_lifetimes = "deny"
unused_qualifications = "deny"
variant_size_differences = "deny"
# The lints below are part of the rust_2024_compatibility group
static-mut-refs = "deny"
unsafe-op-in-unsafe-fn = "deny"
# https://rust-lang.github.io/rust-clippy/stable/index.html
[lints.clippy]

README.md (202 lines changed)
View File

@@ -1,102 +1,144 @@
### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
![Vaultwarden Logo](./resources/vaultwarden-logo-auto.svg)
📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
An alternative server implementation of the Bitwarden Client API, written in Rust and compatible with [official Bitwarden clients](https://bitwarden.com/download/) [[disclaimer](#disclaimer)], perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
---
[![Build](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml/badge.svg)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml)
[![ghcr.io](https://img.shields.io/badge/ghcr.io-download-blue)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/Quay.io-download-blue)](https://quay.io/repository/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden).
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg?style=for-the-badge&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![ghcr.io Pulls](https://img.shields.io/badge/dynamic/json?style=for-the-badge&logo=github&logoColor=fff&color=005AA4&url=https%3A%2F%2Fipitio.github.io%2Fbackage%2Fdani-garcia%2Fvaultwarden%2Fvaultwarden.json&query=%24.downloads&label=ghcr.io%20pulls&cacheSeconds=14400)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg?style=for-the-badge&logo=docker&logoColor=fff&color=005AA4&label=docker.io%20pulls)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/quay.io-download-005AA4?style=for-the-badge&logo=redhat&cacheSeconds=14400)](https://quay.io/repository/vaultwarden/server) <br>
[![Contributors](https://img.shields.io/github/contributors-anon/dani-garcia/vaultwarden.svg?style=flat-square&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)
[![Forks](https://img.shields.io/github/forks/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4)](https://github.com/dani-garcia/vaultwarden/network/members)
[![Stars](https://img.shields.io/github/stars/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4)](https://github.com/dani-garcia/vaultwarden/stargazers)
[![Issues Open](https://img.shields.io/github/issues/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/issues)
[![Issues Closed](https://img.shields.io/github/issues-closed/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue+is%3Aclosed)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg?style=flat-square&logo=vaultwarden&color=944000&cacheSeconds=14400)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt) <br>
[![Dependency Status](https://img.shields.io/badge/dynamic/xml?url=https%3A%2F%2Fdeps.rs%2Frepo%2Fgithub%2Fdani-garcia%2Fvaultwarden%2Fstatus.svg&query=%2F*%5Blocal-name()%3D'svg'%5D%2F*%5Blocal-name()%3D'g'%5D%5B2%5D%2F*%5Blocal-name()%3D'text'%5D%5B4%5D&style=flat-square&logo=rust&label=dependencies&color=005AA4)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GHA Release](https://img.shields.io/github/actions/workflow/status/dani-garcia/vaultwarden/release.yml?style=flat-square&logo=github&logoColor=fff&label=Release%20Workflow)](https://github.com/dani-garcia/vaultwarden/actions/workflows/release.yml)
[![GHA Build](https://img.shields.io/github/actions/workflow/status/dani-garcia/vaultwarden/build.yml?style=flat-square&logo=github&logoColor=fff&label=Build%20Workflow)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml) <br>
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?style=flat-square&logo=matrix&logoColor=fff&color=953B00&cacheSeconds=14400)](https://matrix.to/#/#vaultwarden:matrix.org)
[![GitHub Discussions](https://img.shields.io/github/discussions/dani-garcia/vaultwarden?style=flat-square&logo=github&logoColor=fff&color=953B00&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/discussions)
[![Discourse Discussions](https://img.shields.io/discourse/topics?server=https%3A%2F%2Fvaultwarden.discourse.group%2F&style=flat-square&logo=discourse&color=953B00)](https://vaultwarden.discourse.group/)
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor Bitwarden, Inc.**
> [!IMPORTANT]
> **When using this server, please report any bugs or suggestions directly to us (see [Get in touch](#get-in-touch)), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official Bitwarden support channels.**
#### ⚠️**IMPORTANT**⚠️: When using this server, please report any bugs or suggestions to us directly (look at the bottom of this page for ways to get in touch), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official support channels.
---
<br>
## Features
Basically full implementation of Bitwarden API is provided including:
A nearly complete implementation of the Bitwarden Client API is provided, including:
* Organizations support
* Attachments and Send
* Vault API support
* Serving the static files for Vault interface
* Website icons API
* Authenticator and U2F support
* YubiKey and Duo support
* Emergency Access
* [Personal Vault](https://bitwarden.com/help/managing-items/)
* [Send](https://bitwarden.com/help/about-send/)
* [Attachments](https://bitwarden.com/help/attachments/)
* [Website icons](https://bitwarden.com/help/website-icons/)
* [Personal API Key](https://bitwarden.com/help/personal-api-key/)
* [Organizations](https://bitwarden.com/help/getting-started-organizations/)
- [Collections](https://bitwarden.com/help/about-collections/),
[Password Sharing](https://bitwarden.com/help/sharing/),
[Member Roles](https://bitwarden.com/help/user-types-access-control/),
[Groups](https://bitwarden.com/help/about-groups/),
[Event Logs](https://bitwarden.com/help/event-logs/),
[Admin Password Reset](https://bitwarden.com/help/admin-reset/),
[Directory Connector](https://bitwarden.com/help/directory-sync/),
[Policies](https://bitwarden.com/help/policies/)
* [Multi/Two Factor Authentication](https://bitwarden.com/help/bitwarden-field-guide-two-step-login/)
- [Authenticator](https://bitwarden.com/help/setup-two-step-login-authenticator/),
[Email](https://bitwarden.com/help/setup-two-step-login-email/),
[FIDO2 WebAuthn](https://bitwarden.com/help/setup-two-step-login-fido/),
[YubiKey](https://bitwarden.com/help/setup-two-step-login-yubikey/),
[Duo](https://bitwarden.com/help/setup-two-step-login-duo/)
* [Emergency Access](https://bitwarden.com/help/emergency-access/)
* [Vaultwarden Admin Backend](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page)
* [Modified Web Vault client](https://github.com/dani-garcia/bw_web_builds) (Bundled within our containers)
## Installation
Pull the docker image and mount a volume from the host for persistent storage:
```sh
docker pull vaultwarden/server:latest
docker run -d --name vaultwarden -v /vw-data/:/data/ --restart unless-stopped -p 80:80 vaultwarden/server:latest
```
This will preserve any persistent data under /vw-data/, you can adapt the path to whatever suits you.
**IMPORTANT**: Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
This can be configured in [vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy (see examples linked above).
<br>
## Usage
See the [vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki) for more information on how to configure and run the vaultwarden server.
> [!IMPORTANT]
> Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
>
>This can be configured in [Vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
>
>If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy or Traefik (see examples linked above).
> [!TIP]
>**For more detailed examples on how to install, use and configure Vaultwarden you can check our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).**
The main way to use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
There are also [community driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might be lagging behind the latest version or might deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
### Docker/Podman CLI
Pull the container image and mount a volume from the host for persistent storage.<br>
You can replace `docker` with `podman` if you prefer to use podman.
```shell
docker pull vaultwarden/server:latest
docker run --detach --name vaultwarden \
--env DOMAIN="https://vw.domain.tld" \
--volume /vw-data/:/data/ \
--restart unless-stopped \
--publish 80:80 \
vaultwarden/server:latest
```
This will preserve any persistent data under `/vw-data/`, you can adapt the path to whatever suits you.
### Docker Compose
To use Docker compose you need to create a `compose.yaml` which will hold the configuration to run the Vaultwarden container.
```yaml
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
environment:
DOMAIN: "https://vw.domain.tld"
volumes:
- ./vw-data/:/data/
ports:
- 80:80
```
<br>
## Get in touch
To ask a question, offer suggestions or new features or to get help configuring or installing the software, please use [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions) or [the forum](https://vaultwarden.discourse.group/).
If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure you are on the latest version and there aren't any similar issues open, though!
Have a question, suggestion or need help? Join our community on [Matrix](https://matrix.to/#/#vaultwarden:matrix.org), [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions) or [Discourse Forums](https://vaultwarden.discourse.group/).
If you prefer to chat, we're usually hanging around at [#vaultwarden:matrix.org](https://matrix.to/#/#vaultwarden:matrix.org) room on Matrix. Feel free to join us!
Encountered a bug or crash? Please search our issue tracker and discussions to see if it's already been reported. If not, please [start a new discussion](https://github.com/dani-garcia/vaultwarden/discussions) or [create a new issue](https://github.com/dani-garcia/vaultwarden/issues/). Ensure you're using the latest version of Vaultwarden and there aren't any similar issues open or closed!
<br>
## Contributors
### Sponsors
Thanks for your contribution to the project!
<!--
<table>
<tr>
<td align="center">
<a href="https://github.com/username">
<img src="https://avatars.githubusercontent.com/u/725423?s=75&v=4" width="75px;" alt="username"/>
<br />
<sub><b>username</b></sub>
</a>
</td>
</tr>
</table>
[![Contributors Count](https://img.shields.io/github/contributors-anon/dani-garcia/vaultwarden?style=for-the-badge&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)<br>
[![Contributors Avatars](https://contributors-img.web.app/image?repo=dani-garcia/vaultwarden)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)
<br/>
-->
<br>
<table>
<tr>
<td align="center">
<a href="https://github.com/themightychris" style="width: 75px">
<sub><b>Chris Alfano</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/numberly" style="width: 75px">
<sub><b>Numberly</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/IQ333777" style="width: 75px">
<sub><b>IQ333777</b></sub>
</a>
</td>
</tr>
</table>
## Disclaimer
**This project is not associated with [Bitwarden](https://bitwarden.com/) or Bitwarden, Inc.**
However, one of the active maintainers for Vaultwarden is employed by Bitwarden and is allowed to contribute to the project on their own time. These contributions are independent of Bitwarden and are reviewed by other maintainers.
The maintainers work together to set the direction for the project, focusing on serving the self-hosting community, including individuals, families, and small organizations, while ensuring the project's sustainability.
**Please note:** We cannot be held liable for any data loss that may occur while using Vaultwarden. This includes passwords, attachments, and other information handled by the application. We highly recommend performing regular backups of your files and database. However, should you experience data loss, we encourage you to contact us immediately.
<br>
## Bitwarden_RS
This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues.<br>
Please see [#1642 - v1.21.0 release and project rename to Vaultwarden](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.

View File

@@ -21,7 +21,7 @@ notify us. We welcome working with you to resolve the issue promptly. Thanks in
The following bug classes are out-of scope:
- Bugs that are already reported on Vaultwarden's issue tracker (https://github.com/dani-garcia/vaultwarden/issues)
- Bugs that are not part of Vaultwarden, like on the the web-vault or mobile and desktop clients. These issues need to be reported in the respective project issue tracker at https://github.com/bitwarden to which we are not associated
- Bugs that are not part of Vaultwarden, like on the web-vault or mobile and desktop clients. These issues need to be reported in the respective project issue tracker at https://github.com/bitwarden to which we are not associated
- Issues in an upstream software dependency (ex: Rust, or External Libraries) which are already reported to the upstream maintainer
- Attacks requiring physical access to a user's device
- Issues related to software or protocols not under Vaultwarden's control

View File

@@ -5,9 +5,9 @@ vault_image_digest: "sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf716
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:1978e7a58a1777cb0ef0dde76bad60b7914b21da57cfa88047875e4f364297aa"
rust_version: 1.81.0 # Rust version to be used
rust_version: 1.83.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: "3.20" # Alpine version to be used
alpine_version: "3.21" # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch

View File

@@ -32,10 +32,10 @@ FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:409ab328ca931
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.81.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.81.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.81.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.81.0 AS build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.83.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.83.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.83.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.83.0 AS build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@@ -126,7 +126,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.20
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.21
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \

View File

@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:1978e7a58a1777cb0ef0d
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.81.0-slim-bookworm AS build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.83.0-slim-bookworm AS build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT

View File

@@ -46,7 +46,7 @@ There also is an option to use an other docker container to provide support for
```bash
# To install and activate
docker run --privileged --rm tonistiigi/binfmt --install arm64,arm
# To unistall
# To uninstall
docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
```

View File

@@ -17,7 +17,7 @@ variable "SOURCE_REPOSITORY_URL" {
default = null
}
// The commit hash of of the current commit this build was triggered on
// The commit hash of the current commit this build was triggered on
variable "SOURCE_COMMIT" {
default = null
}

View File

@@ -0,0 +1,78 @@
<svg width="1365.8256" height="280.48944" version="1.1" viewBox="0 0 1365.8255 280.48944" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<style>
@media (prefers-color-scheme: dark) {
svg { -webkit-filter:invert(0.90); filter:invert(0.90); }
}</style>
<title>Vaultwarden Logo</title>
<defs>
<mask id="d">
<rect x="-60" y="-60" width="120" height="120" fill="#fff"/>
<circle id="b" cy="-40" r="3"/>
<use transform="rotate(72)" xlink:href="#b"/>
<use transform="rotate(144)" xlink:href="#b"/>
<use transform="rotate(216)" xlink:href="#b"/>
<use transform="rotate(-72)" xlink:href="#b"/>
</mask>
</defs>
<g transform="translate(-10.708266,-9.2965379)" aria-label="aultwarden">
<path d="m371.55338 223.43649-5.76172-14.84375h-0.78125q-7.51953 9.47266-15.52735 13.1836-7.91015 3.61328-20.70312 3.61328-15.72266 0-24.80469-8.98438-8.98437-8.98437-8.98437-25.58593 0-17.38282 12.10937-25.58594 12.20703-8.30078 36.71875-9.17969l18.94531-0.58594v-4.78515q0-16.60157-16.99218-16.60157-13.08594 0-30.76172 7.91016l-9.86328-20.11719q18.84765-9.86328 41.79687-9.86328 21.97266 0 33.69141 9.57031 11.71875 9.57032 11.71875 29.10157v72.7539zm-8.78907-50.58593-11.52343 0.39062q-12.98829 0.39063-19.33594 4.6875-6.34766 4.29688-6.34766 13.08594 0 12.59765 14.45313 12.59765 10.35156 0 16.5039-5.95703 6.25-5.95703 6.25-15.82031zm137.59766 50.58593-4.00391-13.96484h-1.5625q-4.78515 7.61719-13.57422 11.81641-8.78906 4.10156-20.01953 4.10156-19.23828 0-29.0039-10.25391-9.76563-10.35156-9.76563-29.6875v-71.1914h29.78516v63.76953q0 11.8164 4.19922 17.77343 4.19922 5.85938 13.3789 5.85938 12.5 0 18.06641-8.30078 5.56641-8.39844 5.56641-27.73438v-51.36718h29.78515v109.17968zm83.88672 0h-29.78516v-151.953122h29.78516zm77.24609-21.77734q7.8125 0 18.75-3.41797v22.16797q-11.13281 4.98047-27.34375 4.98047-17.87109 0-26.07422-8.98438-8.10547-9.08203-8.10547-27.14843v-52.63672h-14.25781v-12.59766l16.40625-9.96094 8.59375-23.046872h19.04297v23.242192h30.56641v22.36328h-30.56641v52.63672q0 6.34765 3.51563 9.375 3.61328 3.02734 9.47265 3.02734z"/>
<path d="m791.27994 223.43649-19.62891-62.79297q-1.85547-5.76171-6.93359-26.17187h-0.78125q-3.90625 17.08984-6.83594 26.36719l-20.21484 62.59765h-18.75l-29.19922-107.03125h16.99219q10.35156 40.33203 15.72265 61.42578 5.46875 21.09375 6.25 28.41797h0.78125q1.07422-5.5664 3.41797-14.35547 2.44141-8.88671 4.19922-14.0625l19.62891-61.42578h17.57812l19.14063 61.42578q5.46875 16.79688 7.42187 28.22266h0.78125q0.39063-3.51562 2.05078-10.83984 1.75781-7.32422 20.41016-78.8086h16.79687l-29.58984 107.03125zm133.98437 0-3.22265-15.23437h-0.78125q-8.00782 10.05859-16.01563 13.67187-7.91015 3.51563-19.82422 3.51563-15.91797 0-25-8.20313-8.98437-8.20312-8.98437-23.33984 0-32.42188 51.85547-33.98438l18.16406-0.58593v-6.64063q0-12.59765-5.46875-18.55469-5.37109-6.05468-17.28516-6.05468-13.3789 0-30.27343 8.20312l-4.98047-12.40234q7.91015-4.29688 17.28515-6.73828 9.47266-2.44141 18.94532-2.44141 19.14062 0 28.32031 8.49609 9.27734 8.4961 9.27734 27.2461v73.04687zm-36.62109-11.42578q15.13672 0 23.73047-8.30078 8.6914-8.30078 8.6914-23.24219v-9.66797l-16.21093 0.6836q-19.33594 0.68359-27.92969 6.05469-8.49609 5.27343-8.49609 16.5039 0 8.78906 5.27343 13.37891 5.3711 4.58984 14.94141 4.58984zm130.85938-97.55859q7.1289 0 12.793 1.17187l-2.2461 15.03907q-6.6407-1.46485-11.7188-1.46485-12.9883 0-22.26561 10.54688-9.17968 10.54687-9.17968 26.26953v57.42187h-16.21094v-107.03125h13.37891l1.85546 19.82422h0.78125q5.95704-10.44922 14.35551-16.11328 8.3984-5.66406 18.457-5.66406zm101.6602 94.6289h-0.879q-11.2304 16.3086-33.5937 16.3086-20.9961 0-32.7148-14.35547-11.6211-14.35547-11.6211-40.82031 0-26.46485 11.7187-41.11328 11.7188-14.64844 32.6172-14.64844 21.7773 0 33.3984 15.82031h1.2696l-0.6836-7.71484-0.3907-7.51953v-43.554692h16.211v151.953122h-13.1836zm-32.4219 2.73438q16.6015 0 24.0234-8.98438 7.5195-9.08203 7.5195-29.19921v-3.41797q0-22.75391-7.6171-32.42188-7.5196-9.76562-24.1211-9.76562-14.2578 0-21.875 11.13281-7.5196 11.03516-7.5196 31.25 0 20.50781 7.5196 30.95703 7.5195 10.44922 22.0703 10.44922zm127.3437 13.57422q-23.7304 0-37.5-14.45313-13.6718-14.45312-13.6718-40.13672 0-25.8789 12.6953-41.11328 12.7929-15.23437 34.2773-15.23437 20.1172 0 31.8359 13.28125 11.7188 13.18359 11.7188 34.86328v10.25391h-73.7305q0.4883 18.84765 9.4727 28.61328 9.082 9.76562 25.4883 9.76562 17.2851 0 34.1797-7.22656v14.45312q-8.5938 3.71094-16.3086 5.27344-7.6172 1.66016-18.4571 1.66016zm-4.3945-97.36328q-12.8906 0-20.6055 8.39843-7.6172 8.39844-8.9843 23.24219h55.957q0-15.33203-6.836-23.4375-6.8359-8.20312-19.5312-8.20312zm144.6289 95.41015v-69.23828q0-13.08594-5.957-19.53125-5.9571-6.44531-18.6524-6.44531-16.7968 0-24.6093 9.08203t-7.8125 29.98047v56.15234h-16.211v-107.03125h13.1836l2.6367 14.64844h0.7813q4.9804-7.91016 13.9648-12.20703 8.9844-4.39453 20.0196-4.39453 19.3359 0 29.1015 9.375 9.7656 9.27734 9.7656 29.78515v69.82422z"/>
</g>
<g transform="translate(-10.708266,-9.2965379)">
<g id="e" transform="matrix(2.6712834,0,0,2.6712834,150.95027,149.53854)">
<g id="f" mask="url(#d)">
<path d="m-31.1718-33.813208 26.496029 74.188883h9.3515399l26.49603-74.188883h-9.767164l-16.728866 47.588948q-1.662496 4.571864-2.805462 8.624198-1.142966 3.948427-1.870308 7.585137-.72734199-3.63671-1.8703079-7.689043-1.142966-4.052334-2.805462-8.728104l-16.624959-47.381136z" stroke="#000" stroke-width="4.51171"/>
<circle transform="scale(-1,1)" r="43" fill="none" stroke="#000" stroke-width="9"/>
<g id="g" transform="scale(-1,1)">
<polygon id="a" points="46 -3 46 3 51 0" stroke="#000" stroke-linejoin="round" stroke-width="3"/>
<use transform="rotate(11.25)" xlink:href="#a"/>
<use transform="rotate(22.5)" xlink:href="#a"/>
<use transform="rotate(33.75)" xlink:href="#a"/>
<use transform="rotate(45)" xlink:href="#a"/>
<use transform="rotate(56.25)" xlink:href="#a"/>
<use transform="rotate(67.5)" xlink:href="#a"/>
<use transform="rotate(78.75)" xlink:href="#a"/>
<use transform="rotate(90)" xlink:href="#a"/>
<use transform="rotate(101.25)" xlink:href="#a"/>
<use transform="rotate(112.5)" xlink:href="#a"/>
<use transform="rotate(123.75)" xlink:href="#a"/>
<use transform="rotate(135)" xlink:href="#a"/>
<use transform="rotate(146.25)" xlink:href="#a"/>
<use transform="rotate(157.5)" xlink:href="#a"/>
<use transform="rotate(168.75)" xlink:href="#a"/>
<use transform="scale(-1)" xlink:href="#a"/>
<use transform="rotate(191.25)" xlink:href="#a"/>
<use transform="rotate(202.5)" xlink:href="#a"/>
<use transform="rotate(213.75)" xlink:href="#a"/>
<use transform="rotate(225)" xlink:href="#a"/>
<use transform="rotate(236.25)" xlink:href="#a"/>
<use transform="rotate(247.5)" xlink:href="#a"/>
<use transform="rotate(258.75)" xlink:href="#a"/>
<use transform="rotate(-90)" xlink:href="#a"/>
<use transform="rotate(-78.75)" xlink:href="#a"/>
<use transform="rotate(-67.5)" xlink:href="#a"/>
<use transform="rotate(-56.25)" xlink:href="#a"/>
<use transform="rotate(-45)" xlink:href="#a"/>
<use transform="rotate(-33.75)" xlink:href="#a"/>
<use transform="rotate(-22.5)" xlink:href="#a"/>
<use transform="rotate(-11.25)" xlink:href="#a"/>
</g>
<g id="h" transform="scale(-1,1)">
<polygon id="c" points="7 -42 -7 -42 0 -35" stroke="#000" stroke-linejoin="round" stroke-width="6"/>
<use transform="rotate(72)" xlink:href="#c"/>
<use transform="rotate(144)" xlink:href="#c"/>
<use transform="rotate(216)" xlink:href="#c"/>
<use transform="rotate(-72)" xlink:href="#c"/>
</g>
</g>
<mask>
<rect x="-60" y="-60" width="120" height="120" fill="#fff"/>
<circle cy="-40" r="3"/>
<use transform="rotate(72)" xlink:href="#b"/>
<use transform="rotate(144)" xlink:href="#b"/>
<use transform="rotate(216)" xlink:href="#b"/>
<use transform="rotate(-72)" xlink:href="#b"/>
</mask>
</g>
</g>
</svg>

New image file — 7.6 KiB.

View File

@@ -1,4 +1,4 @@
[toolchain]
channel = "1.81.0"
channel = "1.83.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"

View File

@@ -62,6 +62,7 @@ pub fn routes() -> Vec<Route> {
diagnostics,
get_diagnostics_config,
resend_user_invite,
get_diagnostics_http,
]
}
@@ -494,11 +495,11 @@ struct UserOrgTypeData {
async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit =
match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let Some(mut user_to_edit) =
UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await
else {
err!("The specified user isn't member of the organization")
};
let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
Some(new_type) => new_type as i32,
@@ -601,9 +602,8 @@ async fn get_json_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
}
async fn has_http_access() -> bool {
let req = match make_http_request(Method::HEAD, "https://github.com/dani-garcia/vaultwarden") {
Ok(r) => r,
Err(_) => return false,
let Ok(req) = make_http_request(Method::HEAD, "https://github.com/dani-garcia/vaultwarden") else {
return false;
};
match req.send().await {
Ok(r) => r.status().is_success(),
@@ -713,6 +713,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
"ip_header_name": ip_header_name,
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"enable_websocket": &CONFIG.enable_websocket(),
"db_type": *DB_TYPE,
"db_version": get_sql_server_version(&mut conn).await,
"admin_url": format!("{}/diagnostics", admin_url()),
@@ -734,6 +735,11 @@ fn get_diagnostics_config(_token: AdminToken) -> Json<Value> {
Json(support_json)
}
#[get("/diagnostics/http?<code>")]
fn get_diagnostics_http(code: u16, _token: AdminToken) -> EmptyResult {
err_code!(format!("Testing error {code} response"), code);
}
#[post("/config", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner();

View File

@@ -1,5 +1,7 @@
use std::collections::HashSet;
use crate::db::DbPool;
use chrono::{SecondsFormat, Utc};
use chrono::Utc;
use rocket::serde::json::Json;
use serde_json::Value;
@@ -13,7 +15,7 @@ use crate::{
crypto,
db::{models::*, DbConn},
mail,
util::NumberOrString,
util::{format_date, NumberOrString},
CONFIG,
};
@@ -477,6 +479,60 @@ struct KeyData {
private_key: String,
}
fn validate_keydata(
data: &KeyData,
existing_ciphers: &[Cipher],
existing_folders: &[Folder],
existing_emergency_access: &[EmergencyAccess],
existing_user_orgs: &[UserOrganization],
existing_sends: &[Send],
) -> EmptyResult {
// Check that we're correctly rotating all the user's ciphers
let existing_cipher_ids = existing_ciphers.iter().map(|c| c.uuid.as_str()).collect::<HashSet<_>>();
let provided_cipher_ids = data
.ciphers
.iter()
.filter(|c| c.organization_id.is_none())
.filter_map(|c| c.id.as_deref())
.collect::<HashSet<_>>();
if !provided_cipher_ids.is_superset(&existing_cipher_ids) {
err!("All existing ciphers must be included in the rotation")
}
// Check that we're correctly rotating all the user's folders
let existing_folder_ids = existing_folders.iter().map(|f| f.uuid.as_str()).collect::<HashSet<_>>();
let provided_folder_ids = data.folders.iter().filter_map(|f| f.id.as_deref()).collect::<HashSet<_>>();
if !provided_folder_ids.is_superset(&existing_folder_ids) {
err!("All existing folders must be included in the rotation")
}
// Check that we're correctly rotating all the user's emergency access keys
let existing_emergency_access_ids =
existing_emergency_access.iter().map(|ea| ea.uuid.as_str()).collect::<HashSet<_>>();
let provided_emergency_access_ids =
data.emergency_access_keys.iter().map(|ea| ea.id.as_str()).collect::<HashSet<_>>();
if !provided_emergency_access_ids.is_superset(&existing_emergency_access_ids) {
err!("All existing emergency access keys must be included in the rotation")
}
// Check that we're correctly rotating all the user's reset password keys
let existing_reset_password_ids = existing_user_orgs.iter().map(|uo| uo.org_uuid.as_str()).collect::<HashSet<_>>();
let provided_reset_password_ids =
data.reset_password_keys.iter().map(|rp| rp.organization_id.as_str()).collect::<HashSet<_>>();
if !provided_reset_password_ids.is_superset(&existing_reset_password_ids) {
err!("All existing reset password keys must be included in the rotation")
}
// Check that we're correctly rotating all the user's sends
let existing_send_ids = existing_sends.iter().map(|s| s.uuid.as_str()).collect::<HashSet<_>>();
let provided_send_ids = data.sends.iter().filter_map(|s| s.id.as_deref()).collect::<HashSet<_>>();
if !provided_send_ids.is_superset(&existing_send_ids) {
err!("All existing sends must be included in the rotation")
}
Ok(())
}
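validate_keydata repeats one pattern per record type: collect the ids that already exist, collect the ids the client submitted for rotation, and require the submitted set to be a superset of the existing one. A stripped-down sketch of that check with illustrative data:

use std::collections::HashSet;

fn all_existing_included(existing: &[&str], provided: &[&str]) -> bool {
    let existing: HashSet<&str> = existing.iter().copied().collect();
    let provided: HashSet<&str> = provided.iter().copied().collect();
    provided.is_superset(&existing)
}

fn main() {
    // Extra (new) ids in the submitted data are fine...
    assert!(all_existing_included(&["a", "b"], &["a", "b", "c"]));
    // ...but omitting an existing record must abort the rotation.
    assert!(!all_existing_included(&["a", "b"], &["a", "c"]));
}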
#[post("/accounts/key", data = "<data>")]
async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
// TODO: See if we can wrap everything within a SQL Transaction. If something fails it should revert everything.
@@ -494,20 +550,34 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
let user_uuid = &headers.user.uuid;
// TODO: Ideally we'd do everything after this point in a single transaction.
let mut existing_ciphers = Cipher::find_owned_by_user(user_uuid, &mut conn).await;
let mut existing_folders = Folder::find_by_user(user_uuid, &mut conn).await;
let mut existing_emergency_access = EmergencyAccess::find_all_by_grantor_uuid(user_uuid, &mut conn).await;
let mut existing_user_orgs = UserOrganization::find_by_user(user_uuid, &mut conn).await;
// We only rotate the reset password key if it is set.
existing_user_orgs.retain(|uo| uo.reset_password_key.is_some());
let mut existing_sends = Send::find_by_user(user_uuid, &mut conn).await;
validate_keydata(
&data,
&existing_ciphers,
&existing_folders,
&existing_emergency_access,
&existing_user_orgs,
&existing_sends,
)?;
// Update folder data
for folder_data in data.folders {
// Skip `null` folder id entries.
// See: https://github.com/bitwarden/clients/issues/8453
if let Some(folder_id) = folder_data.id {
let mut saved_folder = match Folder::find_by_uuid(&folder_id, &mut conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
let Some(saved_folder) = existing_folders.iter_mut().find(|f| f.uuid == folder_id) else {
err!("Folder doesn't exist")
};
if &saved_folder.user_uuid != user_uuid {
err!("The folder is not owned by the user")
}
saved_folder.name = folder_data.name;
saved_folder.save(&mut conn).await?
}
@@ -515,12 +585,11 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Update emergency access data
for emergency_access_data in data.emergency_access_keys {
let mut saved_emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(&emergency_access_data.id, user_uuid, &mut conn).await
{
Some(emergency_access) => emergency_access,
None => err!("Emergency access doesn't exist or is not owned by the user"),
};
let Some(saved_emergency_access) =
existing_emergency_access.iter_mut().find(|ea| ea.uuid == emergency_access_data.id)
else {
err!("Emergency access doesn't exist or is not owned by the user")
};
saved_emergency_access.key_encrypted = Some(emergency_access_data.key_encrypted);
saved_emergency_access.save(&mut conn).await?
@@ -528,13 +597,11 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Update reset password data
for reset_password_data in data.reset_password_keys {
let mut user_org =
match UserOrganization::find_by_user_and_org(user_uuid, &reset_password_data.organization_id, &mut conn)
.await
{
Some(reset_password) => reset_password,
None => err!("Reset password doesn't exist"),
};
let Some(user_org) =
existing_user_orgs.iter_mut().find(|uo| uo.org_uuid == reset_password_data.organization_id)
else {
err!("Reset password doesn't exist")
};
user_org.reset_password_key = Some(reset_password_data.reset_password_key);
user_org.save(&mut conn).await?
@@ -542,12 +609,11 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
// Update send data
for send_data in data.sends {
let mut send = match Send::find_by_uuid(send_data.id.as_ref().unwrap(), &mut conn).await {
Some(send) => send,
None => err!("Send doesn't exist"),
let Some(send) = existing_sends.iter_mut().find(|s| &s.uuid == send_data.id.as_ref().unwrap()) else {
err!("Send doesn't exist")
};
update_send_from_data(&mut send, send_data, &headers, &mut conn, &nt, UpdateType::None).await?;
update_send_from_data(send, send_data, &headers, &mut conn, &nt, UpdateType::None).await?;
}
// Update cipher data
@@ -555,20 +621,15 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
for cipher_data in data.ciphers {
if cipher_data.organization_id.is_none() {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.id.as_ref().unwrap(), &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(saved_cipher) = existing_ciphers.iter_mut().find(|c| &c.uuid == cipher_data.id.as_ref().unwrap())
else {
err!("Cipher doesn't exist")
};
if saved_cipher.user_uuid.as_ref().unwrap() != user_uuid {
err!("The cipher is not owned by the user")
}
// Prevent triggering cipher updates via WebSockets by settings UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
// We force the users to logout after the user has been saved to try and prevent these issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None)
.await?
update_cipher_from_data(saved_cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None).await?
}
}
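The rotation loops above no longer issue a fresh database query per record; each record is located in the list that was already loaded (and checked by validate_keydata) and mutated in place, using the let ... else form this PR adopts throughout. A small sketch with hypothetical types:

struct Folder {
    uuid: String,
    name: String,
}

fn rename_folder(folders: &mut [Folder], id: &str, new_name: &str) -> Result<(), &'static str> {
    // Look the record up in the already-loaded (and already validated) list
    // instead of issuing another database query.
    let Some(folder) = folders.iter_mut().find(|f| f.uuid == id) else {
        return Err("Folder doesn't exist");
    };
    folder.name = new_name.to_string();
    Ok(())
}

fn main() {
    let mut folders = vec![Folder { uuid: "f1".into(), name: "Old".into() }];
    rename_folder(&mut folders, "f1", "New").unwrap();
    assert_eq!(folders[0].name, "New");
    assert!(rename_folder(&mut folders, "missing", "X").is_err());
}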
@@ -739,14 +800,12 @@ struct VerifyEmailTokenData {
async fn post_verify_email_token(data: Json<VerifyEmailTokenData>, mut conn: DbConn) -> EmptyResult {
let data: VerifyEmailTokenData = data.into_inner();
let mut user = match User::find_by_uuid(&data.user_id, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
let Some(mut user) = User::find_by_uuid(&data.user_id, &mut conn).await else {
err!("User doesn't exist")
};
let claims = match decode_verify_email(&data.token) {
Ok(claims) => claims,
Err(_) => err!("Invalid claim"),
let Ok(claims) = decode_verify_email(&data.token) else {
err!("Invalid claim")
};
if claims.sub != user.uuid {
err!("Invalid claim");
@@ -798,15 +857,14 @@ struct DeleteRecoverTokenData {
async fn post_delete_recover_token(data: Json<DeleteRecoverTokenData>, mut conn: DbConn) -> EmptyResult {
let data: DeleteRecoverTokenData = data.into_inner();
let user = match User::find_by_uuid(&data.user_id, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
let Ok(claims) = decode_delete(&data.token) else {
err!("Invalid claim")
};
let claims = match decode_delete(&data.token) {
Ok(claims) => claims,
Err(_) => err!("Invalid claim"),
let Some(user) = User::find_by_uuid(&data.user_id, &mut conn).await else {
err!("User doesn't exist")
};
if claims.sub != user.uuid {
err!("Invalid claim");
}
@@ -842,7 +900,7 @@ struct PasswordHintData {
#[post("/accounts/password-hint", data = "<data>")]
async fn password_hint(data: Json<PasswordHintData>, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
if !CONFIG.password_hints_allowed() || (!CONFIG.mail_enabled() && !CONFIG.show_password_hint()) {
err!("This server is not configured to provide password hints.");
}
@@ -901,14 +959,12 @@ pub async fn _prelogin(data: Json<PreloginData>, mut conn: DbConn) -> Json<Value
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT, None, None),
};
let result = json!({
Json(json!({
"kdf": kdf_type,
"kdfIterations": kdf_iter,
"kdfMemory": kdf_mem,
"kdfParallelism": kdf_para,
});
Json(result)
}))
}
// https://github.com/bitwarden/server/blob/master/src/Api/Models/Request/Accounts/SecretVerificationRequestModel.cs
@@ -980,11 +1036,8 @@ impl<'r> FromRequest<'r> for KnownDevice {
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let email = if let Some(email_b64) = req.headers().get_one("X-Request-Email") {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
}
let Ok(email_bytes) = data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) else {
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
@@ -1025,9 +1078,9 @@ async fn put_device_token(uuid: &str, data: Json<PushToken>, headers: Headers, m
let data = data.into_inner();
let token = data.push_token;
let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await {
Some(device) => device,
None => err!(format!("Error: device {uuid} should be present before a token can be assigned")),
let Some(mut device) = Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await
else {
err!(format!("Error: device {uuid} should be present before a token can be assigned"))
};
// if the device already has been registered
@@ -1084,31 +1137,35 @@ struct AuthRequestRequest {
device_identifier: String,
email: String,
public_key: String,
#[serde(alias = "type")]
_type: i32,
// Not used for now
// #[serde(alias = "type")]
// _type: i32,
}
#[post("/auth-requests", data = "<data>")]
async fn post_auth_request(
data: Json<AuthRequestRequest>,
headers: ClientHeaders,
client_headers: ClientHeaders,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let user = match User::find_by_mail(&data.email, &mut conn).await {
Some(user) => user,
None => {
err!("AuthRequest doesn't exist")
}
let Some(user) = User::find_by_mail(&data.email, &mut conn).await else {
err!("AuthRequest doesn't exist", "User not found")
};
// Validate device uuid and type
match Device::find_by_uuid_and_user(&data.device_identifier, &user.uuid, &mut conn).await {
Some(device) if device.atype == client_headers.device_type => {}
_ => err!("AuthRequest doesn't exist", "Device verification failed"),
}
let mut auth_request = AuthRequest::new(
user.uuid.clone(),
data.device_identifier.clone(),
headers.device_type,
headers.ip.ip.to_string(),
client_headers.device_type,
client_headers.ip.ip.to_string(),
data.access_code,
data.public_key,
);
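post_auth_request now also verifies that the submitted device_identifier is a known device of the user and that its stored type matches the connecting client's device type. A sketch of that match-guard check with illustrative types:

struct Device {
    uuid: String,
    atype: i32,
}

fn verify_device(devices: &[Device], identifier: &str, client_device_type: i32) -> Result<(), &'static str> {
    match devices.iter().find(|d| d.uuid == identifier) {
        // Accept only a known device whose stored type matches the client headers.
        Some(device) if device.atype == client_device_type => Ok(()),
        // Same error for "unknown device" and "type mismatch" to avoid leaking which check failed.
        _ => Err("AuthRequest doesn't exist"),
    }
}

fn main() {
    let devices = vec![Device { uuid: "dev-1".into(), atype: 9 }];
    assert!(verify_device(&devices, "dev-1", 9).is_ok());
    assert!(verify_device(&devices, "dev-1", 3).is_err());
    assert!(verify_device(&devices, "dev-2", 9).is_err());
}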
@@ -1123,7 +1180,7 @@ async fn post_auth_request(
"requestIpAddress": auth_request.request_ip,
"key": null,
"masterPasswordHash": null,
"creationDate": auth_request.creation_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true),
"creationDate": format_date(&auth_request.creation_date),
"responseDate": null,
"requestApproved": false,
"origin": CONFIG.domain_origin(),
@@ -1132,33 +1189,26 @@ async fn post_auth_request(
}
#[get("/auth-requests/<uuid>")]
async fn get_auth_request(uuid: &str, mut conn: DbConn) -> JsonResult {
let auth_request = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
async fn get_auth_request(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let Some(auth_request) = AuthRequest::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("AuthRequest doesn't exist", "Record not found or user uuid does not match")
};
let response_date_utc = auth_request
.response_date
.map(|response_date| response_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true));
let response_date_utc = auth_request.response_date.map(|response_date| format_date(&response_date));
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
Ok(Json(json!({
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": format_date(&auth_request.creation_date),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
})))
}
#[derive(Debug, Deserialize)]
@@ -1174,82 +1224,86 @@ struct AuthResponseRequest {
async fn put_auth_request(
uuid: &str,
data: Json<AuthResponseRequest>,
headers: Headers,
mut conn: DbConn,
ant: AnonymousNotify<'_>,
nt: Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let mut auth_request: AuthRequest = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
let Some(mut auth_request) = AuthRequest::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("AuthRequest doesn't exist", "Record not found or user uuid does not match")
};
auth_request.approved = Some(data.request_approved);
auth_request.enc_key = Some(data.key);
auth_request.master_password_hash = data.master_password_hash;
auth_request.response_device_id = Some(data.device_identifier.clone());
auth_request.save(&mut conn).await?;
if auth_request.approved.unwrap_or(false) {
ant.send_auth_response(&auth_request.user_uuid, &auth_request.uuid).await;
nt.send_auth_response(&auth_request.user_uuid, &auth_request.uuid, data.device_identifier, &mut conn).await;
if auth_request.approved.is_some() {
err!("An authentication request with the same device already exists")
}
let response_date_utc = auth_request
.response_date
.map(|response_date| response_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true));
let response_date = Utc::now().naive_utc();
let response_date_utc = format_date(&response_date);
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
if data.request_approved {
auth_request.approved = Some(data.request_approved);
auth_request.enc_key = Some(data.key);
auth_request.master_password_hash = data.master_password_hash;
auth_request.response_device_id = Some(data.device_identifier.clone());
auth_request.response_date = Some(response_date);
auth_request.save(&mut conn).await?;
ant.send_auth_response(&auth_request.user_uuid, &auth_request.uuid).await;
nt.send_auth_response(&auth_request.user_uuid, &auth_request.uuid, data.device_identifier, &mut conn).await;
} else {
// If denied, there's no reason to keep the request
auth_request.delete(&mut conn).await?;
}
Ok(Json(json!({
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": format_date(&auth_request.creation_date),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
})))
}
#[get("/auth-requests/<uuid>/response?<code>")]
async fn get_auth_request_response(uuid: &str, code: &str, mut conn: DbConn) -> JsonResult {
let auth_request = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
async fn get_auth_request_response(
uuid: &str,
code: &str,
client_headers: ClientHeaders,
mut conn: DbConn,
) -> JsonResult {
let Some(auth_request) = AuthRequest::find_by_uuid(uuid, &mut conn).await else {
err!("AuthRequest doesn't exist", "User not found")
};
if !auth_request.check_access_code(code) {
err!("Access code invalid doesn't exist")
if auth_request.device_type != client_headers.device_type
|| auth_request.request_ip != client_headers.ip.ip.to_string()
|| !auth_request.check_access_code(code)
{
err!("AuthRequest doesn't exist", "Invalid device, IP or code")
}
let response_date_utc = auth_request
.response_date
.map(|response_date| response_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true));
let response_date_utc = auth_request.response_date.map(|response_date| format_date(&response_date));
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
Ok(Json(json!({
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": format_date(&auth_request.creation_date),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
})))
}
#[get("/auth-requests")]
@@ -1261,7 +1315,7 @@ async fn get_auth_requests(headers: Headers, mut conn: DbConn) -> JsonResult {
.iter()
.filter(|request| request.approved.is_none())
.map(|request| {
let response_date_utc = request.response_date.map(|response_date| response_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true));
let response_date_utc = request.response_date.map(|response_date| format_date(&response_date));
json!({
"id": request.uuid,
@@ -1270,7 +1324,7 @@ async fn get_auth_requests(headers: Headers, mut conn: DbConn) -> JsonResult {
"requestIpAddress": request.request_ip,
"key": request.enc_key,
"masterPasswordHash": request.master_password_hash,
"creationDate": request.creation_date.and_utc().to_rfc3339_opts(SecondsFormat::Micros, true),
"creationDate": format_date(&request.creation_date),
"responseDate": response_date_utc,
"requestApproved": request.approved,
"origin": CONFIG.domain_origin(),

View File

@@ -10,6 +10,7 @@ use rocket::{
};
use serde_json::Value;
use crate::auth::ClientVersion;
use crate::util::NumberOrString;
use crate::{
api::{self, core::log_event, EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType},
@@ -104,11 +105,27 @@ struct SyncData {
}
#[get("/sync?<data..>")]
async fn sync(data: SyncData, headers: Headers, mut conn: DbConn) -> Json<Value> {
async fn sync(
data: SyncData,
headers: Headers,
client_version: Option<ClientVersion>,
mut conn: DbConn,
) -> Json<Value> {
let user_json = headers.user.to_json(&mut conn).await;
// Get all ciphers which are visible by the user
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &mut conn).await;
let mut ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &mut conn).await;
// Filter out SSH keys if the client version is less than 2024.12.0
let show_ssh_keys = if let Some(client_version) = client_version {
let ver_match = semver::VersionReq::parse(">=2024.12.0").unwrap();
ver_match.matches(&client_version.0)
} else {
false
};
if !show_ssh_keys {
ciphers.retain(|c| c.atype != 5);
}
let cipher_sync_data = CipherSyncData::new(&headers.user.uuid, CipherSyncType::User, &mut conn).await;
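The SSH-key filter above hinges on a semver comparison against the client-reported version, and a client that sends no version is treated as too old. A self-contained sketch, assuming the semver crate:

use semver::{Version, VersionReq};

fn client_supports_ssh_keys(client_version: Option<&str>) -> bool {
    let requirement = VersionReq::parse(">=2024.12.0").expect("static requirement is valid");
    match client_version.and_then(|v| Version::parse(v).ok()) {
        Some(version) => requirement.matches(&version),
        // No (or unparsable) version header: hide SSH-key ciphers to stay safe.
        None => false,
    }
}

fn main() {
    assert!(client_supports_ssh_keys(Some("2024.12.1")));
    assert!(!client_supports_ssh_keys(Some("2024.6.2")));
    assert!(!client_supports_ssh_keys(None));
}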
@@ -150,7 +167,6 @@ async fn sync(data: SyncData, headers: Headers, mut conn: DbConn) -> Json<Value>
"ciphers": ciphers_json,
"domains": domains_json,
"sends": sends_json,
"unofficialServer": true,
"object": "sync"
}))
}
@@ -177,9 +193,8 @@ async fn get_ciphers(headers: Headers, mut conn: DbConn) -> Json<Value> {
#[get("/ciphers/<uuid>")]
async fn get_cipher(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -206,7 +221,7 @@ pub struct CipherData {
// Id is optional as it is included only in bulk share
pub id: Option<String>,
// Folder id is not included in import
folder_id: Option<String>,
pub folder_id: Option<String>,
// TODO: Some of these might appear all the time, no need for Option
#[serde(alias = "organizationID")]
pub organization_id: Option<String>,
@@ -217,7 +232,8 @@ pub struct CipherData {
Login = 1,
SecureNote = 2,
Card = 3,
Identity = 4
Identity = 4,
SshKey = 5
*/
pub r#type: i32,
pub name: String,
@@ -229,6 +245,7 @@ pub struct CipherData {
secure_note: Option<Value>,
card: Option<Value>,
identity: Option<Value>,
ssh_key: Option<Value>,
favorite: Option<bool>,
reprompt: Option<i32>,
@@ -411,14 +428,9 @@ pub async fn update_cipher_from_data(
cipher.user_uuid = Some(headers.user.uuid.clone());
}
if let Some(ref folder_id) = data.folder_id {
match Folder::find_by_uuid(folder_id, conn).await {
Some(folder) => {
if folder.user_uuid != headers.user.uuid {
err!("Folder is not owned by user")
}
}
None => err!("Folder doesn't exist"),
if let Some(ref folder_uuid) = data.folder_id {
if Folder::find_by_uuid_and_user(folder_uuid, &headers.user.uuid, conn).await.is_none() {
err!("Invalid folder", "Folder does not exist or belongs to another user");
}
}
@@ -470,6 +482,7 @@ pub async fn update_cipher_from_data(
2 => data.secure_note,
3 => data.card,
4 => data.identity,
5 => data.ssh_key,
_ => err!("Invalid type"),
};
@@ -492,7 +505,7 @@ pub async fn update_cipher_from_data(
cipher.fields = data.fields.map(|f| _clean_cipher_data(f).to_string());
cipher.data = type_data.to_string();
cipher.password_history = data.password_history.map(|f| f.to_string());
cipher.reprompt = data.reprompt;
cipher.reprompt = data.reprompt.filter(|r| *r == RepromptType::None as i32 || *r == RepromptType::Password as i32);
cipher.save(conn).await?;
cipher.move_to_folder(data.folder_id, &headers.user.uuid, conn).await?;
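The reprompt change keeps the submitted value only when it maps to a variant the clients accept (None = 0, Password = 1); anything else, such as the stray 4 some native clients send, is stored as NULL. A sketch of that Option::filter guard, with RepromptType reproduced for illustration:

enum RepromptType {
    None = 0,
    Password = 1,
}

fn sanitize_reprompt(reprompt: Option<i32>) -> Option<i32> {
    // Keep only values that map to a known variant; anything else becomes NULL.
    reprompt.filter(|r| *r == RepromptType::None as i32 || *r == RepromptType::Password as i32)
}

fn main() {
    assert_eq!(sanitize_reprompt(Some(1)), Some(1));
    // e.g. the invalid 4 reported by some native clients is discarded.
    assert_eq!(sanitize_reprompt(Some(4)), None);
    assert_eq!(sanitize_reprompt(None), None);
}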
@@ -566,11 +579,11 @@ async fn post_ciphers_import(
Cipher::validate_cipher_data(&data.ciphers)?;
// Read and create the folders
let existing_folders: Vec<String> =
Folder::find_by_user(&headers.user.uuid, &mut conn).await.into_iter().map(|f| f.uuid).collect();
let existing_folders: HashSet<Option<String>> =
Folder::find_by_user(&headers.user.uuid, &mut conn).await.into_iter().map(|f| Some(f.uuid)).collect();
let mut folders: Vec<String> = Vec::with_capacity(data.folders.len());
for folder in data.folders.into_iter() {
let folder_uuid = if folder.id.is_some() && existing_folders.contains(folder.id.as_ref().unwrap()) {
let folder_uuid = if existing_folders.contains(&folder.id) {
folder.id.unwrap()
} else {
let mut new_folder = Folder::new(headers.user.uuid.clone(), folder.name);
@@ -582,8 +595,8 @@ async fn post_ciphers_import(
}
// Read the relations between folders and ciphers
// Ciphers can only be in one folder at the same time
let mut relations_map = HashMap::with_capacity(data.folder_relationships.len());
for relation in data.folder_relationships {
relations_map.insert(relation.key, relation.value);
}
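The import hunks above simplify the folder lookup: existing folder ids are collected as Some(uuid) values, so the incoming Option<String> id can be tested with a single contains call and a missing id falls through to folder creation. A sketch with placeholder values:

use std::collections::HashSet;

fn reuse_or_create(existing: &HashSet<Option<String>>, incoming_id: Option<String>) -> String {
    if existing.contains(&incoming_id) {
        // Only Some(uuid) values are ever inserted into `existing`,
        // so a match here always carries an id.
        incoming_id.unwrap()
    } else {
        // Placeholder for Folder::new(...) + save(...) in the real handler.
        "newly-created-folder-uuid".to_string()
    }
}

fn main() {
    let existing: HashSet<Option<String>> = [Some("f1".to_string())].into_iter().collect();
    assert_eq!(reuse_or_create(&existing, Some("f1".to_string())), "f1");
    assert_eq!(reuse_or_create(&existing, None), "newly-created-folder-uuid");
    assert_eq!(reuse_or_create(&existing, Some("unknown".to_string())), "newly-created-folder-uuid");
}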
@@ -642,9 +655,8 @@ async fn put_cipher(
) -> JsonResult {
let data: CipherData = data.into_inner();
let mut cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(mut cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
// TODO: Check if only the folder ID or favorite status is being changed.
@@ -676,19 +688,13 @@ async fn put_cipher_partial(
) -> JsonResult {
let data: PartialCipherData = data.into_inner();
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if let Some(ref folder_id) = data.folder_id {
match Folder::find_by_uuid(folder_id, &mut conn).await {
Some(folder) => {
if folder.user_uuid != headers.user.uuid {
err!("Folder is not owned by user")
}
}
None => err!("Folder doesn't exist"),
if let Some(ref folder_uuid) = data.folder_id {
if Folder::find_by_uuid_and_user(folder_uuid, &headers.user.uuid, &mut conn).await.is_none() {
err!("Invalid folder", "Folder does not exist or belongs to another user");
}
}
@@ -755,9 +761,8 @@ async fn post_collections_update(
) -> JsonResult {
let data: CollectionsAdminData = data.into_inner();
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -769,7 +774,8 @@ async fn post_collections_update(
HashSet::<String>::from_iter(cipher.get_collections(headers.user.uuid.clone(), &mut conn).await);
for collection in posted_collections.symmetric_difference(&current_collections) {
match Collection::find_by_uuid(collection, &mut conn).await {
match Collection::find_by_uuid_and_org(collection, cipher.organization_uuid.as_ref().unwrap(), &mut conn).await
{
None => err!("Invalid collection ID provided"),
Some(collection) => {
if collection.is_writable_by_user(&headers.user.uuid, &mut conn).await {
@@ -832,9 +838,8 @@ async fn post_collections_admin(
) -> EmptyResult {
let data: CollectionsAdminData = data.into_inner();
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -846,7 +851,8 @@ async fn post_collections_admin(
HashSet::<String>::from_iter(cipher.get_admin_collections(headers.user.uuid.clone(), &mut conn).await);
for collection in posted_collections.symmetric_difference(&current_collections) {
match Collection::find_by_uuid(collection, &mut conn).await {
match Collection::find_by_uuid_and_org(collection, cipher.organization_uuid.as_ref().unwrap(), &mut conn).await
{
None => err!("Invalid collection ID provided"),
Some(collection) => {
if collection.is_writable_by_user(&headers.user.uuid, &mut conn).await {
@@ -1024,9 +1030,8 @@ async fn share_cipher_by_uuid(
/// redirects to the same location as before the v2 API.
#[get("/ciphers/<uuid>/attachment/<attachment_id>")]
async fn get_attachment(uuid: &str, attachment_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -1065,9 +1070,8 @@ async fn post_attachment_v2(
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
let cipher = match Cipher::find_by_uuid(uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -1131,9 +1135,8 @@ async fn save_attachment(
err!("Attachment size can't be negative")
}
let cipher = match Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &mut conn).await {
@@ -1526,21 +1529,15 @@ async fn move_cipher_selected(
let data = data.into_inner();
let user_uuid = headers.user.uuid;
if let Some(ref folder_id) = data.folder_id {
match Folder::find_by_uuid(folder_id, &mut conn).await {
Some(folder) => {
if folder.user_uuid != user_uuid {
err!("Folder is not owned by user")
}
}
None => err!("Folder doesn't exist"),
if let Some(ref folder_uuid) = data.folder_id {
if Folder::find_by_uuid_and_user(folder_uuid, &user_uuid, &mut conn).await.is_none() {
err!("Invalid folder", "Folder does not exist or belongs to another user");
}
}
for uuid in data.ids {
let cipher = match Cipher::find_by_uuid(&uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(&uuid, &mut conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_accessible_to_user(&user_uuid, &mut conn).await {
@@ -1648,9 +1645,8 @@ async fn _delete_cipher_by_uuid(
soft_delete: bool,
nt: &Notify<'_>,
) -> EmptyResult {
let mut cipher = match Cipher::find_by_uuid(uuid, conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(mut cipher) = Cipher::find_by_uuid(uuid, conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn).await {
@@ -1720,9 +1716,8 @@ async fn _delete_multiple_ciphers(
}
async fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &mut DbConn, nt: &Notify<'_>) -> JsonResult {
let mut cipher = match Cipher::find_by_uuid(uuid, conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(mut cipher) = Cipher::find_by_uuid(uuid, conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn).await {
@@ -1788,18 +1783,16 @@ async fn _delete_cipher_attachment_by_id(
conn: &mut DbConn,
nt: &Notify<'_>,
) -> EmptyResult {
let attachment = match Attachment::find_by_id(attachment_id, conn).await {
Some(attachment) => attachment,
None => err!("Attachment doesn't exist"),
let Some(attachment) = Attachment::find_by_id(attachment_id, conn).await else {
err!("Attachment doesn't exist")
};
if attachment.cipher_uuid != uuid {
err!("Attachment from other cipher")
}
let cipher = match Cipher::find_by_uuid(uuid, conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
let Some(cipher) = Cipher::find_by_uuid(uuid, conn).await else {
err!("Cipher doesn't exist")
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn).await {

View File

@@ -137,11 +137,11 @@ async fn post_emergency_access(
let data: EmergencyAccessUpdateData = data.into_inner();
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
let new_type = match EmergencyAccessType::from_str(&data.r#type.into_string()) {
Some(new_type) => new_type as i32,
@@ -284,24 +284,22 @@ async fn send_invite(data: Json<EmergencyAccessInviteData>, headers: Headers, mu
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?;
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
err!("The grantee user is already accepted or confirmed to the organization");
}
let email = match emergency_access.email.clone() {
Some(email) => email,
None => err!("Email not valid."),
let Some(email) = emergency_access.email.clone() else {
err!("Email not valid.")
};
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_mail(&email, &mut conn).await else {
err!("Grantee user not found.")
};
let grantor_user = headers.user;
@@ -356,16 +354,15 @@ async fn accept_invite(emer_id: &str, data: Json<AcceptData>, headers: Headers,
// We need to search for the uuid in combination with the email, since we do not yet store the uuid of the grantee in the database.
// The uuid of the grantee gets stored once accepted.
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_email(emer_id, &headers.user.email, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_email(emer_id, &headers.user.email, &mut conn).await
else {
err!("Emergency access not valid.")
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if emer_id == claims.emer_id
@@ -403,11 +400,11 @@ async fn confirm_emergency_access(
let data: ConfirmData = data.into_inner();
let key = data.key;
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &confirming_user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &confirming_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::Accepted as i32
|| emergency_access.grantor_uuid != confirming_user.uuid
@@ -415,15 +412,13 @@ async fn confirm_emergency_access(
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&confirming_user.uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
@@ -450,19 +445,18 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
check_emergency_access_enabled()?;
let initiating_user = headers.user;
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &initiating_user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &initiating_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32 {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let now = Utc::now().naive_utc();
@@ -488,25 +482,23 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32 {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&headers.user.uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
@@ -525,11 +517,11 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let mut emergency_access =
match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32
@@ -538,9 +530,8 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
}
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
@@ -563,11 +554,11 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &headers.user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &headers.user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
@@ -602,19 +593,18 @@ async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
check_emergency_access_enabled()?;
let requesting_user = headers.user;
let emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let result = json!({
@@ -650,19 +640,18 @@ async fn password_emergency_access(
//let key = &data.Key;
let requesting_user = headers.user;
let emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(mut grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
// change grantor_user password
@@ -686,19 +675,18 @@ async fn password_emergency_access(
#[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access =
match EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &mut conn);

View File

@@ -25,16 +25,10 @@ async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
#[get("/folders/<uuid>")]
async fn get_folder(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
match Folder::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await {
Some(folder) => Ok(Json(folder.to_json())),
_ => err!("Invalid folder", "Folder does not exist or belongs to another user"),
}
Ok(Json(folder.to_json()))
}
#[derive(Deserialize)]
@@ -71,15 +65,10 @@ async fn put_folder(
) -> JsonResult {
let data: FolderData = data.into_inner();
let mut folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
let Some(mut folder) = Folder::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("Invalid folder", "Folder does not exist or belongs to another user")
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
}
folder.name = data.name;
folder.save(&mut conn).await?;
@@ -95,15 +84,10 @@ async fn delete_folder_post(uuid: &str, headers: Headers, conn: DbConn, nt: Noti
#[delete("/folders/<uuid>")]
async fn delete_folder(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
let Some(folder) = Folder::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("Invalid folder", "Folder does not exist or belongs to another user")
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
}
// Delete the actual folder entry
folder.delete(&mut conn).await?;

View File

@@ -135,12 +135,13 @@ async fn put_eq_domains(data: Json<EquivDomainData>, headers: Headers, conn: DbC
}
#[get("/hibp/breach?<username>")]
async fn hibp_breach(username: &str) -> JsonResult {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
async fn hibp_breach(username: &str, _headers: Headers) -> JsonResult {
let username: String = url::form_urlencoded::byte_serialize(username.as_bytes()).collect();
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
let res = make_http_request(Method::GET, &url)?.header("hibp-api-key", api_key).send().await?;
// If we get a 404, return a 404, it means no breached accounts
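The hibp_breach change requires an authenticated user (the added _headers: Headers guard) and percent-encodes the username before interpolating it into the HIBP URL. A sketch of the encoding step, assuming the url crate:

fn encode_username(username: &str) -> String {
    // application/x-www-form-urlencoded escaping: bytes outside the unreserved
    // set are percent-encoded (and spaces become '+'), so the value cannot
    // alter the request that is sent to HIBP.
    url::form_urlencoded::byte_serialize(username.as_bytes()).collect()
}

fn main() {
    assert_eq!(encode_username("user+tag@example.com"), "user%2Btag%40example.com");
    assert_eq!(encode_username("plain"), "plain");
}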

View File

@@ -9,9 +9,8 @@ use crate::{
core::{log_event, two_factor, CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType,
},
auth::{decode_invite, AdminHeaders, Headers, ManagerHeaders, ManagerHeadersLoose, OwnerHeaders},
auth::{decode_invite, AdminHeaders, ClientVersion, Headers, ManagerHeaders, ManagerHeadersLoose, OwnerHeaders},
db::{models::*, DbConn},
error::Error,
mail,
util::{convert_json_key_lcase_first, NumberOrString},
CONFIG,
@@ -127,6 +126,7 @@ struct NewCollectionData {
name: String,
groups: Vec<NewCollectionObjectData>,
users: Vec<NewCollectionObjectData>,
id: Option<String>,
external_id: Option<String>,
}
@@ -267,9 +267,8 @@ async fn post_organization(
) -> JsonResult {
let data: OrganizationUpdateData = data.into_inner();
let mut org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(organization) => organization,
None => err!("Can't find organization details"),
let Some(mut org) = Organization::find_by_uuid(org_id, &mut conn).await else {
err!("Can't find organization details")
};
org.name = data.name;
@@ -318,9 +317,8 @@ async fn get_org_collections(org_id: &str, _headers: ManagerHeadersLoose, mut co
async fn get_org_collections_details(org_id: &str, headers: ManagerHeadersLoose, mut conn: DbConn) -> JsonResult {
let mut data = Vec::new();
let user_org = match UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await {
Some(u) => u,
None => err!("User is not part of organization"),
let Some(user_org) = UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await else {
err!("User is not part of organization")
};
// get all collection memberships for the current organization
@@ -363,6 +361,7 @@ async fn get_org_collections_details(org_id: &str, headers: ManagerHeadersLoose,
json_object["users"] = json!(users);
json_object["groups"] = json!(groups);
json_object["object"] = json!("collectionAccessDetails");
json_object["unmanaged"] = json!(false);
data.push(json_object)
}
@@ -386,9 +385,8 @@ async fn post_organization_collections(
) -> JsonResult {
let data: NewCollectionData = data.into_inner();
let org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(organization) => organization,
None => err!("Can't find organization details"),
let Some(org) = Organization::find_by_uuid(org_id, &mut conn).await else {
err!("Can't find organization details")
};
let collection = Collection::new(org.uuid, data.name, data.external_id);
@@ -412,9 +410,8 @@ async fn post_organization_collections(
}
for user in data.users {
let org_user = match UserOrganization::find_by_uuid(&user.id, &mut conn).await {
Some(u) => u,
None => err!("User is not part of organization"),
let Some(org_user) = UserOrganization::find_by_uuid_and_org(&user.id, org_id, &mut conn).await else {
err!("User is not part of organization")
};
if org_user.access_all {
@@ -453,20 +450,14 @@ async fn post_organization_collection_update(
) -> JsonResult {
let data: NewCollectionData = data.into_inner();
let org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(organization) => organization,
None => err!("Can't find organization details"),
if Organization::find_by_uuid(org_id, &mut conn).await.is_none() {
err!("Can't find organization details")
};
let mut collection = match Collection::find_by_uuid(col_id, &mut conn).await {
Some(collection) => collection,
None => err!("Collection not found"),
let Some(mut collection) = Collection::find_by_uuid_and_org(col_id, org_id, &mut conn).await else {
err!("Collection not found")
};
if collection.org_uuid != org.uuid {
err!("Collection is not owned by organization");
}
collection.name = data.name;
collection.external_id = match data.external_id {
Some(external_id) if !external_id.trim().is_empty() => Some(external_id),
@@ -497,9 +488,8 @@ async fn post_organization_collection_update(
CollectionUser::delete_all_by_collection(col_id, &mut conn).await?;
for user in data.users {
let org_user = match UserOrganization::find_by_uuid(&user.id, &mut conn).await {
Some(u) => u,
None => err!("User is not part of organization"),
let Some(org_user) = UserOrganization::find_by_uuid_and_org(&user.id, org_id, &mut conn).await else {
err!("User is not part of organization")
};
if org_user.access_all {
@@ -520,15 +510,8 @@ async fn delete_organization_collection_user(
_headers: AdminHeaders,
mut conn: DbConn,
) -> EmptyResult {
let collection = match Collection::find_by_uuid(col_id, &mut conn).await {
None => err!("Collection not found"),
Some(collection) => {
if collection.org_uuid == org_id {
collection
} else {
err!("Collection and Organization id do not match")
}
}
let Some(collection) = Collection::find_by_uuid_and_org(col_id, org_id, &mut conn).await else {
err!("Collection not found", "Collection does not exist or does not belong to this organization")
};
match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await {
@@ -559,26 +542,20 @@ async fn _delete_organization_collection(
headers: &ManagerHeaders,
conn: &mut DbConn,
) -> EmptyResult {
match Collection::find_by_uuid(col_id, conn).await {
None => err!("Collection not found"),
Some(collection) => {
if collection.org_uuid == org_id {
log_event(
EventType::CollectionDeleted as i32,
&collection.uuid,
org_id,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
)
.await;
collection.delete(conn).await
} else {
err!("Collection and Organization id do not match")
}
}
}
let Some(collection) = Collection::find_by_uuid_and_org(col_id, org_id, conn).await else {
err!("Collection not found", "Collection does not exist or does not belong to this organization")
};
log_event(
EventType::CollectionDeleted as i32,
&collection.uuid,
org_id,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
)
.await;
collection.delete(conn).await
}
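The recurring change in this file is to scope lookups by record id and organization (or user) together, for example find_by_uuid_and_org and find_by_uuid_and_user, so an id that belongs to another organization behaves like a missing record. An illustrative in-memory sketch:

struct Collection {
    uuid: String,
    org_uuid: String,
}

fn find_by_uuid_and_org<'a>(all: &'a [Collection], uuid: &str, org_uuid: &str) -> Option<&'a Collection> {
    // Both identifiers must match; a collection from another organization is invisible here.
    all.iter().find(|c| c.uuid == uuid && c.org_uuid == org_uuid)
}

fn main() {
    let all = vec![Collection { uuid: "c1".into(), org_uuid: "org-a".into() }];
    assert!(find_by_uuid_and_org(&all, "c1", "org-a").is_some());
    // A valid collection id combined with the wrong organization behaves like a missing record.
    assert!(find_by_uuid_and_org(&all, "c1", "org-b").is_none());
}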
#[delete("/organizations/<org_id>/collections/<col_id>")]
@@ -600,12 +577,11 @@ struct DeleteCollectionData {
org_id: String,
}
#[post("/organizations/<org_id>/collections/<col_id>/delete", data = "<_data>")]
#[post("/organizations/<org_id>/collections/<col_id>/delete")]
async fn post_organization_collection_delete(
org_id: &str,
col_id: &str,
headers: ManagerHeaders,
_data: Json<DeleteCollectionData>,
mut conn: DbConn,
) -> EmptyResult {
_delete_organization_collection(org_id, col_id, &headers, &mut conn).await
@@ -650,9 +626,9 @@ async fn get_org_collection_detail(
err!("Collection is not owned by organization")
}
let user_org = match UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await {
Some(u) => u,
None => err!("User is not part of organization"),
let Some(user_org) = UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await
else {
err!("User is not part of organization")
};
let groups: Vec<Value> = if CONFIG.org_groups_enabled() {
@@ -694,9 +670,8 @@ async fn get_org_collection_detail(
#[get("/organizations/<org_id>/collections/<coll_id>/users")]
async fn get_collection_users(org_id: &str, coll_id: &str, _headers: ManagerHeaders, mut conn: DbConn) -> JsonResult {
// Get org and collection, check that collection is from org
let collection = match Collection::find_by_uuid_and_org(coll_id, org_id, &mut conn).await {
None => err!("Collection not found in Organization"),
Some(collection) => collection,
let Some(collection) = Collection::find_by_uuid_and_org(coll_id, org_id, &mut conn).await else {
err!("Collection not found in Organization")
};
let mut user_list = Vec::new();
@@ -730,9 +705,8 @@ async fn put_collection_users(
// And then add all the received ones (except if the user has access_all)
for d in data.iter() {
let user = match UserOrganization::find_by_uuid(&d.id, &mut conn).await {
Some(u) => u,
None => err!("User is not part of organization"),
let Some(user) = UserOrganization::find_by_uuid_and_org(&d.id, org_id, &mut conn).await else {
err!("User is not part of organization")
};
if user.access_all {
@@ -872,20 +846,19 @@ async fn send_invite(org_id: &str, data: Json<InviteData>, headers: AdminHeaders
}
for email in data.emails.iter() {
let email = email.to_lowercase();
let mut user_org_status = UserOrgStatus::Invited as i32;
let user = match User::find_by_mail(&email, &mut conn).await {
let user = match User::find_by_mail(email, &mut conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("User does not exist: {email}"))
}
if !CONFIG.is_email_domain_allowed(&email) {
if !CONFIG.is_email_domain_allowed(email) {
err!("Email domain not eligible for invitations")
}
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(&email);
let invitation = Invitation::new(email);
invitation.save(&mut conn).await?;
}
@@ -1007,18 +980,16 @@ async fn reinvite_user(org_id: &str, user_org: &str, headers: AdminHeaders, mut
}
async fn _reinvite_user(org_id: &str, user_org: &str, invited_by_email: &str, conn: &mut DbConn) -> EmptyResult {
let user_org = match UserOrganization::find_by_uuid(user_org, conn).await {
Some(user_org) => user_org,
None => err!("The user hasn't been invited to the organization."),
let Some(user_org) = UserOrganization::find_by_uuid_and_org(user_org, org_id, conn).await else {
err!("The user hasn't been invited to the organization.")
};
if user_org.status != UserOrgStatus::Invited as i32 {
err!("The user is already accepted or confirmed to the organization")
}
let user = match User::find_by_uuid(&user_org.user_uuid, conn).await {
Some(user) => user,
None => err!("User not found."),
let Some(user) = User::find_by_uuid(&user_org.user_uuid, conn).await else {
err!("User not found.")
};
if !CONFIG.invitations_allowed() && user.password_hash.is_empty() {
@@ -1059,20 +1030,25 @@ struct AcceptData {
reset_password_key: Option<String>,
}
#[post("/organizations/<org_id>/users/<_org_user_id>/accept", data = "<data>")]
async fn accept_invite(org_id: &str, _org_user_id: &str, data: Json<AcceptData>, mut conn: DbConn) -> EmptyResult {
#[post("/organizations/<org_id>/users/<org_user_id>/accept", data = "<data>")]
async fn accept_invite(org_id: &str, org_user_id: &str, data: Json<AcceptData>, mut conn: DbConn) -> EmptyResult {
// The web-vault passes org_id and org_user_id in the URL, but we are just reading them from the JWT instead
let data: AcceptData = data.into_inner();
let claims = decode_invite(&data.token)?;
// If a claim does not have a user_org_id or it does not match the one from the URI, something is wrong.
match &claims.user_org_id {
Some(ou_id) if ou_id.eq(org_user_id) => {}
_ => err!("Error accepting the invitation", "Claim does not match the org_user_id"),
}
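The added guard rejects an invite token whose user_org_id claim is missing or differs from the membership id in the URL. A hedged sketch of that comparison with plain std types; the claim field name mirrors the hunk, everything else is illustrative:

struct InviteClaims {
    user_org_id: Option<String>,
}

fn check_claim(claims: &InviteClaims, org_user_id: &str) -> Result<(), &'static str> {
    // Accept only when the claim is present and equal to the path segment.
    match &claims.user_org_id {
        Some(ou_id) if ou_id == org_user_id => Ok(()),
        _ => Err("Claim does not match the org_user_id"),
    }
}

fn main() {
    let claims = InviteClaims { user_org_id: Some("abc".into()) };
    assert!(check_claim(&claims, "abc").is_ok());
    assert!(check_claim(&claims, "xyz").is_err());
    assert!(check_claim(&InviteClaims { user_org_id: None }, "abc").is_err());
}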
match User::find_by_mail(&claims.email, &mut conn).await {
Some(user) => {
Invitation::take(&claims.email, &mut conn).await;
if let (Some(user_org), Some(org)) = (&claims.user_org_id, &claims.org_id) {
let mut user_org = match UserOrganization::find_by_uuid_and_org(user_org, org, &mut conn).await {
Some(user_org) => user_org,
None => err!("Error accepting the invitation"),
let Some(mut user_org) = UserOrganization::find_by_uuid_and_org(user_org, org, &mut conn).await else {
err!("Error accepting the invitation")
};
if user_org.status != UserOrgStatus::Invited as i32 {
@@ -1213,9 +1189,8 @@ async fn _confirm_invite(
err!("Key or UserId is not set, unable to process request");
}
let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await {
Some(user) => user,
None => err!("The specified user isn't a member of the organization"),
let Some(mut user_to_confirm) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await else {
err!("The specified user isn't a member of the organization")
};
if user_to_confirm.atype != UserOrgType::User && headers.org_user_type != UserOrgType::Owner {
@@ -1287,9 +1262,8 @@ async fn get_user(
_headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
let user = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't a member of the organization"),
let Some(user) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await else {
err!("The specified user isn't a member of the organization")
};
// In this case, when groups are requested we also need to include collections.
@@ -1331,14 +1305,12 @@ async fn edit_user(
) -> EmptyResult {
let data: EditUserData = data.into_inner();
let new_type = match UserOrgType::from_str(&data.r#type.into_string()) {
Some(new_type) => new_type,
None => err!("Invalid type"),
let Some(new_type) = UserOrgType::from_str(&data.r#type.into_string()) else {
err!("Invalid type")
};
let mut user_to_edit = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
let Some(mut user_to_edit) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await else {
err!("The specified user isn't member of the organization")
};
if new_type != user_to_edit.atype
@@ -1490,9 +1462,8 @@ async fn _delete_user(
conn: &mut DbConn,
nt: &Notify<'_>,
) -> EmptyResult {
let user_to_delete = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await {
Some(user) => user,
None => err!("User to delete isn't member of the organization"),
let Some(user_to_delete) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await else {
err!("User to delete isn't member of the organization")
};
if user_to_delete.atype != UserOrgType::User && headers.org_user_type != UserOrgType::Owner {
@@ -1598,40 +1569,43 @@ async fn post_org_import(
// TODO: See if we can optimize the whole cipher adding/importing and prevent duplicate code and checks.
Cipher::validate_cipher_data(&data.ciphers)?;
let mut collections = Vec::new();
let existing_collections: HashSet<Option<String>> =
Collection::find_by_organization(&org_id, &mut conn).await.into_iter().map(|c| (Some(c.uuid))).collect();
let mut collections: Vec<String> = Vec::with_capacity(data.collections.len());
for coll in data.collections {
let collection = Collection::new(org_id.clone(), coll.name, coll.external_id);
if collection.save(&mut conn).await.is_err() {
collections.push(Err(Error::new("Failed to create Collection", "Failed to create Collection")));
let collection_uuid = if existing_collections.contains(&coll.id) {
coll.id.unwrap()
} else {
collections.push(Ok(collection));
}
let new_collection = Collection::new(org_id.clone(), coll.name, coll.external_id);
new_collection.save(&mut conn).await?;
new_collection.uuid
};
collections.push(collection_uuid);
}
// Read the relations between collections and ciphers
let mut relations = Vec::new();
// Ciphers can be in multiple collections at the same time
let mut relations = Vec::with_capacity(data.collection_relationships.len());
for relation in data.collection_relationships {
relations.push((relation.key, relation.value));
}
let headers: Headers = headers.into();
let mut ciphers = Vec::new();
for cipher_data in data.ciphers {
let mut ciphers: Vec<String> = Vec::with_capacity(data.ciphers.len());
for mut cipher_data in data.ciphers {
// Always clear folder_id's via an organization import
cipher_data.folder_id = None;
let mut cipher = Cipher::new(cipher_data.r#type, cipher_data.name.clone());
update_cipher_from_data(&mut cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None).await.ok();
ciphers.push(cipher);
ciphers.push(cipher.uuid);
}
// Assign the collections
for (cipher_index, coll_index) in relations {
let cipher_id = &ciphers[cipher_index].uuid;
let coll = &collections[coll_index];
let coll_id = match coll {
Ok(coll) => coll.uuid.as_str(),
Err(_) => err!("Failed to assign to collection"),
};
let cipher_id = &ciphers[cipher_index];
let coll_id = &collections[coll_index];
CollectionCipher::save(cipher_id, coll_id, &mut conn).await?;
}
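The import path now keeps plain id strings: collection ids already present in the organization are reused via a HashSet lookup, new collections are created, and the (cipher_index, coll_index) relations are resolved by position. A simplified, database-free sketch of that mapping; the ids are invented and "new-{i}" stands in for creating a collection:

use std::collections::HashSet;

fn resolve_collections(
    existing: &HashSet<String>,
    incoming: Vec<Option<String>>, // `id` from the import payload, if any
) -> Vec<String> {
    let mut out = Vec::with_capacity(incoming.len());
    for (i, id) in incoming.into_iter().enumerate() {
        let uuid = match id {
            // Reuse a collection that already exists in the organization.
            Some(id) if existing.contains(&id) => id,
            // Otherwise pretend to create one and use its fresh id.
            _ => format!("new-{i}"),
        };
        out.push(uuid);
    }
    out
}

fn main() {
    let existing: HashSet<String> = ["a".to_string()].into_iter().collect();
    let collections = resolve_collections(&existing, vec![Some("a".into()), None]);
    let ciphers = vec!["c0".to_string(), "c1".to_string()];

    // Relations are (cipher_index, collection_index) pairs, resolved by position.
    for (cipher_index, coll_index) in [(0usize, 0usize), (1, 1)] {
        println!("assign {} -> {}", ciphers[cipher_index], collections[coll_index]);
    }
}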
@@ -1649,7 +1623,7 @@ struct BulkCollectionsData {
remove_collections: bool,
}
// This endpoint is only reachable via the organization view, therefor this endpoint is located here
// This endpoint is only reachable via the organization view, therefore this endpoint is located here
// Also Bitwarden does not send out Notifications for these changes, it only does this for individual cipher collection updates
#[post("/ciphers/bulk-collections", data = "<data>")]
async fn post_bulk_collections(data: Json<BulkCollectionsData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
@@ -1722,9 +1696,8 @@ async fn list_policies_token(org_id: &str, token: &str, mut conn: DbConn) -> Jso
let invite = decode_invite(token)?;
let invite_org_id = match invite.org_id {
Some(invite_org_id) => invite_org_id,
None => err!("Invalid token"),
let Some(invite_org_id) = invite.org_id else {
err!("Invalid token")
};
if invite_org_id != org_id {
@@ -1744,9 +1717,8 @@ async fn list_policies_token(org_id: &str, token: &str, mut conn: DbConn) -> Jso
#[get("/organizations/<org_id>/policies/<pol_type>")]
async fn get_policy(org_id: &str, pol_type: i32, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
Some(pt) => pt,
None => err!("Invalid or unsupported policy type"),
let Some(pol_type_enum) = OrgPolicyType::from_i32(pol_type) else {
err!("Invalid or unsupported policy type")
};
let policy = match OrgPolicy::find_by_org_and_type(org_id, pol_type_enum, &mut conn).await {
@@ -1775,9 +1747,8 @@ async fn put_policy(
) -> JsonResult {
let data: PolicyData = data.into_inner();
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
Some(pt) => pt,
None => err!("Invalid or unsupported policy type"),
let Some(pol_type_enum) = OrgPolicyType::from_i32(pol_type) else {
err!("Invalid or unsupported policy type")
};
// Bitwarden only allows the Reset Password policy when Single Org policy is enabled
@@ -2305,14 +2276,14 @@ async fn _restore_organization_user(
}
#[get("/organizations/<org_id>/groups")]
async fn get_groups(org_id: &str, headers: ManagerHeadersLoose, mut conn: DbConn) -> JsonResult {
async fn get_groups(org_id: &str, _headers: ManagerHeadersLoose, mut conn: DbConn) -> JsonResult {
let groups: Vec<Value> = if CONFIG.org_groups_enabled() {
// Group::find_by_organization(&org_id, &mut conn).await.iter().map(Group::to_json).collect::<Value>()
let groups = Group::find_by_organization(org_id, &mut conn).await;
let mut groups_json = Vec::with_capacity(groups.len());
for g in groups {
groups_json.push(g.to_json_details(&headers.org_user.atype, &mut conn).await)
groups_json.push(g.to_json_details(&mut conn).await)
}
groups_json
} else {
@@ -2434,9 +2405,8 @@ async fn put_group(
err!("Group support is disabled");
}
let group = match Group::find_by_uuid(group_id, &mut conn).await {
Some(group) => group,
None => err!("Group not found"),
let Some(group) = Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await else {
err!("Group not found", "Group uuid is invalid or does not belong to the organization")
};
let group_request = data.into_inner();
@@ -2499,18 +2469,17 @@ async fn add_update_group(
})))
}
#[get("/organizations/<_org_id>/groups/<group_id>/details")]
async fn get_group_details(_org_id: &str, group_id: &str, headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
#[get("/organizations/<org_id>/groups/<group_id>/details")]
async fn get_group_details(org_id: &str, group_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
if !CONFIG.org_groups_enabled() {
err!("Group support is disabled");
}
let group = match Group::find_by_uuid(group_id, &mut conn).await {
Some(group) => group,
_ => err!("Group could not be found!"),
let Some(group) = Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await else {
err!("Group not found", "Group uuid is invalid or does not belong to the organization")
};
Ok(Json(group.to_json_details(&(headers.org_user_type as i32), &mut conn).await))
Ok(Json(group.to_json_details(&mut conn).await))
}
#[post("/organizations/<org_id>/groups/<group_id>/delete")]
@@ -2528,9 +2497,8 @@ async fn _delete_group(org_id: &str, group_id: &str, headers: &AdminHeaders, con
err!("Group support is disabled");
}
let group = match Group::find_by_uuid(group_id, conn).await {
Some(group) => group,
_ => err!("Group not found"),
let Some(group) = Group::find_by_uuid_and_org(group_id, org_id, conn).await else {
err!("Group not found", "Group uuid is invalid or does not belong to the organization")
};
log_event(
@@ -2566,29 +2534,27 @@ async fn bulk_delete_groups(
Ok(())
}
#[get("/organizations/<_org_id>/groups/<group_id>")]
async fn get_group(_org_id: &str, group_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
#[get("/organizations/<org_id>/groups/<group_id>")]
async fn get_group(org_id: &str, group_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
if !CONFIG.org_groups_enabled() {
err!("Group support is disabled");
}
let group = match Group::find_by_uuid(group_id, &mut conn).await {
Some(group) => group,
_ => err!("Group not found"),
let Some(group) = Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await else {
err!("Group not found", "Group uuid is invalid or does not belong to the organization")
};
Ok(Json(group.to_json()))
}
#[get("/organizations/<_org_id>/groups/<group_id>/users")]
async fn get_group_users(_org_id: &str, group_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
#[get("/organizations/<org_id>/groups/<group_id>/users")]
async fn get_group_users(org_id: &str, group_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
if !CONFIG.org_groups_enabled() {
err!("Group support is disabled");
}
match Group::find_by_uuid(group_id, &mut conn).await {
Some(_) => { /* Do nothing */ }
_ => err!("Group could not be found!"),
if Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await.is_none() {
err!("Group could not be found!", "Group uuid is invalid or does not belong to the organization")
};
let group_users: Vec<String> = GroupUser::find_by_group(group_id, &mut conn)
@@ -2612,9 +2578,8 @@ async fn put_group_users(
err!("Group support is disabled");
}
match Group::find_by_uuid(group_id, &mut conn).await {
Some(_) => { /* Do nothing */ }
_ => err!("Group could not be found!"),
if Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await.is_none() {
err!("Group could not be found!", "Group uuid is invalid or does not belong to the organization")
};
GroupUser::delete_all_by_group(group_id, &mut conn).await?;
@@ -2639,15 +2604,14 @@ async fn put_group_users(
Ok(())
}
#[get("/organizations/<_org_id>/users/<user_id>/groups")]
async fn get_user_groups(_org_id: &str, user_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
#[get("/organizations/<org_id>/users/<user_id>/groups")]
async fn get_user_groups(org_id: &str, user_id: &str, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
if !CONFIG.org_groups_enabled() {
err!("Group support is disabled");
}
match UserOrganization::find_by_uuid(user_id, &mut conn).await {
Some(_) => { /* Do nothing */ }
_ => err!("User could not be found!"),
if UserOrganization::find_by_uuid_and_org(user_id, org_id, &mut conn).await.is_none() {
err!("User could not be found!")
};
let user_groups: Vec<String> =
@@ -2685,13 +2649,8 @@ async fn put_user_groups(
err!("Group support is disabled");
}
let user_org = match UserOrganization::find_by_uuid(org_user_id, &mut conn).await {
Some(uo) => uo,
_ => err!("User could not be found!"),
};
if user_org.org_uuid != org_id {
err!("Group doesn't belong to organization");
if UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await.is_none() {
err!("User could not be found or does not belong to the organization.");
}
GroupUser::delete_all_by_user(org_user_id, &mut conn).await?;
@@ -2739,22 +2698,12 @@ async fn delete_group_user(
err!("Group support is disabled");
}
let user_org = match UserOrganization::find_by_uuid(org_user_id, &mut conn).await {
Some(uo) => uo,
_ => err!("User could not be found!"),
};
if user_org.org_uuid != org_id {
err!("User doesn't belong to organization");
if UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await.is_none() {
err!("User could not be found or does not belong to the organization.");
}
let group = match Group::find_by_uuid(group_id, &mut conn).await {
Some(g) => g,
_ => err!("Group could not be found!"),
};
if group.organizations_uuid != org_id {
err!("Group doesn't belong to organization");
if Group::find_by_uuid_and_org(group_id, org_id, &mut conn).await.is_none() {
err!("Group could not be found or does not belong to the organization.");
}
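Several endpoints in this hunk swap find_by_uuid plus a manual org_uuid comparison for a single org-scoped lookup, so a record from another organization behaves exactly like a missing one. A database-free sketch of the idea over an in-memory slice; the struct and function names are illustrative, not the Vaultwarden models:

struct GroupRow {
    uuid: String,
    organizations_uuid: String,
}

// Scoped lookup: both identifiers are part of the query, so callers cannot
// forget the ownership check.
fn find_by_uuid_and_org<'a>(rows: &'a [GroupRow], uuid: &str, org_id: &str) -> Option<&'a GroupRow> {
    rows.iter().find(|g| g.uuid == uuid && g.organizations_uuid == org_id)
}

fn main() {
    let rows = vec![GroupRow { uuid: "g1".into(), organizations_uuid: "org-a".into() }];
    assert!(find_by_uuid_and_org(&rows, "g1", "org-a").is_some());
    // Same uuid, wrong organization: treated exactly like a missing record.
    assert!(find_by_uuid_and_org(&rows, "g1", "org-b").is_none());
}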
log_event(
@@ -2786,14 +2735,13 @@ struct OrganizationUserResetPasswordRequest {
key: String,
}
// Upstrem reports this is the renamed endpoint instead of `/keys`
// Upstream reports this is the renamed endpoint instead of `/keys`
// But the clients do not seem to use this at all
// Just add it here in case they will
#[get("/organizations/<org_id>/public-key")]
async fn get_organization_public_key(org_id: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
let org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(organization) => organization,
None => err!("Organization not found"),
let Some(org) = Organization::find_by_uuid(org_id, &mut conn).await else {
err!("Organization not found")
};
Ok(Json(json!({
@@ -2818,19 +2766,16 @@ async fn put_reset_password(
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(org) => org,
None => err!("Required organization not found"),
let Some(org) = Organization::find_by_uuid(org_id, &mut conn).await else {
err!("Required organization not found")
};
let org_user = match UserOrganization::find_by_uuid_and_org(org_user_id, &org.uuid, &mut conn).await {
Some(user) => user,
None => err!("User to reset isn't member of required organization"),
let Some(org_user) = UserOrganization::find_by_uuid_and_org(org_user_id, &org.uuid, &mut conn).await else {
err!("User to reset isn't member of required organization")
};
let user = match User::find_by_uuid(&org_user.user_uuid, &mut conn).await {
Some(user) => user,
None => err!("User not found"),
let Some(user) = User::find_by_uuid(&org_user.user_uuid, &mut conn).await else {
err!("User not found")
};
check_reset_password_applicable_and_permissions(org_id, org_user_id, &headers, &mut conn).await?;
@@ -2877,19 +2822,16 @@ async fn get_reset_password_details(
headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
let org = match Organization::find_by_uuid(org_id, &mut conn).await {
Some(org) => org,
None => err!("Required organization not found"),
let Some(org) = Organization::find_by_uuid(org_id, &mut conn).await else {
err!("Required organization not found")
};
let org_user = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await {
Some(user) => user,
None => err!("User to reset isn't member of required organization"),
let Some(org_user) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, &mut conn).await else {
err!("User to reset isn't member of required organization")
};
let user = match User::find_by_uuid(&org_user.user_uuid, &mut conn).await {
Some(user) => user,
None => err!("User not found"),
let Some(user) = User::find_by_uuid(&org_user.user_uuid, &mut conn).await else {
err!("User not found")
};
check_reset_password_applicable_and_permissions(org_id, org_user_id, &headers, &mut conn).await?;
@@ -2915,9 +2857,8 @@ async fn check_reset_password_applicable_and_permissions(
) -> EmptyResult {
check_reset_password_applicable(org_id, conn).await?;
let target_user = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await {
Some(user) => user,
None => err!("Reset target user not found"),
let Some(target_user) = UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await else {
err!("Reset target user not found")
};
// Resetting user must be higher/equal to user to reset
@@ -2933,9 +2874,8 @@ async fn check_reset_password_applicable(org_id: &str, conn: &mut DbConn) -> Emp
err!("Password reset is not supported on an email-disabled instance.");
}
let policy = match OrgPolicy::find_by_org_and_type(org_id, OrgPolicyType::ResetPassword, conn).await {
Some(p) => p,
None => err!("Policy not found"),
let Some(policy) = OrgPolicy::find_by_org_and_type(org_id, OrgPolicyType::ResetPassword, conn).await else {
err!("Policy not found")
};
if !policy.enabled {
@@ -2953,9 +2893,8 @@ async fn put_reset_password_enrollment(
data: Json<OrganizationUserResetPasswordEnrollmentRequest>,
mut conn: DbConn,
) -> EmptyResult {
let mut org_user = match UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await {
Some(u) => u,
None => err!("User to enroll isn't member of required organization"),
let Some(mut org_user) = UserOrganization::find_by_user_and_org(&headers.user.uuid, org_id, &mut conn).await else {
err!("User to enroll isn't member of required organization")
};
check_reset_password_applicable(org_id, &mut conn).await?;
@@ -2999,18 +2938,20 @@ async fn put_reset_password_enrollment(
// We need to convert all keys so they have the first character to be a lowercase.
// Else the export will be just an empty JSON file.
#[get("/organizations/<org_id>/export")]
async fn get_org_export(org_id: &str, headers: AdminHeaders, mut conn: DbConn) -> Json<Value> {
use semver::{Version, VersionReq};
async fn get_org_export(
org_id: &str,
headers: AdminHeaders,
client_version: Option<ClientVersion>,
mut conn: DbConn,
) -> Json<Value> {
// Since version v2023.1.0 the format of the export is different.
// Also, this endpoint was created since v2022.9.0.
// Therefore, we will check for any version smaller than v2023.1.0 and return a different response.
// If we can't determine the version, we will use the latest default v2023.1.0 and higher.
// https://github.com/bitwarden/server/blob/9ca93381ce416454734418c3a9f99ab49747f1b6/src/Api/Controllers/OrganizationExportController.cs#L44
let use_list_response_model = if let Some(client_version) = headers.client_version {
let ver_match = VersionReq::parse("<2023.1.0").unwrap();
let client_version = Version::parse(&client_version).unwrap();
ver_match.matches(&client_version)
let use_list_response_model = if let Some(client_version) = client_version {
let ver_match = semver::VersionReq::parse("<2023.1.0").unwrap();
ver_match.matches(&client_version.0)
} else {
false
};
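The export format switch is a plain semver range test: clients older than 2023.1.0 get the legacy list response. A small sketch using the semver crate; the requirement string comes from the hunk, the sample versions are made up:

use semver::{Version, VersionReq};

fn use_list_response_model(client_version: Option<&str>) -> bool {
    match client_version {
        Some(v) => {
            let ver_match = VersionReq::parse("<2023.1.0").unwrap();
            // Unknown or unparsable versions fall through to the new format.
            Version::parse(v).map(|v| ver_match.matches(&v)).unwrap_or(false)
        }
        None => false,
    }
}

fn main() {
    assert!(use_list_response_model(Some("2022.10.0")));
    assert!(!use_list_response_model(Some("2024.1.0")));
    assert!(!use_list_response_model(None));
}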

View File

@@ -203,9 +203,8 @@ impl<'r> FromRequest<'r> for PublicToken {
None => err_handler!("No access token provided"),
};
// Check JWT token is valid and get device and user from it
let claims = match auth::decode_api_org(access_token) {
Ok(claims) => claims,
Err(_) => err_handler!("Invalid claim"),
let Ok(claims) = auth::decode_api_org(access_token) else {
err_handler!("Invalid claim")
};
// Check if time is between claims.nbf and claims.exp
let time_now = Utc::now().timestamp();
@@ -227,13 +226,11 @@ impl<'r> FromRequest<'r> for PublicToken {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let org_uuid = match claims.client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err_handler!("Malformed client_id"),
let Some(org_uuid) = claims.client_id.strip_prefix("organization.") else {
err_handler!("Malformed client_id")
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, &conn).await {
Some(org_api_key) => org_api_key,
None => err_handler!("Invalid client_id"),
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(org_uuid, &conn).await else {
err_handler!("Invalid client_id")
};
if org_api_key.org_uuid != claims.client_sub {
err_handler!("Token not issued for this org");

View File

@@ -159,16 +159,10 @@ async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
#[get("/sends/<uuid>")]
async fn get_send(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(uuid, &mut conn).await {
Some(send) => send,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
match Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await {
Some(send) => Ok(Json(send.to_json())),
None => err!("Send not found", "Invalid uuid or does not belong to user"),
}
Ok(Json(send.to_json()))
}
#[post("/sends", data = "<data>")]
@@ -371,22 +365,14 @@ async fn post_send_file_v2_data(
let mut data = data.into_inner();
let Some(send) = Send::find_by_uuid(send_uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.")
let Some(send) = Send::find_by_uuid_and_user(send_uuid, &headers.user.uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.", "Invalid uuid or does not belong to user.")
};
if send.atype != SendType::File as i32 {
err!("Send is not a file type send.");
}
let Some(send_user_id) = &send.user_uuid else {
err!("Sends are only supported for users at the moment.")
};
if send_user_id != &headers.user.uuid {
err!("Send doesn't belong to user.");
}
let Ok(send_data) = serde_json::from_str::<SendFileData>(&send.data) else {
err!("Unable to decode send data as json.")
};
@@ -456,9 +442,8 @@ async fn post_access(
ip: ClientIp,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_access_id(access_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
let Some(mut send) = Send::find_by_access_id(access_id, &mut conn).await else {
err_code!(SEND_INACCESSIBLE_MSG, 404)
};
if let Some(max_access_count) = send.max_access_count {
@@ -517,9 +502,8 @@ async fn post_access_file(
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_uuid(send_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
let Some(mut send) = Send::find_by_uuid(send_id, &mut conn).await else {
err_code!(SEND_INACCESSIBLE_MSG, 404)
};
if let Some(max_access_count) = send.max_access_count {
@@ -582,16 +566,15 @@ async fn download_send(send_id: SafeString, file_id: SafeString, t: &str) -> Opt
None
}
#[put("/sends/<id>", data = "<data>")]
async fn put_send(id: &str, data: Json<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
#[put("/sends/<uuid>", data = "<data>")]
async fn put_send(uuid: &str, data: Json<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner();
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
let Some(mut send) = Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Send uuid is invalid or does not belong to user")
};
update_send_from_data(&mut send, data, &headers, &mut conn, &nt, UpdateType::SyncSendUpdate).await?;
@@ -657,17 +640,12 @@ pub async fn update_send_from_data(
Ok(())
}
#[delete("/sends/<id>")]
async fn delete_send(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
#[delete("/sends/<uuid>")]
async fn delete_send(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let Some(send) = Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user")
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.delete(&mut conn).await?;
nt.send_send_update(
UpdateType::SyncSendDelete,
@@ -681,19 +659,14 @@ async fn delete_send(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_
Ok(())
}
#[put("/sends/<id>/remove-password")]
async fn put_remove_password(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
#[put("/sends/<uuid>/remove-password")]
async fn put_remove_password(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
let Some(mut send) = Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user")
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.set_password(None);
send.save(&mut conn).await?;
nt.send_send_update(

View File

@@ -117,9 +117,8 @@ pub async fn validate_totp_code(
) -> EmptyResult {
use totp_lite::{totp_custom, Sha1};
let decoded_secret = match BASE32.decode(secret.as_bytes()) {
Ok(s) => s,
Err(_) => err!("Invalid TOTP secret"),
let Ok(decoded_secret) = BASE32.decode(secret.as_bytes()) else {
err!("Invalid TOTP secret")
};
let mut twofactor =

View File

@@ -232,9 +232,8 @@ async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
let type_ = TwoFactorType::Duo as i32;
// If the user doesn't have an entry, disabled
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn).await {
Some(t) => t,
None => return DuoStatus::Disabled(DuoData::global().is_some()),
let Some(twofactor) = TwoFactor::find_by_user_and_type(uuid, type_, conn).await else {
return DuoStatus::Disabled(DuoData::global().is_some());
};
// If the user has the required values, we use those
@@ -333,14 +332,12 @@ fn parse_duo_values(key: &str, val: &str, ikey: &str, prefix: &str, time: i64) -
err!("Prefixes don't match")
}
let cookie_vec = match BASE64.decode(u_b64.as_bytes()) {
Ok(c) => c,
Err(_) => err!("Invalid Duo cookie encoding"),
let Ok(cookie_vec) = BASE64.decode(u_b64.as_bytes()) else {
err!("Invalid Duo cookie encoding")
};
let cookie = match String::from_utf8(cookie_vec) {
Ok(c) => c,
Err(_) => err!("Invalid Duo cookie encoding"),
let Ok(cookie) = String::from_utf8(cookie_vec) else {
err!("Invalid Duo cookie encoding")
};
let cookie_split: Vec<&str> = cookie.split('|').collect();

View File

@@ -211,10 +211,7 @@ impl DuoClient {
nonce,
};
let token = match self.encode_duo_jwt(jwt_payload) {
Ok(token) => token,
Err(e) => return Err(e),
};
let token = self.encode_duo_jwt(jwt_payload)?;
let authz_endpoint = format!("https://{}/oauth/v1/authorize", self.api_host);
let mut auth_url = match Url::parse(authz_endpoint.as_str()) {

View File

@@ -40,9 +40,8 @@ async fn send_email_login(data: Json<SendEmailLoginData>, mut conn: DbConn) -> E
use crate::db::models::User;
// Get the user
let user = match User::find_by_mail(&data.email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
let Some(user) = User::find_by_mail(&data.email, &mut conn).await else {
err!("Username or password is incorrect. Try again.")
};
// Check password
@@ -174,9 +173,8 @@ async fn email(data: Json<EmailData>, headers: Headers, mut conn: DbConn) -> Jso
let mut email_data = EmailTokenData::from_json(&twofactor.data)?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!("No token available"),
let Some(issued_token) = &email_data.last_token else {
err!("No token available")
};
if !crypto::ct_eq(issued_token, data.token) {
@@ -205,14 +203,13 @@ pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, c
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
.await
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!(
let Some(issued_token) = &email_data.last_token else {
err!(
"No token available",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
)
};
if !crypto::ct_eq(issued_token, token) {

View File

@@ -85,9 +85,8 @@ async fn recover(data: Json<RecoverTwoFactor>, client_headers: ClientHeaders, mu
use crate::db::models::User;
// Get the user
let mut user = match User::find_by_mail(&data.email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
let Some(mut user) = User::find_by_mail(&data.email, &mut conn).await else {
err!("Username or password is incorrect. Try again.")
};
// Check password

View File

@@ -309,17 +309,16 @@ async fn delete_webauthn(data: Json<DeleteU2FData>, headers: Headers, mut conn:
err!("Invalid password");
}
let mut tf =
match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let Some(mut tf) =
TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await
else {
err!("Webauthn data not found!")
};
let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;
let item_pos = match data.iter().position(|r| r.id == id) {
Some(p) => p,
None => err!("Webauthn entry not found"),
let Some(item_pos) = data.iter().position(|r| r.id == id) else {
err!("Webauthn entry not found")
};
let removed_item = data.remove(item_pos);

View File

@@ -19,7 +19,7 @@ use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
};
use html5gum::{Emitter, HtmlString, InfallibleTokenizer, Readable, StringReader, Tokenizer};
use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer};
use crate::{
error::Error,
@@ -261,11 +261,7 @@ impl Icon {
}
}
fn get_favicons_node(
dom: InfallibleTokenizer<StringReader<'_>, FaviconEmitter>,
icons: &mut Vec<Icon>,
url: &url::Url,
) {
fn get_favicons_node(dom: Tokenizer<StringReader<'_>, FaviconEmitter>, icons: &mut Vec<Icon>, url: &url::Url) {
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
@@ -274,7 +270,7 @@ fn get_favicons_node(
let mut base_url = url.clone();
let mut icon_tags: Vec<Tag> = Vec::new();
for token in dom {
for Ok(token) in dom {
let tag_name: &[u8] = &token.tag.name;
match tag_name {
TAG_LINK => {
@@ -401,7 +397,7 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// 384KB should be more than enough for the HTML, though as we only really need the HTML header.
let limited_reader = stream_to_bytes_limit(content, 384 * 1024).await?.to_vec();
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default()).infallible();
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default());
get_favicons_node(dom, &mut iconlist, &url);
} else {
// Add the default favicon.ico to the list with just the given domain
@@ -662,7 +658,7 @@ impl reqwest::cookie::CookieStore for Jar {
/// The FaviconEmitter is using an optimized version of the DefaultEmitter.
/// This prevents emitting tags like comments, doctype and also strings between the tags.
/// But it will also only emit the tags we need and only if they have the correct attributes
/// Therefor parsing the HTML content is faster.
/// Therefore parsing the HTML content is faster.
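With the InfallibleTokenizer wrapper gone, the tokenizer iterator yields Result items; the for Ok(token) in dom loop above is fine because the string reader's error type is uninhabited on newer toolchains, so the pattern is irrefutable there. A version-agnostic sketch of consuming such a fallible token stream, using plain std types instead of the html5gum API:

fn main() {
    // Stand-in for an iterator of tokenization results.
    let dom: Vec<Result<&str, std::convert::Infallible>> = vec![Ok("link"), Ok("base")];

    for token in dom {
        // Equivalent to destructuring `Ok(token)` directly when the error can
        // never occur; the `continue` keeps it valid for fallible readers too.
        let Ok(token) = token else { continue };
        println!("tag: {token}");
    }
}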
use std::collections::BTreeMap;
#[derive(Default)]

View File

@@ -120,16 +120,8 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
"expires_in": expires_in,
"token_type": "Bearer",
"refresh_token": device.refresh_token,
"Key": user.akey,
"PrivateKey": user.private_key,
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": scope,
"unofficialServer": true,
});
Ok(Json(result))
@@ -165,28 +157,30 @@ async fn _password_login(
// Get the user
let username = data.username.as_ref().unwrap().trim();
let mut user = match User::find_by_mail(username, conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
let Some(mut user) = User::find_by_mail(username, conn).await else {
err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username))
};
// Set the user_uuid here to be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Check password
let password = data.password.as_ref().unwrap();
if let Some(auth_request_uuid) = data.auth_request.clone() {
if let Some(auth_request) = AuthRequest::find_by_uuid(auth_request_uuid.as_str(), conn).await {
if !auth_request.check_access_code(password) {
err!(
"Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
// Check if the user is disabled
if !user.enabled {
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
} else {
)
}
let password = data.password.as_ref().unwrap();
// If we get an auth request, we don't check the user's password, but the access code of the auth request
if let Some(ref auth_request_uuid) = data.auth_request {
let Some(auth_request) = AuthRequest::find_by_uuid_and_user(auth_request_uuid.as_str(), &user.uuid, conn).await
else {
err!(
"Auth request not found. Try again.",
format!("IP: {}. Username: {}.", ip.ip, username),
@@ -194,6 +188,24 @@ async fn _password_login(
event: EventType::UserFailedLogIn,
}
)
};
let expiration_time = auth_request.creation_date + chrono::Duration::minutes(5);
let request_expired = Utc::now().naive_utc() >= expiration_time;
if auth_request.user_uuid != user.uuid
|| !auth_request.approved.unwrap_or(false)
|| request_expired
|| ip.ip.to_string() != auth_request.request_ip
|| !auth_request.check_access_code(password)
{
err!(
"Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
} else if !user.check_valid_password(password) {
err!(
@@ -205,8 +217,8 @@ async fn _password_login(
)
}
// Change the KDF Iterations
if user.password_iterations != CONFIG.password_iterations() {
// Change the KDF Iterations (only when not logging in with an auth request)
if data.auth_request.is_none() && user.password_iterations != CONFIG.password_iterations() {
user.password_iterations = CONFIG.password_iterations();
user.set_password(password, None, false, None);
@@ -215,17 +227,6 @@ async fn _password_login(
}
}
// Check if the user is disabled
if !user.enabled {
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let now = Utc::now().naive_utc();
if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
@@ -342,7 +343,6 @@ async fn _password_login(
"MasterPasswordPolicy": master_password_policy,
"scope": scope,
"unofficialServer": true,
"UserDecryptionOptions": {
"HasMasterPassword": !user.password_hash.is_empty(),
"Object": "userDecryptionOptions"
@@ -382,13 +382,11 @@ async fn _user_api_key_login(
) -> JsonResult {
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let client_user_uuid = match client_id.strip_prefix("user.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
let Some(client_user_uuid) = client_id.strip_prefix("user.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip))
};
let user = match User::find_by_uuid(client_user_uuid, conn).await {
Some(user) => user,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
let Some(user) = User::find_by_uuid(client_user_uuid, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip))
};
// Set the user_uuid here to be passed back and used for event logging.
@@ -461,9 +459,8 @@ async fn _user_api_key_login(
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: Same as above
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": "api",
"unofficialServer": true,
});
Ok(Json(result))
@@ -472,13 +469,11 @@ async fn _user_api_key_login(
async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &ClientIp) -> JsonResult {
// Get the org via the client_id
let client_id = data.client_id.as_ref().unwrap();
let org_uuid = match client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
let Some(org_uuid) = client_id.strip_prefix("organization.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip))
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, conn).await {
Some(org_api_key) => org_api_key,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(org_uuid, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip))
};
// Check API key.
@@ -495,7 +490,6 @@ async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &
"expires_in": 3600,
"token_type": "Bearer",
"scope": "api.organization",
"unofficialServer": true,
})))
}
@@ -678,9 +672,8 @@ async fn _json_err_twofactor(
}
Some(tf_type @ TwoFactorType::YubiKey) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No YubiKey devices registered"),
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await else {
err!("No YubiKey devices registered")
};
let yubikey_metadata: yubikey::YubikeyMetadata = serde_json::from_str(&twofactor.data)?;
@@ -691,9 +684,8 @@ async fn _json_err_twofactor(
}
Some(tf_type @ TwoFactorType::Email) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No twofactor email registered"),
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await else {
err!("No twofactor email registered")
};
// Send email immediately if email is the only 2FA option

View File

@@ -9,6 +9,7 @@ use crate::{
api::{ApiResult, EmptyResult, UpdateType},
db::models::{Cipher, Device, Folder, Send, User},
http_client::make_http_request,
util::format_date,
CONFIG,
};
@@ -154,12 +155,9 @@ pub async fn push_cipher_update(
if cipher.organization_uuid.is_some() {
return;
};
let user_uuid = match &cipher.user_uuid {
Some(c) => c,
None => {
debug!("Cipher has no uuid");
return;
}
let Some(user_uuid) = &cipher.user_uuid else {
debug!("Cipher has no uuid");
return;
};
if Device::check_user_has_push_device(user_uuid, conn).await {
@@ -170,10 +168,10 @@ pub async fn push_cipher_update(
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": cipher.uuid,
"userId": cipher.user_uuid,
"organizationId": (),
"revisionDate": cipher.updated_at
"Id": cipher.uuid,
"UserId": cipher.user_uuid,
"OrganizationId": (),
"RevisionDate": format_date(&cipher.updated_at)
}
}))
.await;
@@ -190,8 +188,8 @@ pub fn push_logout(user: &User, acting_device_uuid: Option<String>) {
"identifier": acting_device_uuid,
"type": UpdateType::LogOut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
"UserId": user.uuid,
"Date": format_date(&user.updated_at)
}
})));
}
@@ -204,8 +202,8 @@ pub fn push_user_update(ut: UpdateType, user: &User) {
"identifier": (),
"type": ut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
"UserId": user.uuid,
"Date": format_date(&user.updated_at)
}
})));
}
@@ -224,9 +222,9 @@ pub async fn push_folder_update(
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": folder.uuid,
"userId": folder.user_uuid,
"revisionDate": folder.updated_at
"Id": folder.uuid,
"UserId": folder.user_uuid,
"RevisionDate": format_date(&folder.updated_at)
}
})));
}
@@ -242,9 +240,9 @@ pub async fn push_send_update(ut: UpdateType, send: &Send, acting_device_uuid: &
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": send.uuid,
"userId": send.user_uuid,
"revisionDate": send.revision_date
"Id": send.uuid,
"UserId": send.user_uuid,
"RevisionDate": format_date(&send.revision_date)
}
})));
}
@@ -295,8 +293,8 @@ pub async fn push_auth_request(user_uuid: String, auth_request_uuid: String, con
"identifier": null,
"type": UpdateType::AuthRequest as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
"Id": auth_request_uuid,
"UserId": user_uuid,
}
})));
}
@@ -316,8 +314,8 @@ pub async fn push_auth_response(
"identifier": approving_device_uuid,
"type": UpdateType::AuthRequestResponse as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
"Id": auth_request_uuid,
"UserId": user_uuid,
}
})));
}
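These hunks switch the push payload keys to PascalCase and send dates through format_date instead of raw NaiveDateTime values. A sketch of building one such payload with serde_json and chrono; the format_date below is a stand-in for the helper imported at the top of the file, not a claim about its exact format string:

use chrono::{NaiveDateTime, Utc};
use serde_json::json;

// Stand-in for crate::util::format_date: a fixed-precision UTC timestamp string.
fn format_date(dt: &NaiveDateTime) -> String {
    dt.format("%Y-%m-%dT%H:%M:%S%.6fZ").to_string()
}

fn main() {
    let updated_at = Utc::now().naive_utc();
    let payload = json!({
        "Id": "cipher-uuid",
        "UserId": "user-uuid",
        "OrganizationId": (),
        "RevisionDate": format_date(&updated_at),
    });
    println!("{payload}");
}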

View File

@@ -1,13 +1,20 @@
use once_cell::sync::Lazy;
use std::path::{Path, PathBuf};
use rocket::{fs::NamedFile, http::ContentType, response::content::RawHtml as Html, serde::json::Json, Catcher, Route};
use rocket::{
fs::NamedFile,
http::ContentType,
response::{content::RawCss as Css, content::RawHtml as Html, Redirect},
serde::json::Json,
Catcher, Route,
};
use serde_json::Value;
use crate::{
api::{core::now, ApiResult, EmptyResult},
auth::decode_file_download,
error::Error,
util::{Cached, SafeString},
util::{get_web_vault_version, Cached, SafeString},
CONFIG,
};
@@ -16,7 +23,7 @@ pub fn routes() -> Vec<Route> {
// crate::utils::LOGGED_ROUTES to make sure they appear in the log
let mut routes = routes![attachments, alive, alive_head, static_files];
if CONFIG.web_vault_enabled() {
routes.append(&mut routes![web_index, web_index_head, app_id, web_files]);
routes.append(&mut routes![web_index, web_index_direct, web_index_head, app_id, web_files, vaultwarden_css]);
}
#[cfg(debug_assertions)]
@@ -45,11 +52,101 @@ fn not_found() -> ApiResult<Html<String>> {
Ok(Html(text))
}
#[get("/css/vaultwarden.css")]
fn vaultwarden_css() -> Cached<Css<String>> {
// Configure the web-vault version as an integer so it can be used for smaller-or-greater-than comparisons.
// The default is based on the version at which this feature was added.
static WEB_VAULT_VERSION: Lazy<u32> = Lazy::new(|| {
let re = regex::Regex::new(r"(\d{4})\.(\d{1,2})\.(\d{1,2})").unwrap();
let vault_version = get_web_vault_version();
let (major, minor, patch) = match re.captures(&vault_version) {
Some(c) if c.len() == 4 => (
c.get(1).unwrap().as_str().parse().unwrap(),
c.get(2).unwrap().as_str().parse().unwrap(),
c.get(3).unwrap().as_str().parse().unwrap(),
),
_ => (2024, 6, 2),
};
format!("{major}{minor:02}{patch:02}").parse::<u32>().unwrap()
});
// Configure the Vaultwarden version as an integer so it can be used for smaller-or-greater-than comparisons.
// The default is based on the version at which this feature was added.
static VW_VERSION: Lazy<u32> = Lazy::new(|| {
let re = regex::Regex::new(r"(\d{1})\.(\d{1,2})\.(\d{1,2})").unwrap();
let vw_version = crate::VERSION.unwrap_or("1.32.1");
let (major, minor, patch) = match re.captures(vw_version) {
Some(c) if c.len() == 4 => (
c.get(1).unwrap().as_str().parse().unwrap(),
c.get(2).unwrap().as_str().parse().unwrap(),
c.get(3).unwrap().as_str().parse().unwrap(),
),
_ => (1, 32, 1),
};
format!("{major}{minor:02}{patch:02}").parse::<u32>().unwrap()
});
let css_options = json!({
"web_vault_version": *WEB_VAULT_VERSION,
"vw_version": *VW_VERSION,
"signup_disabled": !CONFIG.signups_allowed() && CONFIG.signups_domains_whitelist().is_empty(),
"mail_enabled": CONFIG.mail_enabled(),
"yubico_enabled": CONFIG._enable_yubico() && (CONFIG.yubico_client_id().is_some() == CONFIG.yubico_secret_key().is_some()),
"emergency_access_allowed": CONFIG.emergency_access_allowed(),
"sends_allowed": CONFIG.sends_allowed(),
"load_user_scss": true,
});
let scss = match CONFIG.render_template("scss/vaultwarden.scss", &css_options) {
Ok(t) => t,
Err(e) => {
// Something went wrong loading the template. Use the fallback
warn!("Loading scss/vaultwarden.scss.hbs or scss/user.vaultwarden.scss.hbs failed. {e}");
CONFIG
.render_fallback_template("scss/vaultwarden.scss", &css_options)
.expect("Fallback scss/vaultwarden.scss.hbs to render")
}
};
let css = match grass_compiler::from_string(
scss,
&grass_compiler::Options::default().style(grass_compiler::OutputStyle::Compressed),
) {
Ok(css) => css,
Err(e) => {
// Something went wrong compiling the scss. Use the fallback
warn!("Compiling the Vaultwarden SCSS styles failed. {e}");
let mut css_options = css_options;
css_options["load_user_scss"] = json!(false);
let scss = CONFIG
.render_fallback_template("scss/vaultwarden.scss", &css_options)
.expect("Fallback scss/vaultwarden.scss.hbs to render");
grass_compiler::from_string(
scss,
&grass_compiler::Options::default().style(grass_compiler::OutputStyle::Compressed),
)
.expect("SCSS to compile")
}
};
// Cache for one day should be enough and not too much
Cached::ttl(Css(css), 86_400, false)
}
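The CSS route reduces YYYY.M.P and MAJOR.MINOR.PATCH strings to zero-padded integers so the templates can compare versions numerically. A standalone sketch of just that conversion, reusing the regex and fallback values from the hunk above:

use regex::Regex;

fn web_vault_version_number(vault_version: &str) -> u32 {
    let re = Regex::new(r"(\d{4})\.(\d{1,2})\.(\d{1,2})").unwrap();
    let (major, minor, patch): (u32, u32, u32) = match re.captures(vault_version) {
        Some(c) if c.len() == 4 => (
            c.get(1).unwrap().as_str().parse().unwrap(),
            c.get(2).unwrap().as_str().parse().unwrap(),
            c.get(3).unwrap().as_str().parse().unwrap(),
        ),
        // Fallback: the version this feature was introduced against.
        _ => (2024, 6, 2),
    };
    // 2024.6.2 -> 20240602, so "greater than" works as a numeric comparison.
    format!("{major}{minor:02}{patch:02}").parse::<u32>().unwrap()
}

fn main() {
    assert_eq!(web_vault_version_number("2024.6.2"), 2024_06_02);
    assert_eq!(web_vault_version_number("v2024.12.0"), 2024_12_00);
    assert_eq!(web_vault_version_number("unknown"), 2024_06_02);
}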
#[get("/")]
async fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).await.ok(), false)
}
// Make sure that `/index.html` redirect to actual domain path.
// If not, this might cause issues with the web-vault
#[get("/index.html")]
fn web_index_direct() -> Redirect {
Redirect::to(format!("{}/", CONFIG.domain_path()))
}
#[head("/")]
fn web_index_head() -> EmptyResult {
// Add an explicit HEAD route to prevent uptime monitoring services from

View File

@@ -471,9 +471,8 @@ impl<'r> FromRequest<'r> for Headers {
};
// Check JWT token is valid and get device and user from it
let claims = match decode_login(access_token) {
Ok(claims) => claims,
Err(_) => err_handler!("Invalid claim"),
let Ok(claims) = decode_login(access_token) else {
err_handler!("Invalid claim")
};
let device_uuid = claims.device;
@@ -484,23 +483,20 @@ impl<'r> FromRequest<'r> for Headers {
_ => err_handler!("Error getting DB"),
};
let device = match Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &mut conn).await {
Some(device) => device,
None => err_handler!("Invalid device id"),
let Some(device) = Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &mut conn).await else {
err_handler!("Invalid device id")
};
let user = match User::find_by_uuid(&user_uuid, &mut conn).await {
Some(user) => user,
None => err_handler!("Device has no user associated"),
let Some(user) = User::find_by_uuid(&user_uuid, &mut conn).await else {
err_handler!("Device has no user associated")
};
if user.security_stamp != claims.sstamp {
if let Some(stamp_exception) =
user.stamp_exception.as_deref().and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
{
let current_route = match request.route().and_then(|r| r.name.as_deref()) {
Some(name) => name,
_ => err_handler!("Error getting current route for stamp exception"),
let Some(current_route) = request.route().and_then(|r| r.name.as_deref()) else {
err_handler!("Error getting current route for stamp exception")
};
// Check if the stamp exception has expired first.
@@ -615,7 +611,6 @@ pub struct AdminHeaders {
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub client_version: Option<String>,
pub ip: ClientIp,
}
@@ -625,14 +620,12 @@ impl<'r> FromRequest<'r> for AdminHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
let client_version = request.headers().get_one("Bitwarden-Client-Version").map(String::from);
if headers.org_user_type >= UserOrgType::Admin {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
client_version,
ip: headers.ip,
})
} else {
@@ -900,3 +893,24 @@ impl<'r> FromRequest<'r> for WsAccessTokenHeader {
})
}
}
pub struct ClientVersion(pub semver::Version);
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ClientVersion {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
let Some(version) = headers.get_one("Bitwarden-Client-Version") else {
err_handler!("No Bitwarden-Client-Version header provided")
};
let Ok(version) = semver::Version::parse(version) else {
err_handler!("Invalid Bitwarden-Client-Version header provided")
};
Outcome::Success(ClientVersion(version))
}
}
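Because ClientVersion is an ordinary request guard, a route can accept it as Option<ClientVersion> (as get_org_export does above), and a missing or unparsable Bitwarden-Client-Version header then shows up as None instead of failing the request. A minimal sketch of what the guard boils down to, without the Rocket plumbing:

use semver::Version;

pub struct ClientVersion(pub Version);

// What the guard amounts to: take the raw header value, if any, and parse it;
// anything unparsable behaves like a missing header for an optional guard.
fn client_version_from_header(header: Option<&str>) -> Option<ClientVersion> {
    header.and_then(|v| Version::parse(v).ok()).map(ClientVersion)
}

fn main() {
    assert!(client_version_from_header(Some("2024.6.2")).is_some());
    assert!(client_version_from_header(Some("not-a-version")).is_none());
    assert!(client_version_from_header(None).is_none());
}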

View File

@@ -238,6 +238,7 @@ macro_rules! make_config {
// Besides Pass, only String types will be masked via _privacy_mask.
const PRIVACY_CONFIG: &[&str] = &[
"allowed_iframe_ancestors",
"allowed_connect_src",
"database_url",
"domain_origin",
"domain_path",
@@ -248,6 +249,7 @@ macro_rules! make_config {
"smtp_from",
"smtp_host",
"smtp_username",
"_smtp_img_src",
];
let cfg = {
@@ -497,11 +499,11 @@ make_config! {
/// Password iterations |> Number of server-side passwords hashing iterations for the password hash.
/// The default for new users. If changed, it will be updated during login for existing users.
password_iterations: i32, true, def, 600_000;
/// Allow password hints |> Controls whether users can set password hints. This setting applies globally to all users.
/// Allow password hints |> Controls whether users can set or show password hints. This setting applies globally to all users.
password_hints_allowed: bool, true, def, true;
/// Show password hint |> Controls whether a password hint should be shown directly in the web page
/// if SMTP service is not configured. Not recommended for publicly-accessible instances as this
/// provides unauthenticated access to potentially sensitive data.
/// Show password hint (Know the risks!) |> Controls whether a password hint should be shown directly in the web page
/// if SMTP service is not configured and password hints are allowed. Not recommended for publicly-accessible instances
/// because this provides unauthenticated access to potentially sensitive data.
show_password_hint: bool, true, def, false;
/// Admin token/Argon2 PHC |> The plain text token or Argon2 PHC string used to authenticate in this very same page. Changing it here will not deauthorize the current session!
@@ -609,6 +611,9 @@ make_config! {
/// Allowed iframe ancestors (Know the risks!) |> Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
allowed_iframe_ancestors: String, true, def, String::new();
/// Allowed connect-src (Know the risks!) |> Allows other domains to URLs which can be loaded using script interfaces like the Forwarded email alias feature
allowed_connect_src: String, true, def, String::new();
/// Seconds between login requests |> Number of seconds, on average, between login and 2FA requests from the same IP address before rate limiting kicks in
login_ratelimit_seconds: u64, false, def, 60;
/// Max burst size for login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `login_ratelimit_seconds`. Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2
@@ -760,6 +765,13 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
);
}
let connect_src = cfg.allowed_connect_src.to_lowercase();
for url in connect_src.split_whitespace() {
if !url.starts_with("https://") || Url::parse(url).is_err() {
err!("ALLOWED_CONNECT_SRC variable contains one or more invalid URLs. Only FQDN's starting with https are allowed");
}
}
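The new ALLOWED_CONNECT_SRC validation accepts only whitespace-separated https:// URLs that also parse as URLs. A standalone sketch of the same check with the url crate; the example entries are placeholders:

use url::Url;

fn validate_connect_src(allowed_connect_src: &str) -> Result<(), String> {
    let connect_src = allowed_connect_src.to_lowercase();
    for url in connect_src.split_whitespace() {
        if !url.starts_with("https://") || Url::parse(url).is_err() {
            return Err(format!("invalid connect-src entry: {url}"));
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_connect_src("https://alias.example.com https://forward.example.org").is_ok());
    assert!(validate_connect_src("http://insecure.example.com").is_err());
}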
let whitelist = &cfg.signups_domains_whitelist;
if !whitelist.is_empty() && whitelist.split(',').any(|d| d.trim().is_empty()) {
err!("`SIGNUPS_DOMAINS_WHITELIST` contains empty tokens");
@@ -811,8 +823,15 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
// TODO: deal with deprecated flags so they can be removed from this list, cf. #4263
const KNOWN_FLAGS: &[&str] =
&["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];
const KNOWN_FLAGS: &[&str] = &[
"autofill-overlay",
"autofill-v2",
"browser-fileless-import",
"extension-refresh",
"fido2-vault-credentials",
"ssh-key-vault-item",
"ssh-agent",
];
let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
if !invalid_flags.is_empty() {
@@ -1269,11 +1288,16 @@ impl Config {
let hb = load_templates(CONFIG.templates_folder());
hb.render(name, data).map_err(Into::into)
} else {
let hb = &CONFIG.inner.read().unwrap().templates;
let hb = &self.inner.read().unwrap().templates;
hb.render(name, data).map_err(Into::into)
}
}
pub fn render_fallback_template<T: serde::ser::Serialize>(&self, name: &str, data: &T) -> Result<String, Error> {
let hb = &self.inner.read().unwrap().templates;
hb.render(&format!("fallback_{name}"), data).map_err(Into::into)
}
pub fn set_rocket_shutdown_handle(&self, handle: rocket::Shutdown) {
self.inner.write().unwrap().rocket_shutdown_handle = Some(handle);
}
@@ -1312,6 +1336,11 @@ where
reg!($name);
reg!(concat!($name, $ext));
}};
(@withfallback $name:expr) => {{
let template = include_str!(concat!("static/templates/", $name, ".hbs"));
hb.register_template_string($name, template).unwrap();
hb.register_template_string(concat!("fallback_", $name), template).unwrap();
}};
}
// First register default templates here
@@ -1355,6 +1384,9 @@ where
reg!("404");
reg!(@withfallback "scss/vaultwarden.scss");
reg!("scss/user.vaultwarden.scss");
// And then load user templates to overwrite the defaults
// Use .hbs extension for the files
// Templates get registered with their relative name

View File

@@ -373,24 +373,18 @@ pub async fn backup_database(conn: &mut DbConn) -> Result<String, Error> {
err!("PostgreSQL and MySQL/MariaDB do not support this backup feature");
}
sqlite {
backup_sqlite_database(conn)
let db_url = CONFIG.database_url();
let db_path = std::path::Path::new(&db_url).parent().unwrap();
let backup_file = db_path
.join(format!("db_{}.sqlite3", chrono::Utc::now().format("%Y%m%d_%H%M%S")))
.to_string_lossy()
.into_owned();
diesel::sql_query(format!("VACUUM INTO '{backup_file}'")).execute(conn)?;
Ok(backup_file)
}
}
}
#[cfg(sqlite)]
pub fn backup_sqlite_database(conn: &mut diesel::sqlite::SqliteConnection) -> Result<String, Error> {
use diesel::RunQueryDsl;
let db_url = CONFIG.database_url();
let db_path = std::path::Path::new(&db_url).parent().unwrap();
let backup_file = db_path
.join(format!("db_{}.sqlite3", chrono::Utc::now().format("%Y%m%d_%H%M%S")))
.to_string_lossy()
.into_owned();
diesel::sql_query(format!("VACUUM INTO '{backup_file}'")).execute(conn)?;
Ok(backup_file)
}
/// Get the SQL Server version
pub async fn get_sql_server_version(conn: &mut DbConn) -> String {
db_run! {@raw conn:

View File

@@ -111,6 +111,17 @@ impl AuthRequest {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
auth_requests::table
.filter(auth_requests::uuid.eq(uuid))
.filter(auth_requests::user_uuid.eq(user_uuid))
.first::<AuthRequestDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
auth_requests::table

View File

@@ -1,6 +1,6 @@
use crate::util::LowerCase;
use crate::CONFIG;
use chrono::{DateTime, NaiveDateTime, TimeDelta, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use serde_json::Value;
use super::{
@@ -30,7 +30,8 @@ db_object! {
Login = 1,
SecureNote = 2,
Card = 3,
Identity = 4
Identity = 4,
SshKey = 5
*/
pub atype: i32,
pub name: String,
@@ -45,10 +46,9 @@ db_object! {
}
}
#[allow(dead_code)]
pub enum RepromptType {
None = 0,
Password = 1, // not currently used in server
Password = 1,
}
/// Local methods
@@ -176,7 +176,27 @@ impl Cipher {
.inspect_err(|e| warn!("Error parsing fields {e:?} for {}", self.uuid))
.ok()
})
.map(|d| d.into_iter().map(|d| d.data).collect())
.map(|d| {
d.into_iter()
.map(|mut f| {
// Check if the `type` key is a number; string values break some clients
// If it is a string, try to convert it to a number, falling back to `1`
// If it is neither a number nor a string, fall back to `1` as well
// The fallback type `1` is the hidden type, which prevents accidental data disclosure
match f.data.get("type") {
Some(t) if t.is_number() => {}
Some(t) if t.is_string() => {
let type_num = &t.as_str().unwrap_or("1").parse::<u8>().unwrap_or(1);
f.data["type"] = json!(type_num);
}
_ => {
f.data["type"] = json!(1);
}
}
f.data
})
.collect()
})
.unwrap_or_default();
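A self-contained sketch of the normalization applied above (serde_json only): numeric `type` values pass through, numeric strings are converted, and anything else falls back to the hidden type `1`.

```rust
use serde_json::{json, Value};

fn normalize_field_type(field: &mut Value) {
    match field.get("type") {
        Some(t) if t.is_number() => {}
        Some(t) if t.is_string() => {
            let type_num = t.as_str().unwrap_or("1").parse::<u8>().unwrap_or(1);
            field["type"] = json!(type_num);
        }
        _ => field["type"] = json!(1),
    }
}

fn main() {
    let mut text_field = json!({ "name": "pin", "type": "0" });
    normalize_field_type(&mut text_field);
    assert_eq!(text_field["type"], json!(0)); // numeric string becomes a number

    let mut odd_field = json!({ "name": "weird", "type": true });
    normalize_field_type(&mut odd_field);
    assert_eq!(odd_field["type"], json!(1)); // neither number nor string -> hidden type
}
```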
let password_history_json: Vec<_> = self
@@ -196,11 +216,13 @@ impl Cipher {
Some(p) if p.is_string() => Some(d.data),
_ => None,
})
.map(|d| match d.get("lastUsedDate").and_then(|l| l.as_str()) {
Some(l) if DateTime::parse_from_rfc3339(l).is_ok() => d,
.map(|mut d| match d.get("lastUsedDate").and_then(|l| l.as_str()) {
Some(l) => {
d["lastUsedDate"] = json!(crate::util::validate_and_format_date(l));
d
}
_ => {
let mut d = d;
d["lastUsedDate"] = json!("1970-01-01T00:00:00.000Z");
d["lastUsedDate"] = json!("1970-01-01T00:00:00.000000Z");
d
}
})
@@ -244,7 +266,7 @@ impl Cipher {
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// data_json should always contain the following keys with every atype
data_json["fields"] = Value::Array(fields_json.clone());
data_json["fields"] = json!(fields_json);
data_json["name"] = json!(self.name);
data_json["notes"] = json!(self.notes);
data_json["passwordHistory"] = Value::Array(password_history_json.clone());
@@ -273,7 +295,7 @@ impl Cipher {
"creationDate": format_date(&self.created_at),
"revisionDate": format_date(&self.updated_at),
"deletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"reprompt": self.reprompt.unwrap_or(RepromptType::None as i32),
"reprompt": self.reprompt.filter(|r| *r == RepromptType::None as i32 || *r == RepromptType::Password as i32).unwrap_or(RepromptType::None as i32),
"organizationId": self.organization_uuid,
"key": self.key,
"attachments": attachments_json,
@@ -297,6 +319,7 @@ impl Cipher {
"secureNote": null,
"card": null,
"identity": null,
"sshKey": null,
});
// These values are only needed for user/default syncs
@@ -325,6 +348,7 @@ impl Cipher {
2 => "secureNote",
3 => "card",
4 => "identity",
5 => "sshKey",
_ => panic!("Wrong type"),
};
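The `reprompt` handling a few hunks above only passes through the two valid `RepromptType` values; anything else is coerced back to `None`. A tiny sketch of that filter:

```rust
// 0 = RepromptType::None, 1 = RepromptType::Password; other stored values are ignored.
fn sanitize_reprompt(reprompt: Option<i32>) -> i32 {
    reprompt.filter(|r| *r == 0 || *r == 1).unwrap_or(0)
}

fn main() {
    assert_eq!(sanitize_reprompt(Some(1)), 1);
    assert_eq!(sanitize_reprompt(Some(4)), 0); // invalid value falls back to None
    assert_eq!(sanitize_reprompt(None), 0);
}
```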

View File

@@ -81,8 +81,8 @@ impl Collection {
let (read_only, hide_passwords, can_manage) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.user_organizations.get(&self.org_uuid) {
// Only for Manager types Bitwarden returns true for the can_manage option
// Owners and Admins always have false, but they can manage all collections anyway
Some(uo) if uo.has_full_access() => (false, false, uo.atype == UserOrgType::Manager),
// Owners and Admins always have true
Some(uo) if uo.has_full_access() => (false, false, uo.atype >= UserOrgType::Manager),
Some(uo) => {
// Only let a manager manage collections when they have full read/write access
let is_manager = uo.atype == UserOrgType::Manager;
@@ -98,7 +98,7 @@ impl Collection {
}
} else {
match UserOrganization::find_confirmed_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
Some(ou) if ou.has_full_access() => (false, false, ou.atype == UserOrgType::Manager),
Some(ou) if ou.has_full_access() => (false, false, ou.atype >= UserOrgType::Manager),
Some(ou) => {
let is_manager = ou.atype == UserOrgType::Manager;
let read_only = !self.is_writable_by_user(user_uuid, conn).await;

View File

@@ -120,10 +120,11 @@ impl Folder {
Ok(())
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
folders::table
.filter(folders::uuid.eq(uuid))
.filter(folders::user_uuid.eq(user_uuid))
.first::<FolderDb>(conn)
.ok()
.from_db()

View File

@@ -1,4 +1,4 @@
use super::{User, UserOrgType, UserOrganization};
use super::{User, UserOrganization};
use crate::api::EmptyResult;
use crate::db::DbConn;
use crate::error::MapResult;
@@ -73,7 +73,7 @@ impl Group {
})
}
pub async fn to_json_details(&self, user_org_type: &i32, conn: &mut DbConn) -> Value {
pub async fn to_json_details(&self, conn: &mut DbConn) -> Value {
let collections_groups: Vec<Value> = CollectionGroup::find_by_group(&self.uuid, conn)
.await
.iter()
@@ -82,7 +82,7 @@ impl Group {
"id": entry.collections_uuid,
"readOnly": entry.read_only,
"hidePasswords": entry.hide_passwords,
"manage": *user_org_type == UserOrgType::Manager && !entry.read_only && !entry.hide_passwords
"manage": false
})
})
.collect();
@@ -191,10 +191,11 @@ impl Group {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
groups::table
.filter(groups::uuid.eq(uuid))
.filter(groups::organizations_uuid.eq(org_uuid))
.first::<GroupDb>(conn)
.ok()
.from_db()

View File

@@ -18,7 +18,7 @@ mod user;
pub use self::attachment::Attachment;
pub use self::auth_request::AuthRequest;
pub use self::cipher::Cipher;
pub use self::cipher::{Cipher, RepromptType};
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::{Device, DeviceType};
pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};

View File

@@ -142,16 +142,6 @@ impl OrgPolicy {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
.first::<OrgPolicyDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table

View File

@@ -161,7 +161,6 @@ impl Organization {
"identifier": null, // not supported by us
"name": self.name,
"seats": null,
"maxAutoscaleSeats": null,
"maxCollections": null,
"maxStorageGb": i16::MAX, // The value doesn't matter, we don't check server-side
"use2fa": true,
@@ -233,6 +232,14 @@ impl UserOrganization {
false
}
/// Return the status of the user in an unrevoked state
pub fn get_unrevoked_status(&self) -> i32 {
if self.status <= UserOrgStatus::Revoked as i32 {
return self.status + ACTIVATE_REVOKE_DIFF;
}
self.status
}
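A hedged sketch of what `get_unrevoked_status` computes; the constant values below are assumptions for illustration only (the real `ACTIVATE_REVOKE_DIFF` and `UserOrgStatus` values are defined elsewhere in this file):

```rust
// Assumed values, for illustration only.
const ACTIVATE_REVOKE_DIFF: i32 = 128;
const REVOKED: i32 = -1; // UserOrgStatus::Revoked

fn get_unrevoked_status(status: i32) -> i32 {
    if status <= REVOKED {
        return status + ACTIVATE_REVOKE_DIFF;
    }
    status
}

fn main() {
    let confirmed = 2; // UserOrgStatus::Confirmed (assumed)
    let revoked_confirmed = confirmed - ACTIVATE_REVOKE_DIFF;
    // A revoked member still reports the status it would have once re-activated.
    assert_eq!(get_unrevoked_status(revoked_confirmed), confirmed);
    assert_eq!(get_unrevoked_status(confirmed), confirmed);
}
```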
pub fn set_external_id(&mut self, external_id: Option<String>) -> bool {
//Check if external id is empty. We don't want to have
//empty strings in the database
@@ -374,7 +381,6 @@ impl UserOrganization {
"identifier": null, // Not supported
"name": org.name,
"seats": null,
"maxAutoscaleSeats": null,
"maxCollections": null,
"usersGetPremium": true,
"use2fa": true,
@@ -411,7 +417,7 @@ impl UserOrganization {
"familySponsorshipValidUntil": null,
"familySponsorshipToDelete": null,
"accessSecretsManager": false,
"limitCollectionCreationDeletion": true,
"limitCollectionCreationDeletion": false, // This should be set to true only when we can handle roles like createNewCollections
"allowAdminAccessToAllCollectionItems": true,
"flexibleCollections": false,
@@ -456,7 +462,13 @@ impl UserOrganization {
Vec::with_capacity(0)
};
let collections: Vec<Value> = if include_collections {
// Check if a user is in a group which has access to all collections
// If that is the case, we should not return individual collections!
let full_access_group =
CONFIG.org_groups_enabled() && Group::is_in_full_access_group(&self.user_uuid, &self.org_uuid, conn).await;
// If collections are to be included, only include them if the user does not have full access, either via a group or via rights assigned to the user itself
let collections: Vec<Value> = if include_collections && !(full_access_group || self.has_full_access()) {
// Get all collections for the user here already to prevent more queries
let cu: HashMap<String, CollectionUser> =
CollectionUser::find_by_organization_and_user_uuid(&self.org_uuid, &self.user_uuid, conn)
@@ -477,7 +489,7 @@ impl UserOrganization {
.into_iter()
.filter_map(|c| {
let (read_only, hide_passwords, can_manage) = if self.has_full_access() {
(false, false, self.atype == UserOrgType::Manager)
(false, false, self.atype >= UserOrgType::Manager)
} else if let Some(cu) = cu.get(&c.uuid) {
(
cu.read_only,
@@ -526,7 +538,7 @@ impl UserOrganization {
json!({
"id": self.uuid,
"userId": self.user_uuid,
"name": user.name,
"name": if self.get_unrevoked_status() >= UserOrgStatus::Accepted as i32 { Some(user.name) } else { None },
"email": user.email,
"externalId": self.external_id,
"avatarColor": user.avatar_color,

View File

@@ -268,9 +268,8 @@ impl Send {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
let uuid_vec = match BASE64URL_NOPAD.decode(access_id.as_bytes()) {
Ok(v) => v,
Err(_) => return None,
let Ok(uuid_vec) = BASE64URL_NOPAD.decode(access_id.as_bytes()) else {
return None;
};
let uuid = match Uuid::from_slice(&uuid_vec) {
@@ -291,6 +290,17 @@ impl Send {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
.filter(sends::user_uuid.eq(user_uuid))
.first::<SendDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table

View File

@@ -1,3 +1,4 @@
use crate::util::{format_date, get_uuid, retry};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use serde_json::Value;
@@ -90,7 +91,7 @@ impl User {
let email = email.to_lowercase();
Self {
uuid: crate::util::get_uuid(),
uuid: get_uuid(),
enabled: true,
created_at: now,
updated_at: now,
@@ -107,7 +108,7 @@ impl User {
salt: crypto::get_random_bytes::<64>().to_vec(),
password_iterations: CONFIG.password_iterations(),
security_stamp: crate::util::get_uuid(),
security_stamp: get_uuid(),
stamp_exception: None,
password_hint: None,
@@ -188,7 +189,7 @@ impl User {
}
pub fn reset_security_stamp(&mut self) {
self.security_stamp = crate::util::get_uuid();
self.security_stamp = get_uuid();
}
/// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
@@ -259,6 +260,7 @@ impl User {
"forcePasswordReset": false,
"avatarColor": self.avatar_color,
"usesKeyConnector": false,
"creationDate": format_date(&self.created_at),
"object": "profile",
})
}
@@ -340,7 +342,7 @@ impl User {
let updated_at = Utc::now().naive_utc();
db_run! {conn: {
crate::util::retry(|| {
retry(|| {
diesel::update(users::table)
.set(users::updated_at.eq(updated_at))
.execute(conn)
@@ -357,7 +359,7 @@ impl User {
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
retry(|| {
diesel::update(users::table.filter(users::uuid.eq(uuid)))
.set(users::updated_at.eq(date))
.execute(conn)

View File

@@ -96,7 +96,31 @@ fn smtp_transport() -> AsyncSmtpTransport<Tokio1Executor> {
smtp_client.build()
}
// Sanitize the string values by stripping all HTML tags, to prevent XSS and HTML injection
fn sanitize_data(data: &mut serde_json::Value) {
use regex::Regex;
use std::sync::LazyLock;
static RE: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"<[^>]+>").unwrap());
match data {
serde_json::Value::String(s) => *s = RE.replace_all(s, "").to_string(),
serde_json::Value::Object(obj) => {
for d in obj.values_mut() {
sanitize_data(d);
}
}
serde_json::Value::Array(arr) => {
for d in arr.iter_mut() {
sanitize_data(d);
}
}
_ => {}
}
}
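A runnable sketch mirroring the sanitizer above (regex + serde_json), showing how nested string values are stripped of HTML tags:

```rust
use regex::Regex;
use serde_json::{json, Value};
use std::sync::LazyLock;

static RE: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"<[^>]+>").unwrap());

// Recursively strips anything that looks like an HTML tag from every string value.
fn sanitize_data(data: &mut Value) {
    match data {
        Value::String(s) => *s = RE.replace_all(s, "").to_string(),
        Value::Object(obj) => obj.values_mut().for_each(sanitize_data),
        Value::Array(arr) => arr.iter_mut().for_each(sanitize_data),
        _ => {}
    }
}

fn main() {
    let mut data = json!({ "org_name": "<script>alert(1)</script>Acme", "count": 3 });
    sanitize_data(&mut data);
    assert_eq!(data["org_name"], json!("alert(1)Acme"));
    assert_eq!(data["count"], json!(3)); // non-string values are left untouched
}
```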
fn get_text(template_name: &'static str, data: serde_json::Value) -> Result<(String, String, String), Error> {
let mut data = data;
sanitize_data(&mut data);
let (subject_html, body_html) = get_template(&format!("{template_name}.html"), &data)?;
let (_subject_text, body_text) = get_template(template_name, &data)?;
Ok((subject_html, body_html, body_text))
@@ -116,6 +140,10 @@ fn get_template(template_name: &str, data: &serde_json::Value) -> Result<(String
None => err!("Template doesn't contain body"),
};
if text_split.next().is_some() {
err!("Template contains more than one body");
}
Ok((subject, body))
}
@@ -258,17 +286,15 @@ pub async fn send_invite(
}
}
let query_string = match query.query() {
None => err!(format!("Failed to build invite URL query parameters")),
Some(query) => query,
let Some(query_string) = query.query() else {
err!("Failed to build invite URL query parameters")
};
// `url.Url` would place the anchor `#` after the query parameters
let url = format!("{}/#/accept-organization/?{}", CONFIG.domain(), query_string);
let (subject, body_html, body_text) = get_text(
"email/send_org_invite",
json!({
"url": url,
// `url.Url` would place the anchor `#` after the query parameters
"url": format!("{}/#/accept-organization/?{}", CONFIG.domain(), query_string),
"img_src": CONFIG._smtp_img_src(),
"org_name": org_name,
}),
@@ -292,17 +318,28 @@ pub async fn send_emergency_access_invite(
String::from(grantor_email),
);
let invite_token = encode_jwt(&claims);
// Build the query here to ensure proper escaping
let mut query = url::Url::parse("https://query.builder").unwrap();
{
let mut query_params = query.query_pairs_mut();
query_params
.append_pair("id", emer_id)
.append_pair("name", grantor_name)
.append_pair("email", address)
.append_pair("token", &encode_jwt(&claims));
}
let Some(query_string) = query.query() else {
err!("Failed to build emergency invite URL query parameters")
};
let (subject, body_html, body_text) = get_text(
"email/send_emergency_access_invite",
json!({
"url": CONFIG.domain(),
// `url.Url` would place the anchor `#` after the query parameters
"url": format!("{}/#/accept-emergency/?{query_string}", CONFIG.domain()),
"img_src": CONFIG._smtp_img_src(),
"emer_id": emer_id,
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
"grantor_name": grantor_name,
"token": invite_token,
}),
)?;
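A small sketch of the query-building trick used in both invite mails, assuming the url crate (`vault.example.com` is a placeholder domain): `Url::query_pairs_mut` takes care of escaping, and the fragment is appended manually afterwards because `url::Url` would otherwise place the `#` after the query string.

```rust
fn main() {
    let mut query = url::Url::parse("https://query.builder").unwrap();
    query
        .query_pairs_mut()
        .append_pair("name", "Grantor & Co")
        .append_pair("email", "user@example.com");
    let query_string = query.query().expect("query parameters were just appended");

    // Append the SPA fragment ourselves so it ends up before the query string.
    let url = format!("https://vault.example.com/#/accept-emergency/?{query_string}");
    assert_eq!(
        url,
        "https://vault.example.com/#/accept-emergency/?name=Grantor+%26+Co&email=user%40example.com"
    );
}
```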

View File

@@ -67,7 +67,7 @@ pub use util::is_running_in_container;
#[rocket::main]
async fn main() -> Result<(), Error> {
parse_args();
parse_args().await;
launch_info();
let level = init_logging()?;
@@ -115,7 +115,7 @@ PRESETS: m= t= p=
pub const VERSION: Option<&str> = option_env!("VW_VERSION");
fn parse_args() {
async fn parse_args() {
let mut pargs = pico_args::Arguments::from_env();
let version = VERSION.unwrap_or("(Version info from Git not present)");
@@ -186,7 +186,7 @@ fn parse_args() {
exit(1);
}
} else if command == "backup" {
match backup_sqlite() {
match backup_sqlite().await {
Ok(f) => {
println!("Backup to '{f}' was successful");
exit(0);
@@ -201,25 +201,20 @@ fn parse_args() {
}
}
fn backup_sqlite() -> Result<String, Error> {
#[cfg(sqlite)]
{
use crate::db::{backup_sqlite_database, DbConnType};
if DbConnType::from_url(&CONFIG.database_url()).map(|t| t == DbConnType::sqlite).unwrap_or(false) {
use diesel::Connection;
let url = CONFIG.database_url();
async fn backup_sqlite() -> Result<String, Error> {
use crate::db::{backup_database, DbConnType};
if DbConnType::from_url(&CONFIG.database_url()).map(|t| t == DbConnType::sqlite).unwrap_or(false) {
// Establish a connection to the sqlite database
let mut conn = db::DbPool::from_config()
.expect("SQLite database connection failed")
.get()
.await
.expect("Unable to get SQLite db pool");
// Establish a connection to the sqlite database
let mut conn = diesel::sqlite::SqliteConnection::establish(&url)?;
let backup_file = backup_sqlite_database(&mut conn)?;
Ok(backup_file)
} else {
err_silent!("The database type is not SQLite. Backups only works for SQLite databases")
}
}
#[cfg(not(sqlite))]
{
err_silent!("The 'sqlite' feature is not enabled. Backups only works for SQLite databases")
let backup_file = backup_database(&mut conn).await?;
Ok(backup_file)
} else {
err_silent!("The database type is not SQLite. Backups only works for SQLite databases")
}
}
@@ -516,10 +511,10 @@ async fn container_data_folder_is_persistent(data_folder: &str) -> bool {
format!(" /{data_folder} ")
};
let mut lines = BufReader::new(mountinfo).lines();
let re = regex::Regex::new(r"/volumes/[a-z0-9]{64}/_data /").unwrap();
while let Some(line) = lines.next_line().await.unwrap_or_default() {
// Only execute a regex check if we find the base match
if line.contains(&data_folder_match) {
let re = regex::Regex::new(r"/volumes/[a-z0-9]{64}/_data /").unwrap();
if re.is_match(&line) {
return false;
}
@@ -610,7 +605,7 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error>
// If we need more signals to act upon, we might want to use select! here.
// With only one item to listen for this is enough.
let _ = signal_user1.recv().await;
match backup_sqlite() {
match backup_sqlite().await {
Ok(f) => info!("Backup to '{f}' was successful"),
Err(e) => error!("Backup failed. {e:?}"),
}

View File

@@ -38,8 +38,8 @@ img {
max-width: 130px;
}
#users-table .vw-actions, #orgs-table .vw-actions {
min-width: 130px;
max-width: 130px;
min-width: 135px;
max-width: 140px;
}
#users-table .vw-org-cell {
max-height: 120px;

View File

@@ -7,6 +7,8 @@ var timeCheck = false;
var ntpTimeCheck = false;
var domainCheck = false;
var httpsCheck = false;
var websocketCheck = false;
var httpResponseCheck = false;
// ================================
// Date & Time Check
@@ -76,18 +78,15 @@ async function generateSupportString(event, dj) {
event.preventDefault();
event.stopPropagation();
let supportString = "### Your environment (Generated via diagnostics page)\n";
let supportString = "### Your environment (Generated via diagnostics page)\n\n";
supportString += `* Vaultwarden version: v${dj.current_release}\n`;
supportString += `* Web-vault version: v${dj.web_vault_version}\n`;
supportString += `* OS/Arch: ${dj.host_os}/${dj.host_arch}\n`;
supportString += `* Running within a container: ${dj.running_within_container} (Base: ${dj.container_base_image})\n`;
supportString += "* Environment settings overridden: ";
if (dj.overrides != "") {
supportString += "true\n";
} else {
supportString += "false\n";
}
supportString += `* Database type: ${dj.db_type}\n`;
supportString += `* Database version: ${dj.db_version}\n`;
supportString += `* Environment settings overridden!: ${dj.overrides !== ""}\n`;
supportString += `* Uses a reverse proxy: ${dj.ip_header_exists}\n`;
if (dj.ip_header_exists) {
supportString += `* IP Header check: ${dj.ip_header_match} (${dj.ip_header_name})\n`;
@@ -99,11 +98,12 @@ async function generateSupportString(event, dj) {
supportString += `* Server/NTP Time Check: ${ntpTimeCheck}\n`;
supportString += `* Domain Configuration Check: ${domainCheck}\n`;
supportString += `* HTTPS Check: ${httpsCheck}\n`;
supportString += `* Database type: ${dj.db_type}\n`;
supportString += `* Database version: ${dj.db_version}\n`;
supportString += "* Clients used: \n";
supportString += "* Reverse proxy and version: \n";
supportString += "* Other relevant information: \n";
if (dj.enable_websocket) {
supportString += `* Websocket Check: ${websocketCheck}\n`;
} else {
supportString += "* Websocket Check: disabled\n";
}
supportString += `* HTTP Response Checks: ${httpResponseCheck}\n`;
const jsonResponse = await fetch(`${BASE_URL}/admin/diagnostics/config`, {
"headers": { "Accept": "application/json" }
@@ -113,10 +113,30 @@ async function generateSupportString(event, dj) {
throw new Error(jsonResponse);
}
const configJson = await jsonResponse.json();
supportString += "\n### Config (Generated via diagnostics page)\n<details><summary>Show Running Config</summary>\n";
supportString += `\n**Environment settings which are overridden:** ${dj.overrides}\n`;
supportString += "\n\n```json\n" + JSON.stringify(configJson, undefined, 2) + "\n```\n</details>\n";
// Start Config and Details section within a details block which is collapsed by default
supportString += "\n### Config & Details (Generated via diagnostics page)\n\n";
supportString += "<details><summary>Show Config & Details</summary>\n";
// Add overrides if they exist
if (dj.overrides != "") {
supportString += `\n**Environment settings which are overridden:** ${dj.overrides}\n`;
}
// Add HTTP response check messages if they exist
if (httpResponseCheck === false) {
supportString += "\n**Failed HTTP Checks:**\n";
// We use `innerText` here since that will convert <br> into new-lines
supportString += "\n```yaml\n" + document.getElementById("http-response-errors").innerText.trim() + "\n```\n";
}
// Add the current config in json form
supportString += "\n**Config:**\n";
supportString += "\n```json\n" + JSON.stringify(configJson, undefined, 2) + "\n```\n";
supportString += "\n</details>\n";
// Add the support string to the textbox so it can be viewed and copied
document.getElementById("support-string").textContent = supportString;
document.getElementById("support-string").classList.remove("d-none");
document.getElementById("copy-support").classList.remove("d-none");
@@ -199,6 +219,162 @@ function checkDns(dns_resolved) {
}
}
async function fetchCheckUrl(url) {
try {
const response = await fetch(url);
return { headers: response.headers, status: response.status, text: await response.text() };
} catch (error) {
console.error(`Error fetching ${url}: ${error}`);
return { error };
}
}
function checkSecurityHeaders(headers, omit) {
let securityHeaders = {
"x-frame-options": ["SAMEORIGIN"],
"x-content-type-options": ["nosniff"],
"referrer-policy": ["same-origin"],
"x-xss-protection": ["0"],
"x-robots-tag": ["noindex", "nofollow"],
"content-security-policy": [
"default-src 'self'",
"base-uri 'self'",
"form-action 'self'",
"object-src 'self' blob:",
"script-src 'self' 'wasm-unsafe-eval'",
"style-src 'self' 'unsafe-inline'",
"child-src 'self' https://*.duosecurity.com https://*.duofederal.com",
"frame-src 'self' https://*.duosecurity.com https://*.duofederal.com",
"frame-ancestors 'self' chrome-extension://nngceckbapebfimnlniiiahkandclblb chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh moz-extension://*",
"img-src 'self' data: https://haveibeenpwned.com",
"connect-src 'self' https://api.pwnedpasswords.com https://api.2fa.directory https://app.simplelogin.io/api/ https://app.addy.io/api/ https://api.fastmail.com/ https://api.forwardemail.net",
]
};
let messages = [];
for (let header in securityHeaders) {
// Skip some headers for specific endpoints if needed
if (typeof omit === "object" && omit.includes(header) === true) {
continue;
}
// If the header exists, check if its contents match what we expect
let headerValue = headers.get(header);
if (headerValue !== null) {
securityHeaders[header].forEach((expectedValue) => {
if (headerValue.indexOf(expectedValue) === -1) {
messages.push(`'${header}' does not contain '${expectedValue}'`);
}
});
} else {
messages.push(`'${header}' is missing!`);
}
}
return messages;
}
async function checkHttpResponse() {
const [apiConfig, webauthnConnector, notFound, notFoundApi, badRequest, unauthorized, forbidden] = await Promise.all([
fetchCheckUrl(`${BASE_URL}/api/config`),
fetchCheckUrl(`${BASE_URL}/webauthn-connector.html`),
fetchCheckUrl(`${BASE_URL}/admin/does-not-exist`),
fetchCheckUrl(`${BASE_URL}/admin/diagnostics/http?code=404`),
fetchCheckUrl(`${BASE_URL}/admin/diagnostics/http?code=400`),
fetchCheckUrl(`${BASE_URL}/admin/diagnostics/http?code=401`),
fetchCheckUrl(`${BASE_URL}/admin/diagnostics/http?code=403`),
]);
const respErrorElm = document.getElementById("http-response-errors");
// Check and validate the default API header responses
let apiErrors = checkSecurityHeaders(apiConfig.headers);
if (apiErrors.length >= 1) {
respErrorElm.innerHTML += "<b>API calls:</b><br>";
apiErrors.forEach((errMsg) => {
respErrorElm.innerHTML += `<b>Header:</b> ${errMsg}<br>`;
});
}
// Check the special `-connector.html` responses; these should have some headers omitted.
const omitConnectorHeaders = ["x-frame-options", "content-security-policy"];
let connectorErrors = checkSecurityHeaders(webauthnConnector.headers, omitConnectorHeaders);
omitConnectorHeaders.forEach((header) => {
if (webauthnConnector.headers.get(header) !== null) {
connectorErrors.push(`'${header}' is present while it should not be`);
}
});
if (connectorErrors.length >= 1) {
respErrorElm.innerHTML += "<b>2FA Connector calls:</b><br>";
connectorErrors.forEach((errMsg) => {
respErrorElm.innerHTML += `<b>Header:</b> ${errMsg}<br>`;
});
}
// Check specific error code responses if they are not re-written by a reverse proxy
let responseErrors = [];
if (notFound.status !== 404 || notFound.text.indexOf("return to the web-vault") === -1) {
responseErrors.push("404 (Not Found) HTML is invalid");
}
if (notFoundApi.status !== 404 || notFoundApi.text.indexOf("\"message\":\"Testing error 404 response\",") === -1) {
responseErrors.push("404 (Not Found) JSON is invalid");
}
if (badRequest.status !== 400 || badRequest.text.indexOf("\"message\":\"Testing error 400 response\",") === -1) {
responseErrors.push("400 (Bad Request) is invalid");
}
if (unauthorized.status !== 401 || unauthorized.text.indexOf("\"message\":\"Testing error 401 response\",") === -1) {
responseErrors.push("401 (Unauthorized) is invalid");
}
if (forbidden.status !== 403 || forbidden.text.indexOf("\"message\":\"Testing error 403 response\",") === -1) {
responseErrors.push("403 (Forbidden) is invalid");
}
if (responseErrors.length >= 1) {
respErrorElm.innerHTML += "<b>HTTP error responses:</b><br>";
responseErrors.forEach((errMsg) => {
respErrorElm.innerHTML += `<b>Response to:</b> ${errMsg}<br>`;
});
}
if (responseErrors.length >= 1 || connectorErrors.length >= 1 || apiErrors.length >= 1) {
document.getElementById("http-response-warning").classList.remove("d-none");
} else {
httpResponseCheck = true;
document.getElementById("http-response-success").classList.remove("d-none");
}
}
async function fetchWsUrl(wsUrl) {
return new Promise((resolve, reject) => {
try {
const ws = new WebSocket(wsUrl);
ws.onopen = () => {
ws.close();
resolve(true);
};
ws.onerror = () => {
reject(false);
};
} catch (_) {
reject(false);
}
});
}
async function checkWebsocketConnection() {
// Test Websocket connections via the anonymous (login with device) connection
const isConnected = await fetchWsUrl(`${BASE_URL}/notifications/anonymous-hub?token=admin-diagnostics`).catch(() => false);
if (isConnected) {
websocketCheck = true;
document.getElementById("websocket-success").classList.remove("d-none");
} else {
document.getElementById("websocket-error").classList.remove("d-none");
}
}
function init(dj) {
// Time check
document.getElementById("time-browser-string").textContent = browserUTC;
@@ -225,6 +401,12 @@ function init(dj) {
// DNS Check
checkDns(dj.dns_resolved);
checkHttpResponse();
if (dj.enable_websocket) {
checkWebsocketConnection();
}
}
// onLoad events

View File

@@ -4,10 +4,10 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-2.0.8
* https://datatables.net/download/#bs5/dt-2.1.8
*
* Included libraries:
* DataTables 2.0.8
* DataTables 2.1.8
*/
@charset "UTF-8";
@@ -45,15 +45,21 @@ table.dataTable tr.dt-hasChild td.dt-control:before {
}
html.dark table.dataTable td.dt-control:before,
:root[data-bs-theme=dark] table.dataTable td.dt-control:before {
:root[data-bs-theme=dark] table.dataTable td.dt-control:before,
:root[data-theme=dark] table.dataTable td.dt-control:before {
border-left-color: rgba(255, 255, 255, 0.5);
}
html.dark table.dataTable tr.dt-hasChild td.dt-control:before,
:root[data-bs-theme=dark] table.dataTable tr.dt-hasChild td.dt-control:before {
:root[data-bs-theme=dark] table.dataTable tr.dt-hasChild td.dt-control:before,
:root[data-theme=dark] table.dataTable tr.dt-hasChild td.dt-control:before {
border-top-color: rgba(255, 255, 255, 0.5);
border-left-color: transparent;
}
div.dt-scroll {
width: 100%;
}
div.dt-scroll-body thead tr,
div.dt-scroll-body tfoot tr {
height: 0;
@@ -377,6 +383,31 @@ table.table.dataTable.table-hover > tbody > tr.selected:hover > * {
box-shadow: inset 0 0 0 9999px rgba(var(--dt-row-selected), 0.975);
}
div.dt-container div.dt-layout-start > *:not(:last-child) {
margin-right: 1em;
}
div.dt-container div.dt-layout-end > *:not(:first-child) {
margin-left: 1em;
}
div.dt-container div.dt-layout-full {
width: 100%;
}
div.dt-container div.dt-layout-full > *:only-child {
margin-left: auto;
margin-right: auto;
}
div.dt-container div.dt-layout-table > div {
display: block !important;
}
@media screen and (max-width: 767px) {
div.dt-container div.dt-layout-start > *:not(:last-child) {
margin-right: 0;
}
div.dt-container div.dt-layout-end > *:not(:first-child) {
margin-left: 0;
}
}
div.dt-container div.dt-length label {
font-weight: normal;
text-align: left;
@@ -400,9 +431,6 @@ div.dt-container div.dt-search input {
display: inline-block;
width: auto;
}
div.dt-container div.dt-info {
padding-top: 0.85em;
}
div.dt-container div.dt-paging {
margin: 0;
}

File diff suppressed because it is too large

View File

@@ -132,6 +132,21 @@
<span class="d-block" title="We have direct internet access, no outgoing proxy configured."><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Websocket enabled
{{#if page_data.enable_websocket}}
<span class="badge bg-success d-none" id="websocket-success" title="Websocket connection is working.">Ok</span>
<span class="badge bg-danger d-none" id="websocket-error" title="Websocket connection error, validate your reverse proxy configuration!">Error</span>
{{/if}}
</dt>
<dd class="col-sm-7">
{{#if page_data.enable_websocket}}
<span class="d-block" title="Websocket connections are enabled (ENABLE_WEBSOCKET is true)."><b>Yes</b></span>
{{/if}}
{{#unless page_data.enable_websocket}}
<span class="d-block" title="Websocket connections are disabled (ENABLE_WEBSOCKET is false)."><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">DNS (github.com)
<span class="badge bg-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge bg-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
@@ -167,6 +182,14 @@
<span id="domain-server" class="d-block"><b>Server:</b> <span id="domain-server-string">{{page_data.admin_url}}</span></span>
<span id="domain-browser" class="d-block"><b>Browser:</b> <span id="domain-browser-string"></span></span>
</dd>
<dt class="col-sm-5">HTTP Response validation
<span class="badge bg-success d-none" id="http-response-success" title="All headers and HTTP request responses seem to be ok.">Ok</span>
<span class="badge bg-danger d-none" id="http-response-warning" title="Some headers or HTTP request responses return invalid data!">Error</span>
</dt>
<dd class="col-sm-7">
<span id="http-response-errors" class="d-block"></span>
</dd>
</dl>
</div>
</div>

View File

@@ -19,7 +19,7 @@
<tr>
<td>
<svg width="48" height="48" class="float-start me-2 rounded" data-jdenticon-value="{{email}}">
<div class="float-start">
<div>
<strong>{{name}}</strong>
<span class="d-block">{{email}}</span>
<span class="d-block">

View File

@@ -2,7 +2,7 @@ Emergency access for {{{grantor_name}}}
<!---------------->
You have been invited to become an emergency contact for {{grantor_name}}. To accept this invite, click the following link:
Click here to join: {{url}}/#/accept-emergency/?id={{emer_id}}&name={{grantor_name}}&email={{email}}&token={{token}}
Click here to join: {{{url}}}
If you do not wish to become an emergency contact for {{grantor_name}}, you can safely ignore this email.
{{> email/email_footer_text }}

View File

@@ -9,7 +9,7 @@ Emergency access for {{{grantor_name}}}
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
<a href="{{url}}/#/accept-emergency/?id={{emer_id}}&name={{grantor_name}}&email={{email}}&token={{token}}"
<a href="{{{url}}}"
clicktracking=off target="_blank" style="color: #ffffff; text-decoration: none; text-align: center; cursor: pointer; display: inline-block; border-radius: 5px; background-color: #3c8dbc; border-color: #3c8dbc; border-style: solid; border-width: 10px 20px; margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
Become emergency contact
</a>
@@ -21,4 +21,4 @@ Emergency access for {{{grantor_name}}}
</td>
</tr>
</table>
{{> email/email_footer }}
{{> email/email_footer }}

View File

@@ -3,7 +3,7 @@ Join {{{org_name}}}
You have been invited to join the *{{org_name}}* organization.
Click here to join: {{url}}
Click here to join: {{{url}}}
If you do not wish to join this organization, you can safely ignore this email.

View File

@@ -9,7 +9,7 @@ Join {{{org_name}}}
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
<a href="{{url}}"
<a href="{{{url}}}"
clicktracking=off target="_blank" style="color: #ffffff; text-decoration: none; text-align: center; cursor: pointer; display: inline-block; border-radius: 5px; background-color: #3c8dbc; border-color: #3c8dbc; border-style: solid; border-width: 10px 20px; margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
Join Organization Now
</a>

View File

@@ -0,0 +1 @@
/* See the wiki for examples and details: https://github.com/dani-garcia/vaultwarden/wiki/Customize-Vaultwarden-CSS */

View File

@@ -0,0 +1,105 @@
/**** START Static Vaultwarden changes ****/
/* This combines all selectors extending it into one */
%vw-hide {
display: none !important;
}
/* This allows searching for the combined style in the browsers dev-tools (look into the head tag) */
.vw-hide,
head {
@extend %vw-hide;
}
/* Hide the Subscription Page tab */
bit-nav-item[route="settings/subscription"] {
@extend %vw-hide;
}
/* Hide any link pointing to Free Bitwarden Families */
a[href$="/settings/sponsored-families"] {
@extend %vw-hide;
}
/* Hide the `Enterprise Single Sign-On` button on the login page */
a[routerlink="/sso"] {
@extend %vw-hide;
}
/* Hide Two-Factor menu in Organization settings */
bit-nav-item[route="settings/two-factor"],
a[href$="/settings/two-factor"] {
@extend %vw-hide;
}
/* Hide Business Owned checkbox */
app-org-info > form:nth-child(1) > div:nth-child(3) {
@extend %vw-hide;
}
/* Hide the `This account is owned by a business` checkbox and label */
#ownedBusiness,
label[for^="ownedBusiness"] {
@extend %vw-hide;
}
/* Hide the radio button and label for the `Custom` org user type */
#userTypeCustom,
label[for^="userTypeCustom"] {
@extend %vw-hide;
}
/* Hide Business Name */
app-org-account form div bit-form-field.tw-block:nth-child(3) {
@extend %vw-hide;
}
/* Hide organization plans */
app-organization-plans > form > bit-section:nth-child(2) {
@extend %vw-hide;
}
/* Hide Device Verification form at the Two Step Login screen */
app-security > app-two-factor-setup > form {
@extend %vw-hide;
}
/**** END Static Vaultwarden Changes ****/
/**** START Dynamic Vaultwarden Changes ****/
{{#if signup_disabled}}
/* Hide the register link on the login screen */
app-frontend-layout > app-login > form > div > div > div > p {
@extend %vw-hide;
}
{{/if}}
/* Hide `Email` 2FA if mail is not enabled */
{{#unless mail_enabled}}
app-two-factor-setup ul.list-group.list-group-2fa li.list-group-item:nth-child(5) {
@extend %vw-hide;
}
{{/unless}}
/* Hide `YubiKey OTP security key` 2FA if it is not enabled */
{{#unless yubico_enabled}}
app-two-factor-setup ul.list-group.list-group-2fa li.list-group-item:nth-child(2) {
@extend %vw-hide;
}
{{/unless}}
/* Hide Emergency Access if not allowed */
{{#unless emergency_access_allowed}}
bit-nav-item[route="settings/emergency-access"] {
@extend %vw-hide;
}
{{/unless}}
/* Hide Sends if not allowed */
{{#unless sends_allowed}}
bit-nav-item[route="sends"] {
@extend %vw-hide;
}
{{/unless}}
/**** End Dynamic Vaultwarden Changes ****/
/**** Include a special user stylesheet for custom changes ****/
{{#if load_user_scss}}
{{> scss/user.vaultwarden.scss }}
{{/if}}

View File

@@ -51,9 +51,11 @@ impl Fairing for AppHeaders {
}
}
// NOTE: When modifying or adding security headers be sure to also update the diagnostic checks in `src/static/scripts/admin_diagnostics.js` in `checkSecurityHeaders`
res.set_raw_header("Permissions-Policy", "accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), keyboard-map=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=()");
res.set_raw_header("Referrer-Policy", "same-origin");
res.set_raw_header("X-Content-Type-Options", "nosniff");
res.set_raw_header("X-Robots-Tag", "noindex, nofollow");
// Obsolete in modern browsers, unsafe (XS-Leak), and largely replaced by CSP
res.set_raw_header("X-XSS-Protection", "0");
@@ -96,10 +98,11 @@ impl Fairing for AppHeaders {
https://app.addy.io/api/ \
https://api.fastmail.com/ \
https://api.forwardemail.net \
;\
{allowed_connect_src};\
",
icon_service_csp = CONFIG._icon_service_csp(),
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors()
allowed_iframe_ancestors = CONFIG.allowed_iframe_ancestors(),
allowed_connect_src = CONFIG.allowed_connect_src(),
);
res.set_raw_header("Content-Security-Policy", csp);
res.set_raw_header("X-Frame-Options", "SAMEORIGIN");
@@ -438,13 +441,19 @@ pub fn get_env_bool(key: &str) -> Option<bool> {
use chrono::{DateTime, Local, NaiveDateTime, TimeZone};
// Format used by Bitwarden API
const DATETIME_FORMAT: &str = "%Y-%m-%dT%H:%M:%S%.6fZ";
/// Formats a UTC-offset `NaiveDateTime` in the format used by Bitwarden API
/// responses with "date" fields (`CreationDate`, `RevisionDate`, etc.).
pub fn format_date(dt: &NaiveDateTime) -> String {
dt.format(DATETIME_FORMAT).to_string()
dt.and_utc().to_rfc3339_opts(chrono::SecondsFormat::Micros, true)
}
/// Validates and formats a RFC3339 timestamp
/// If parsing fails it will return the start of the Unix epoch
pub fn validate_and_format_date(dt: &str) -> String {
match DateTime::parse_from_rfc3339(dt) {
Ok(dt) => dt.to_rfc3339_opts(chrono::SecondsFormat::Micros, true),
_ => String::from("1970-01-01T00:00:00.000000Z"),
}
}
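A quick sketch (chrono only) of how the date helpers behave after this change: output is RFC 3339 with fixed microsecond precision, and unparsable input falls back to the Unix epoch.

```rust
use chrono::{DateTime, SecondsFormat};

fn validate_and_format_date(dt: &str) -> String {
    match DateTime::parse_from_rfc3339(dt) {
        Ok(dt) => dt.to_rfc3339_opts(SecondsFormat::Micros, true),
        _ => String::from("1970-01-01T00:00:00.000000Z"),
    }
}

fn main() {
    assert_eq!(
        validate_and_format_date("2024-12-15T22:13:29Z"),
        "2024-12-15T22:13:29.000000Z"
    );
    assert_eq!(
        validate_and_format_date("not-a-date"),
        "1970-01-01T00:00:00.000000Z"
    );
}
```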
/// Formats a `DateTime<Local>` using the specified format string.
@@ -486,7 +495,7 @@ pub fn format_datetime_http(dt: &DateTime<Local>) -> String {
}
pub fn parse_date(date: &str) -> NaiveDateTime {
NaiveDateTime::parse_from_str(date, DATETIME_FORMAT).unwrap()
DateTime::parse_from_rfc3339(date).unwrap().naive_utc()
}
//