Compare commits


34 Commits

Author SHA1 Message Date
Mathijs van Veluw
4438da39f9 Fix healthcheck when using .env file (#4299)
It seems Debian-based images see the `.env` file in the `pwd` path, but
sourcing it via `. .env` breaks, while providing the full path `/.env`
works. Changed the default to `/.env`.

Alpine does not have an issue with both ways.
2024-01-31 22:31:47 +01:00
Stefan Melmuk
0b2383ab56 fix push device registration (#4297)
don't try to register a push device when the device is new;
it will be registered when the push token is saved

fixes #4296
2024-01-31 22:31:22 +01:00
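A minimal sketch of the ordering this fix establishes (illustrative names, not Vaultwarden's actual API): a freshly created device has no push token yet, so registration is skipped at creation time and happens later, when the client saves its token.

    struct Device {
        push_token: Option<String>,
    }

    // Sketch: skip registration for brand-new devices; `save_push_token`
    // below is the hypothetical path that registers them later.
    fn register_push_device(device: &Device, push_enabled: bool) {
        if !push_enabled || device.push_token.is_none() {
            return; // new device: nothing to register yet
        }
        // ... contact the push relay here ...
    }

    fn save_push_token(device: &mut Device, token: String) {
        device.push_token = Some(token);
        register_push_device(device, true); // now the device can be registered
    }
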
gzfrozen
ad1d65bdf8 Update env template file (#4276)
* update env template to fit the config.rs

* Categorize env template settings

* Fix a wrong setting

* Fix wrong icon redirect code

* Fix ICON_DOWNLOAD_TIMEOUT default value

Co-authored-by: Daniel <daniel.barabasa@gmail.com>

* Move related settings together.
Merge Yubikey, Duo, Email 2FA sections into one.
Other minor fixes.

* Minor fix of some settings position

* Add some comments

* Minor fix.

---------

Co-authored-by: Daniel <daniel.barabasa@gmail.com>
2024-01-30 19:15:37 +01:00
Stefan Melmuk
3b283c289e register missing push devices at login (#3792)
save the push token of a new device even if push notifications are not
enabled, and provide a way to register the push device at login

unregister the device if a push token is already saved, unless the
new token has already been registered.

also the `unregister_push_device` function used the wrong argument
cf. 08d380900b/src/Core/Services/Implementations/RelayPushRegistrationService.cs (L43)
2024-01-30 19:14:25 +01:00
Stefan Melmuk
4b9384cb2b err on invalid feature flag (#4263)
* err on invalid feature flag

* print all invalid flags and improve error message
2024-01-28 23:36:27 +01:00
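A small sketch of the validation behaviour described above, under assumed names (not the actual Vaultwarden code): every unknown flag is collected first, so the error message can list all of them at once instead of failing on the first.

    use std::collections::HashSet;

    fn validate_flags(input: &str, known: &HashSet<&str>) -> Result<Vec<String>, String> {
        // Split the comma-separated list and partition into known/unknown.
        let (valid, invalid): (Vec<&str>, Vec<&str>) = input
            .split(',')
            .map(str::trim)
            .filter(|f| !f.is_empty())
            .partition(|f| known.contains(f));
        if invalid.is_empty() {
            Ok(valid.into_iter().map(String::from).collect())
        } else {
            Err(format!("Invalid feature flags: {}", invalid.join(", ")))
        }
    }
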
Mathijs van Veluw
0f39d96518 Fix attachment upload size check (#4282)
The min/max bounds were reversed by swapping the `add` and `sub` functions,
which caused files to always be out of bounds in the check.

Fixes #4281
2024-01-28 23:32:09 +01:00
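In boolean terms, the bug class looks like this (a hedged sketch, not the actual Vaultwarden code): with a small tolerance `delta`, the valid range around the expected size is [expected - delta, expected + delta]; swapping the add and sub makes min greater than max, so no size can ever pass.

    fn size_in_bounds(actual: i64, expected: i64, delta: i64) -> bool {
        let min = expected.saturating_sub(delta); // the bug had `add` here
        let max = expected.saturating_add(delta); // ...and `sub` here
        (min..=max).contains(&actual)             // empty range when min > max
    }
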
Daniel García
edf7484a70 Improve file limit handling (#4242)
* Improve file limit handling

* Oops

* Update PostgreSQL migration

* Review comments

---------

Co-authored-by: BlackDex <black.dex@gmail.com>
2024-01-27 02:43:26 +01:00
Jacques B
8b66e34415 Return 404 when user public_key is empty (#4271) 2024-01-26 20:34:36 +01:00
Mathijs van Veluw
1d00e34bbb Update crates, web-vault and GHA (#4275)
- Update GitHub Actions
- Updated crates
- Updated web-vault to v2024.1.2
2024-01-26 20:19:53 +01:00
Stefan Melmuk
1b801406d6 prevent side effects if groups are disabled (#4265) 2024-01-25 22:02:07 +01:00
Helmut K. C. Tessarek
5e46a43306 fix: use black text for update badge (better contrast) (#4245) 2024-01-25 21:58:05 +01:00
Mathijs van Veluw
5c77431c2d Fix bulk collection deletion (#4257)
The bulk collection delete request no longer includes the extra org_id in
the posted data, so we now only use the org_id from the path.

Fixes #4253
2024-01-25 21:57:35 +01:00
dependabot[bot]
2775c6ce8a Bump h2 from 0.3.23 to 0.3.24 (#4260)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.23 to 0.3.24.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.24/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.23...v0.3.24)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-25 21:56:33 +01:00
Mathijs van Veluw
890e668071 Update crates and fix icon issue (#4237)
- Fix icon download issue by removing the deflate feature
- Updated all the crates
- Updated Handlebars code

Fixes #4224
2024-01-12 20:44:37 +01:00
Stefan Melmuk
596c167312 improve emergency access when not enabled (#4227)
* improve emergency access when not enabled

* display note that emergency access is disabled
2024-01-10 19:02:36 +01:00
Daniel García
ae3a153bdb Update README.md 2024-01-01 19:44:52 +01:00
Stefan Melmuk
2c36993792 enforce 2FA policy on removal of second factor and login (#3803)
* enforce 2fa policy on removal of second factor

users should be revoked when their second factors are removed.

we want to revoke users so they don't have to be invited again and
organization admins and owners are aware that they no longer have
access.

we make an exception for non-confirmed users to speed up the invitation
process as they would have to be restored before they can accept their
invitation or be confirmed.

if email is enabled, invited users have to add a second factor before
they can accept the invitation to an organization with a 2FA policy;
if it is not enabled, that check is done when confirming the user.

* use &str instead of String in log_event()

* enforce the 2fa policy on login

if a user doesn't have a second factor, check whether they are in an
organization that has the 2FA policy enabled, and revoke their access if so
2024-01-01 19:41:40 +01:00
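A hedged sketch of the login-time check described above, using made-up types rather than Vaultwarden's: a user without a second factor who belongs to an org enforcing the 2FA policy is revoked from it and the login is rejected.

    struct Membership {
        revoked: bool,
    }

    // `enforcing` holds only the memberships of orgs with the 2FA policy enabled.
    fn enforce_2fa_policy_on_login(
        user_has_2fa: bool,
        enforcing: &mut [Membership],
    ) -> Result<(), String> {
        if user_has_2fa || enforcing.is_empty() {
            return Ok(());
        }
        for m in enforcing.iter_mut() {
            m.revoked = true; // admins can restore the user once 2FA is set up
        }
        Err("2FA is required by an organization policy".into())
    }
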
THONY
d672ad3f76 US or EU Data Region Selection (#3752)
* add selection of data region for push

* fix cargo check + rewrite config + add check url

* fix clippy error

* add comment in .env.template, adapt config.rs

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Revert "Update .env.template"

This reverts commit 5bed974ba7b9f481792d2228834585f053d47dc3.

* Revert "Update .env.template"

This reverts commit 0760eff95dfaf2a9cf97bb25f6cf7660bdf55173.

* fix /connect/token to push identity

* fix /connect/token to push identity

* Fixed formatting when solving merge conflicts

---------

Co-authored-by: William Desportes <williamdes@wdes.fr>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-01-01 16:01:57 +01:00
Matlink
a641b48884 Fix #3413: push to users accessing the collections using groups (#3757)
* Fix #3413: push to users accessing the collections using groups

* Notify groups only when enabled
2024-01-01 15:46:03 +01:00
Philipp Kolberg
98b2178c7d Allow customizing the featureStates (#4168)
* Allow customizing the featureStates

Use a comma-separated list of features to enable via the FEATURE_FLAGS env variable

* Move feature flag parsing to util

* Fix formatting

* Update supported feature flags

* Rename feature_flags to experimental_client_feature_flags

Additionally, use a caret (^) instead of an exclamation mark (!) to disable features

* Fix formatting issue.

* Add documentation to env template

* Remove functionality to disable feature flags

* Fix JSON key for feature states

* Convert error to warning when feature flag is unrecognized

* Simplify parsing of feature flags

* Fix default value of feature flags in env template

* Fix formatting
2024-01-01 15:44:02 +01:00
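A minimal sketch of the parsing this commit introduces, with assumed names (the flag list below is the one documented in the env template; a later commit in this range turns the warning into a hard error):

    use std::collections::HashMap;

    const KNOWN_FLAGS: &[&str] = &[
        "autofill-overlay",
        "autofill-v2",
        "browser-fileless-import",
        "fido2-vault-credentials",
    ];

    /// Parse EXPERIMENTAL_CLIENT_FEATURE_FLAGS into the map that the
    /// "featureStates" JSON object is built from.
    fn parse_feature_flags(env_value: &str) -> HashMap<String, bool> {
        let mut states = HashMap::new();
        for flag in env_value.split(',').map(str::trim).filter(|f| !f.is_empty()) {
            if KNOWN_FLAGS.contains(&flag) {
                states.insert(flag.to_string(), true);
            } else {
                eprintln!("Warning: unrecognized feature flag: {flag}");
            }
        }
        states
    }
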
Mathijs van Veluw
76a3f0f531 Fix Single Org Policy check (#4207)
There was an error in the single-org policy check that determines how many
users there are in an org: the `or` clause was at the wrong location in
the DSL.

This is now fixed.

Fixes #4205
2024-01-01 15:42:57 +01:00
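The bug class is easiest to see in plain boolean form (an illustration, not the actual Diesel query): where the `or` binds decides whether the org filter applies to both sides.

    // Intended: the org filter applies to both sides of the `or`.
    fn intended(in_org: bool, cond_a: bool, cond_b: bool) -> bool {
        in_org && (cond_a || cond_b)
    }

    // Buggy grouping: the `or` escapes the org filter and matches rows
    // belonging to other orgs as well.
    fn buggy(in_org: bool, cond_a: bool, cond_b: bool) -> bool {
        (in_org && cond_a) || cond_b
    }

In Diesel's DSL the grouping comes from where `.or(...)` is chained, so moving it is the whole fix.
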
Mathijs van Veluw
c5665e7b77 Update Rust and Crates (#4211)
- Updated Rust to v1.75.0
- Updated all the crates
- Fixed warning generated by latest version of Rust
2024-01-01 15:41:54 +01:00
Mathijs van Veluw
cbdcf8ef9f Update web-vault to v2023.12.0 (#4201) 2023-12-24 15:50:58 +01:00
Chris
3337594d60 Add additional build target which optimizes for size (#4096)
OpenWRT is a project which builds and distributes firmware for
embedded devices like routers, access points, and so on. These
devices are usually very limited in terms of storage. Therefore,
optimizing binaries for size at the cost of execution speed is
usually desired.

This PR adds an additional build target, namely "release-micro",
which sets several parameters that optimize in favor of binary size
(assembled into Cargo profile syntax after this entry).

The following parameters were chosen:
- opt-level "z": Optimize for size with disabled loop vectorization
- strip "symbols": Strip debuginfo and symbols from binary
- lto "fat": Enable link-time optimizations across all crates
- codegen-units 1: Disable parallelization of code generation to
  allow for additional optimizations
- panic "abort": Abort on Panic() instead of unwinding

All these build parameters significantly reduce the binary size
from >40MB to <15MB - the actual amount depends on the target
architecture.

We would like to upstream this new build target to keep our build
environment simple. Other projects which deploy vaultwarden on
size-constrained environments may benefit from this change too.

Signed-off-by: Christian Lachner <gladiac@gmail.com>
2023-12-18 21:46:53 +01:00
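Assembled from the parameters listed above into Cargo profile syntax (a sketch; see the PR for the authoritative Cargo.toml entry — note that `inherits` is required by Cargo for custom profiles):

    [profile.release-micro]
    inherits = "release"
    opt-level = "z"     # optimize for size, disables loop vectorization
    strip = "symbols"   # strip debuginfo and symbols from the binary
    lto = "fat"         # link-time optimization across all crates
    codegen-units = 1   # no parallel codegen, allows more optimizations
    panic = "abort"     # abort on panic instead of unwinding
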
Mathijs van Veluw
2daa8be1f1 Update crates (#4173)
Update all crates instead of only the zerocopy from dependabot.
Closes #4170
2023-12-18 21:45:54 +01:00
Mathijs van Veluw
eccb3ab947 Decrease JWT Refresh/Auth token (#4163)
Large JWTs could cause issues because the header or body size of the
HTTP request could get too large when you are a member of a lot of organizations.

This PR removes these specific keys, since they are not used on either the
client side or the server side.

Because Bitwarden does add these to its JWTs, I would suggest keeping
the code we had, but commented out, as a reference.

Removing it and searching for it when needed would be a waste of time.

Fixes #4156
2023-12-13 17:49:35 +01:00
Mathijs van Veluw
3246251f29 Fix the version string (#4153)
For some reason still unknown, the `.git` directory was not copied
into the container. I think buildkit (buildx) did this by default before and
stopped doing so in newer versions.

This PR fixes this by also touching `build.rs` besides `src/main.rs`.

This PR also updates Rust to v1.74.1 and some crates, and moves to the
latest version of Alpine, 3.19.

Fixes #4150
2023-12-09 23:04:33 +01:00
Mathijs van Veluw
8ab200224e Several small fixes for open issues (#4143)
* Fix BWDC when re-run with cleared cache

Using the BWDC with a cleared cache caused invited users to be converted
to accepted users.

The problem was a wrong check in the `restore` function.

Fixes #4114

* Remove useless variable

During some refactoring this seems to have been overlooked.
This variable gets filled but isn't used at all afterwards.

Fixes #4105

* Check some `.git` paths to force a rebuild

When a checked-out repo switches to a specific tag, and that tag does not
change anything in the files except the tag itself, it can happen that the
build process doesn't see any changes, even though the version string
needs to be different.

This commit ensures that if some specific paths within the .git directory
are changed, cargo will be triggered to rebuild (see the build.rs sketch
after this entry).

Fixes #4087

* Do not delete dir on file delete

Previously, during a `delete_file` call we also tried to delete the
parent directory, ignoring all errors, such as the directory not being
empty.

Since this function is called `delete_file` and does not mention
anything regarding directories, I have removed that code; it will
now only delete the file and leave the rest as-is.

If this is somehow still needed or wanted, which I do not think it is,
we should create a new function for it.

Fixes #4081

* Fix healthcheck when using an ENV file

If someone is using a `.env` file, or configured the `ENV_FILE` variable
to use one as the configuration source, it was missed by the healthcheck.

So `DOMAIN` and `ROCKET_TLS` were not seen, and not used, in these cases.

This commit fixes this by checking for this file and if it exists, then
it will load those variables first.

Fixes #4112

* Add missing route

While there was a function and a derive, this endpoint wasn't part of
the routes. Since Bitwarden does have this endpoint, I'll add the route
instead of deleting it.

Fixes #4076
Fixes #4144

* Update crates to update the openssl crate

Because of a bug in the openssl-sys crate we pinned the version to an
older version. This issue has been fixed and was released 2 days ago.

This commit updates the openssl crates including others.
This should also fix the issues with building Vaultwarden using newer
versions of LibreSSL.

Fixes #4051
2023-12-09 01:21:14 +01:00
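A build.rs sketch of the `.git` paths technique from the third item above (the exact paths Vaultwarden watches are an assumption here): printing `cargo:rerun-if-changed` for ref-related files makes Cargo rerun the build script, and thus regenerate the version string, when the checked-out ref changes even though no source file did.

    // build.rs
    fn main() {
        for path in [".git/HEAD", ".git/index", ".git/refs/tags"] {
            println!("cargo:rerun-if-changed={path}");
        }
    }
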
Mathijs van Veluw
34e00e1478 Update Rust, Crates, Profile and Actions (#4126)
- Updated Rust to v1.74.0
- Updated all crates (where possible)
- Changed release profile to use
  * fat lto
  * 1 codegen-unit
  This should optimize a bit for speed and a lot for size (~15 MB smaller)
- Updated GitHub Actions to use caching for the bake process
- Added a schedule to clean the cache every week to prevent stale Debian/Alpine base images
- During the release action, the Alpine/static binaries are added as artifacts.
  Later we could perhaps also add them to the releases automatically.
- Added CODEOWNERS to prevent unchecked changes to GitHub Actions workflows
2023-12-04 20:26:11 +01:00
Mathijs van Veluw
0fdda3bc2f Prevent generating an error during ws close (#4127)
When a WebSocket connection was closing, it was sending a message after
it was already closed, which generated an error in the logs. While this
error didn't harm any of the functionality of Vaultwarden, it isn't nice
to see, of course.

This PR fixes this by catching the close message and breaking the loop at
that point. This prevents the `_` catch-all from replying the close
message back to the client, which was causing the error message.

Fixes #4090
2023-12-04 20:20:13 +01:00
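A hedged sketch of the loop fix, with a generic message type instead of Vaultwarden's actual WebSocket stream handling:

    enum Message {
        Text(String),
        Close,
    }

    fn pump(incoming: impl Iterator<Item = Message>, mut send: impl FnMut(Message)) {
        for msg in incoming {
            match msg {
                // The fix: stop the loop here instead of letting the
                // `_` catch-all echo the close frame back to the client.
                Message::Close => break,
                other => send(other),
            }
        }
    }
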
Mathijs van Veluw
48836501bf Update crates (#4074)
* Remove another header for websocket connections

* Fix small bake issue

* Update crates

Updated crates and adjusted code where needed.
One major update is Rocket rc4; no need anymore (again) for crates.io patching.

The only item still pending is openssl/openssl-sys, for which we need to
wait and see if https://github.com/sfackler/rust-openssl/pull/2094 will be
merged. If it is, we can remove the pinned versions for the openssl crate.
2023-11-15 10:41:14 +01:00
Mathijs van Veluw
f863ffb89a Add Protected Actions Check (#4067)
Since the `Login with device` feature, some actions done via the
web-vault need to be verified via an OTP instead of by providing the MasterPassword.

This only happens if a user used `Login with device` on a device
which uses either biometrics login or a PIN. These actions prevent the
authorizing device from sending the MasterPasswordHash. When this happens, the
web-vault requests an OTP to be filled in, and this OTP is sent to the
user's email address, which is the same as the email address used to log in.

The only way to bypass this is by logging in with your password; in
those cases a password is requested instead of an OTP.

In case SMTP is not enabled, it will show an error message telling the
user to log in using their password.

Fixes #4042
2023-11-12 22:15:44 +01:00
Mathijs van Veluw
03c6ed2e07 Disable autofill-v2 (#4056)
Disabled autofill-v2 as it seems to cause strange issues as reported
here: https://github.com/dani-garcia/vaultwarden/discussions/4052

Also added the Vaultwarden server version back again but at a different
location.

Fixes #4052
2023-11-09 00:16:27 +01:00
Mathijs van Veluw
efc6eb0073 Fix missing alpine tag during buildx bake (#4043)
The bake recipe was missing the single `:alpine` tag for the Alpine
builds when we were releasing a `stable/latest` version of Vaultwarden.

This PR fixes this by checking for those conditions and adding the
`:alpine` tag too.

We will also keep `:latest-alpine`, which I find even nicer than just
`:alpine`.

Fixes #4035
2023-11-07 10:50:58 +01:00
62 changed files with 2474 additions and 1416 deletions

.env.template

@@ -10,39 +10,13 @@
 ## variable ENV_FILE can be set to the location of this file prior to starting
 ## Vaultwarden.
+####################
+### Data folders ###
+####################
 ## Main data folder
 # DATA_FOLDER=data
-## Database URL
-## When using SQLite, this is the path to the DB file, default to %DATA_FOLDER%/db.sqlite3
-# DATABASE_URL=data/db.sqlite3
-## When using MySQL, specify an appropriate connection URI.
-## Details: https://docs.diesel.rs/diesel/mysql/struct.MysqlConnection.html
-# DATABASE_URL=mysql://user:password@host[:port]/database_name
-## When using PostgreSQL, specify an appropriate connection URI (recommended)
-## or keyword/value connection string.
-## Details:
-## - https://docs.diesel.rs/diesel/pg/struct.PgConnection.html
-## - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
-# DATABASE_URL=postgresql://user:password@host[:port]/database_name
-## Database max connections
-## Define the size of the connection pool used for connecting to the database.
-# DATABASE_MAX_CONNS=10
-## Database timeout
-## Timeout when acquiring database connection
-# DATABASE_TIMEOUT=30
-## Database connection initialization
-## Allows SQL statements to be run whenever a new database connection is created.
-## This is mainly useful for connection-scoped pragmas.
-## If empty, a database-specific default is used:
-## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
-## - MySQL: ""
-## - PostgreSQL: ""
-# DATABASE_CONN_INIT=""
 ## Individual folders, these override %DATA_FOLDER%
 # RSA_KEY_FILENAME=data/rsa_key
 # ICON_CACHE_FOLDER=data/icon_cache
@@ -52,23 +26,64 @@
 ## Templates data folder, by default uses embedded templates
 ## Check source code to see the format
-# TEMPLATES_FOLDER=/path/to/templates
+# TEMPLATES_FOLDER=data/templates
 ## Automatically reload the templates for every request, slow, use only for development
 # RELOAD_TEMPLATES=false
-## Client IP Header, used to identify the IP of the client, defaults to "X-Real-IP"
-## Set to the string "none" (without quotes), to disable any headers and just use the remote IP
-# IP_HEADER=X-Real-IP
-## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever")
-# ICON_CACHE_TTL=2592000
-## Cache time-to-live for icons which weren't available, in seconds (0 is "forever")
-# ICON_CACHE_NEGTTL=259200
 ## Web vault settings
 # WEB_VAULT_FOLDER=web-vault/
 # WEB_VAULT_ENABLED=true
+#########################
+### Database settings ###
+#########################
+## Database URL
+## When using SQLite, this is the path to the DB file, default to %DATA_FOLDER%/db.sqlite3
+# DATABASE_URL=data/db.sqlite3
+## When using MySQL, specify an appropriate connection URI.
+## Details: https://docs.diesel.rs/2.1.x/diesel/mysql/struct.MysqlConnection.html
+# DATABASE_URL=mysql://user:password@host[:port]/database_name
+## When using PostgreSQL, specify an appropriate connection URI (recommended)
+## or keyword/value connection string.
+## Details:
+## - https://docs.diesel.rs/2.1.x/diesel/pg/struct.PgConnection.html
+## - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
+# DATABASE_URL=postgresql://user:password@host[:port]/database_name
+## Enable WAL for the DB
+## Set to false to avoid enabling WAL during startup.
+## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
+## this setting only prevents Vaultwarden from automatically enabling it on start.
+## Please read project wiki page about this setting first before changing the value as it can
+## cause performance degradation or might render the service unable to start.
+# ENABLE_DB_WAL=true
+## Database connection retries
+## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
+# DB_CONNECTION_RETRIES=15
+## Database timeout
+## Timeout when acquiring database connection
+# DATABASE_TIMEOUT=30
+## Database max connections
+## Define the size of the connection pool used for connecting to the database.
+# DATABASE_MAX_CONNS=10
+## Database connection initialization
+## Allows SQL statements to be run whenever a new database connection is created.
+## This is mainly useful for connection-scoped pragmas.
+## If empty, a database-specific default is used:
+## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
+## - MySQL: ""
+## - PostgreSQL: ""
+# DATABASE_CONN_INIT=""
+#################
+### WebSocket ###
+#################
 ## Enables websocket notifications
 # WEBSOCKET_ENABLED=false
@@ -76,41 +91,24 @@
 # WEBSOCKET_ADDRESS=0.0.0.0
 # WEBSOCKET_PORT=3012
+##########################
+### Push notifications ###
+##########################
 ## Enables push notifications (requires key and id from https://bitwarden.com/host)
-# PUSH_ENABLED=true
+## If you choose "European Union" Data Region, uncomment PUSH_RELAY_URI and PUSH_IDENTITY_URI then replace .com by .eu
+## Details about mobile client push notification:
+## - https://github.com/dani-garcia/vaultwarden/wiki/Enabling-Mobile-Client-push-notification
+# PUSH_ENABLED=false
 # PUSH_INSTALLATION_ID=CHANGEME
 # PUSH_INSTALLATION_KEY=CHANGEME
 ## Don't change this unless you know what you're doing.
 # PUSH_RELAY_URI=https://push.bitwarden.com
+# PUSH_IDENTITY_URI=https://identity.bitwarden.com
-## Controls whether users are allowed to create Bitwarden Sends.
-## This setting applies globally to all users.
-## To control this on a per-org basis instead, use the "Disable Send" org policy.
-# SENDS_ALLOWED=true
-## Controls whether users can enable emergency access to their accounts.
-## This setting applies globally to all users.
-# EMERGENCY_ACCESS_ALLOWED=true
-## Controls whether event logging is enabled for organizations
-## This setting applies to organizations.
-## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
-# ORG_EVENTS_ENABLED=false
-## Controls whether users can change their email.
-## This setting applies globally to all users
-# EMAIL_CHANGE_ALLOWED=true
-## Number of days to retain events stored in the database.
-## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
-# EVENTS_DAYS_RETAIN=
-## BETA FEATURE: Groups
-## Controls whether group support is enabled for organizations
-## This setting applies to organizations.
-## Disabled by default because this is a beta feature, it contains known issues!
-## KNOW WHAT YOU ARE DOING!
-# ORG_GROUPS_ENABLED=false
+#####################
+### Schedule jobs ###
+#####################
 ## Job scheduler settings
 ##
@@ -151,60 +149,69 @@
 ## Cron schedule of the job that cleans old events from the event table.
 ## Defaults to daily. Set blank to disable this job. Also without EVENTS_DAYS_RETAIN set, this job will not start.
 # EVENT_CLEANUP_SCHEDULE="0 10 0 * * *"
+## Number of days to retain events stored in the database.
+## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
+# EVENTS_DAYS_RETAIN=
-## Enable extended logging, which shows timestamps and targets in the logs
-# EXTENDED_LOGGING=true
-## Timestamp format used in extended logging.
-## Format specifiers: https://docs.rs/chrono/latest/chrono/format/strftime
-# LOG_TIMESTAMP_FORMAT="%Y-%m-%d %H:%M:%S.%3f"
-## Logging to file
-# LOG_FILE=/path/to/log
-## Logging to Syslog
-## This requires extended logging
-# USE_SYSLOG=false
-## Log level
-## Change the verbosity of the log output
-## Valid values are "trace", "debug", "info", "warn", "error" and "off"
-## Setting it to "trace" or "debug" would also show logs for mounted
-## routes and static file, websocket and alive requests
-# LOG_LEVEL=Info
-## Enable WAL for the DB
-## Set to false to avoid enabling WAL during startup.
-## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
-## this setting only prevents Vaultwarden from automatically enabling it on start.
-## Please read project wiki page about this setting first before changing the value as it can
-## cause performance degradation or might render the service unable to start.
-# ENABLE_DB_WAL=true
-## Database connection retries
-## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
-# DB_CONNECTION_RETRIES=15
-## Icon service
-## The predefined icon services are: internal, bitwarden, duckduckgo, google.
-## To specify a custom icon service, set a URL template with exactly one instance of `{}`,
-## which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
 ##
-## `internal` refers to Vaultwarden's built-in icon fetching implementation.
-## If an external service is set, an icon request to Vaultwarden will return an HTTP
-## redirect to the corresponding icon at the external service. An external service may
-## be useful if your Vaultwarden instance has no external network connectivity, or if
-## you are concerned that someone may probe your instance to try to detect whether icons
-## for certain sites have been cached.
-# ICON_SERVICE=internal
-## Icon redirect code
-## The HTTP status code to use for redirects to an external icon service.
-## The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
-## Temporary redirects are useful while testing different icon services, but once a service
-## has been decided on, consider using permanent redirects for cacheability. The legacy codes
-## are currently better supported by the Bitwarden clients.
-# ICON_REDIRECT_CODE=302
+## Cron schedule of the job that cleans old auth requests from the auth request.
+## Defaults to every minute. Set blank to disable this job.
+# AUTH_REQUEST_PURGE_SCHEDULE="30 * * * * *"
+########################
+### General settings ###
+########################
+## Domain settings
+## The domain must match the address from where you access the server
+## It's recommended to configure this value, otherwise certain functionality might not work,
+## like attachment downloads, email links and U2F.
+## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
+## To use HTTPS, the recommended way is to put Vaultwarden behind a reverse proxy
+## Details:
+## - https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS
+## - https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples
+## For development
+# DOMAIN=http://localhost
+## For public server
+# DOMAIN=https://vw.domain.tld
+## For public server (URL with port number)
+# DOMAIN=https://vw.domain.tld:8443
+## For public server (URL with path)
+# DOMAIN=https://domain.tld/vw
+## Controls whether users are allowed to create Bitwarden Sends.
+## This setting applies globally to all users.
+## To control this on a per-org basis instead, use the "Disable Send" org policy.
+# SENDS_ALLOWED=true
+## HIBP Api Key
+## HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
+# HIBP_API_KEY=
+## Per-organization attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per organization.
+## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
+# ORG_ATTACHMENT_LIMIT=
+## Per-user attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per user.
+## When this limit is reached, the user will not be allowed to upload further attachments.
+# USER_ATTACHMENT_LIMIT=
+## Per-user send storage limit (KB)
+## Max kilobytes of send storage allowed per user.
+## When this limit is reached, the user will not be allowed to upload further sends.
+# USER_SEND_LIMIT=
+## Number of days to wait before auto-deleting a trashed item.
+## If unset (the default), trashed items are not auto-deleted.
+## This setting applies globally, so make sure to inform all users of any changes to this setting.
+# TRASH_AUTO_DELETE_DAYS=
+## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
+## resulting in an email notification. An incomplete 2FA login is one where the correct
+## master password was provided but the required 2FA step was not completed, which
+## potentially indicates a master password compromise. Set to 0 to disable this check.
+## This setting applies globally to all users.
+# INCOMPLETE_2FA_TIME_LIMIT=3
 ## Disable icon downloading
 ## Set to true to disable icon downloading in the internal icon service.
@@ -213,38 +220,6 @@
 ## will be deleted eventually, but won't be downloaded again.
 # DISABLE_ICON_DOWNLOAD=false
-## Icon download timeout
-## Configure the timeout value when downloading the favicons.
-## The default is 10 seconds, but this could be to low on slower network connections
-# ICON_DOWNLOAD_TIMEOUT=10
-## Icon blacklist Regex
-## Any domains or IPs that match this regex won't be fetched by the icon service.
-## Useful to hide other servers in the local network. Check the WIKI for more details
-## NOTE: Always enclose this regex withing single quotes!
-# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
-## Any IP which is not defined as a global IP will be blacklisted.
-## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
-# ICON_BLACKLIST_NON_GLOBAL_IPS=true
-## Disable 2FA remember
-## Enabling this would force the users to use a second factor to login every time.
-## Note that the checkbox would still be present, but ignored.
-# DISABLE_2FA_REMEMBER=false
-## Maximum attempts before an email token is reset and a new email will need to be sent.
-# EMAIL_ATTEMPTS_LIMIT=3
-## Token expiration time
-## Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
-# EMAIL_EXPIRATION_TIME=600
-## Email token size
-## Number of digits in an email 2FA token (min: 6, max: 255).
-## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
-# EMAIL_TOKEN_SIZE=6
 ## Controls if new users can register
 # SIGNUPS_ALLOWED=true
@@ -266,6 +241,11 @@
 ## even if SIGNUPS_ALLOWED is set to false
 # SIGNUPS_DOMAINS_WHITELIST=example.com,example.net,example.org
+## Controls whether event logging is enabled for organizations
+## This setting applies to organizations.
+## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
+# ORG_EVENTS_ENABLED=false
 ## Controls which users can create new orgs.
 ## Blank or 'all' means all users can create orgs (this is the default):
 # ORG_CREATION_USERS=
@@ -274,6 +254,122 @@
 ## A comma-separated list means only those users can create orgs:
 # ORG_CREATION_USERS=admin1@example.com,admin2@example.com
+## Invitations org admins to invite users, even when signups are disabled
+# INVITATIONS_ALLOWED=true
+## Name shown in the invitation emails that don't come from a specific organization
+# INVITATION_ORG_NAME=Vaultwarden
+## The number of hours after which an organization invite token, emergency access invite token,
+## email verification token and deletion request token will expire (must be at least 1)
+# INVITATION_EXPIRATION_HOURS=120
+## Controls whether users can enable emergency access to their accounts.
+## This setting applies globally to all users.
+# EMERGENCY_ACCESS_ALLOWED=true
+## Controls whether users can change their email.
+## This setting applies globally to all users
+# EMAIL_CHANGE_ALLOWED=true
+## Number of server-side passwords hashing iterations for the password hash.
+## The default for new users. If changed, it will be updated during login for existing users.
+# PASSWORD_ITERATIONS=600000
+## Controls whether users can set password hints. This setting applies globally to all users.
+# PASSWORD_HINTS_ALLOWED=true
+## Controls whether a password hint should be shown directly in the web page if
+## SMTP service is not configured. Not recommended for publicly-accessible instances
+## as this provides unauthenticated access to potentially sensitive data.
+# SHOW_PASSWORD_HINT=false
+#########################
+### Advanced settings ###
+#########################
+## Client IP Header, used to identify the IP of the client, defaults to "X-Real-IP"
+## Set to the string "none" (without quotes), to disable any headers and just use the remote IP
+# IP_HEADER=X-Real-IP
+## Icon service
+## The predefined icon services are: internal, bitwarden, duckduckgo, google.
+## To specify a custom icon service, set a URL template with exactly one instance of `{}`,
+## which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
+##
+## `internal` refers to Vaultwarden's built-in icon fetching implementation.
+## If an external service is set, an icon request to Vaultwarden will return an HTTP
+## redirect to the corresponding icon at the external service. An external service may
+## be useful if your Vaultwarden instance has no external network connectivity, or if
+## you are concerned that someone may probe your instance to try to detect whether icons
+## for certain sites have been cached.
+# ICON_SERVICE=internal
+## Icon redirect code
+## The HTTP status code to use for redirects to an external icon service.
+## The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
+## Temporary redirects are useful while testing different icon services, but once a service
+## has been decided on, consider using permanent redirects for cacheability. The legacy codes
+## are currently better supported by the Bitwarden clients.
+# ICON_REDIRECT_CODE=302
+## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever")
+## Default: 2592000 (30 days)
+# ICON_CACHE_TTL=2592000
+## Cache time-to-live for icons which weren't available, in seconds (0 is "forever")
+## Default: 2592000 (3 days)
+# ICON_CACHE_NEGTTL=259200
+## Icon download timeout
+## Configure the timeout value when downloading the favicons.
+## The default is 10 seconds, but this could be to low on slower network connections
+# ICON_DOWNLOAD_TIMEOUT=10
+## Icon blacklist Regex
+## Any domains or IPs that match this regex won't be fetched by the icon service.
+## Useful to hide other servers in the local network. Check the WIKI for more details
+## NOTE: Always enclose this regex withing single quotes!
+# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
+## Any IP which is not defined as a global IP will be blacklisted.
+## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
+# ICON_BLACKLIST_NON_GLOBAL_IPS=true
+## Client Settings
+## Enable experimental feature flags for clients.
+## This is a comma-separated list of flags, e.g. "flag1,flag2,flag3".
+##
+## The following flags are available:
+## - "autofill-overlay": Add an overlay menu to form fields for quick access to credentials.
+## - "autofill-v2": Use the new autofill implementation.
+## - "browser-fileless-import": Directly import credentials from other providers without a file.
+## - "fido2-vault-credentials": Enable the use of FIDO2 security keys as second factor.
+# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
+## Require new device emails. When a user logs in an email is required to be sent.
+## If sending the email fails the login attempt will fail!!
+# REQUIRE_DEVICE_EMAIL=false
+## Enable extended logging, which shows timestamps and targets in the logs
+# EXTENDED_LOGGING=true
+## Timestamp format used in extended logging.
+## Format specifiers: https://docs.rs/chrono/latest/chrono/format/strftime
+# LOG_TIMESTAMP_FORMAT="%Y-%m-%d %H:%M:%S.%3f"
+## Logging to Syslog
+## This requires extended logging
+# USE_SYSLOG=false
+## Logging to file
+# LOG_FILE=/path/to/log
+## Log level
+## Change the verbosity of the log output
+## Valid values are "trace", "debug", "info", "warn", "error" and "off"
+## Setting it to "trace" or "debug" would also show logs for mounted
+## routes and static file, websocket and alive requests
+# LOG_LEVEL=info
## Token for the admin interface, preferably an Argon2 PCH string ## Token for the admin interface, preferably an Argon2 PCH string
## Vaultwarden has a built-in generator by calling `vaultwarden hash` ## Vaultwarden has a built-in generator by calling `vaultwarden hash`
## For details see: https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page#secure-the-admin_token ## For details see: https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page#secure-the-admin_token
@@ -289,54 +385,13 @@
 ## meant to be used with the use of a separate auth layer in front
 # DISABLE_ADMIN_TOKEN=false
-## Invitations org admins to invite users, even when signups are disabled
-# INVITATIONS_ALLOWED=true
-## Name shown in the invitation emails that don't come from a specific organization
-# INVITATION_ORG_NAME=Vaultwarden
-## The number of hours after which an organization invite token, emergency access invite token,
-## email verification token and deletion request token will expire (must be at least 1)
-# INVITATION_EXPIRATION_HOURS=120
-## Per-organization attachment storage limit (KB)
-## Max kilobytes of attachment storage allowed per organization.
-## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
-# ORG_ATTACHMENT_LIMIT=
-## Per-user attachment storage limit (KB)
-## Max kilobytes of attachment storage allowed per user.
-## When this limit is reached, the user will not be allowed to upload further attachments.
-# USER_ATTACHMENT_LIMIT=
-## Number of days to wait before auto-deleting a trashed item.
-## If unset (the default), trashed items are not auto-deleted.
-## This setting applies globally, so make sure to inform all users of any changes to this setting.
-# TRASH_AUTO_DELETE_DAYS=
-## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
-## resulting in an email notification. An incomplete 2FA login is one where the correct
-## master password was provided but the required 2FA step was not completed, which
-## potentially indicates a master password compromise. Set to 0 to disable this check.
-## This setting applies globally to all users.
-# INCOMPLETE_2FA_TIME_LIMIT=3
-## Number of server-side passwords hashing iterations for the password hash.
-## The default for new users. If changed, it will be updated during login for existing users.
-# PASSWORD_ITERATIONS=350000
-## Controls whether users can set password hints. This setting applies globally to all users.
-# PASSWORD_HINTS_ALLOWED=true
-## Controls whether a password hint should be shown directly in the web page if
-## SMTP service is not configured. Not recommended for publicly-accessible instances
-## as this provides unauthenticated access to potentially sensitive data.
-# SHOW_PASSWORD_HINT=false
-## Domain settings
-## The domain must match the address from where you access the server
-## It's recommended to configure this value, otherwise certain functionality might not work,
-## like attachment downloads, email links and U2F.
-## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
-# DOMAIN=https://vw.domain.tld:8443
+## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
+# ADMIN_RATELIMIT_SECONDS=300
+## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
+# ADMIN_RATELIMIT_MAX_BURST=3
+## Set the lifetime of admin sessions to this value (in minutes).
+# ADMIN_SESSION_LIFETIME=20
 ## Allowed iframe ancestors (Know the risks!)
 ## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors
## Allowed iframe ancestors (Know the risks!) ## Allowed iframe ancestors (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors ## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors
@@ -351,13 +406,16 @@
 ## Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2.
 # LOGIN_RATELIMIT_MAX_BURST=10
-## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
-# ADMIN_RATELIMIT_SECONDS=300
-## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
-# ADMIN_RATELIMIT_MAX_BURST=3
-## Set the lifetime of admin sessions to this value (in minutes).
-# ADMIN_SESSION_LIFETIME=20
+## BETA FEATURE: Groups
+## Controls whether group support is enabled for organizations
+## This setting applies to organizations.
+## Disabled by default because this is a beta feature, it contains known issues!
+## KNOW WHAT YOU ARE DOING!
+# ORG_GROUPS_ENABLED=false
+########################
+### MFA/2FA settings ###
+########################
 ## Yubico (Yubikey) Settings
 ## Set your Client ID and Secret Key for Yubikey OTP
@@ -378,6 +436,25 @@
 ## After that, you should be able to follow the rest of the guide linked above,
 ## ignoring the fields that ask for the values that you already configured beforehand.
+## Email 2FA settings
+## Email token size
+## Number of digits in an email 2FA token (min: 6, max: 255).
+## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
+# EMAIL_TOKEN_SIZE=6
+##
+## Token expiration time
+## Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
+# EMAIL_EXPIRATION_TIME=600
+##
+## Maximum attempts before an email token is reset and a new email will need to be sent.
+# EMAIL_ATTEMPTS_LIMIT=3
+## Other MFA/2FA settings
+## Disable 2FA remember
+## Enabling this would force the users to use a second factor to login every time.
+## Note that the checkbox would still be present, but ignored.
+# DISABLE_2FA_REMEMBER=false
+##
 ## Authenticator Settings
 ## Disable authenticator time drifted codes to be valid.
 ## TOTP codes of the previous and next 30 seconds will be invalid
@@ -390,12 +467,9 @@
 ## In any case, if a code has been used it can not be used again, also codes which predates it will be invalid.
 # AUTHENTICATOR_DISABLE_TIME_DRIFT=false
-## Rocket specific settings
-## See https://rocket.rs/v0.4/guide/configuration/ for more details.
-# ROCKET_ADDRESS=0.0.0.0
-# ROCKET_PORT=80 # Defaults to 80 in the Docker images, or 8000 otherwise.
-# ROCKET_WORKERS=10
-# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
+###########################
+### SMTP Email settings ###
+###########################
 ## Mail specific settings, set SMTP_FROM and either SMTP_HOST or USE_SENDMAIL to enable the mail service.
 ## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
@@ -417,7 +491,7 @@
 ## Defaults for SSL is "Plain" and "Login" and nothing for Non-SSL connections.
 ## Possible values: ["Plain", "Login", "Xoauth2"].
 ## Multiple options need to be separated by a comma ','.
-# SMTP_AUTH_MECHANISM="Plain"
+# SMTP_AUTH_MECHANISM=
 ## Server name sent during the SMTP HELO
 ## By default this value should be is on the machine's hostname,
@@ -425,30 +499,33 @@
 # HELO_NAME=
 ## Embed images as email attachments
-# SMTP_EMBED_IMAGES=false
+# SMTP_EMBED_IMAGES=true
 ## SMTP debugging
 ## When set to true this will output very detailed SMTP messages.
 ## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
 # SMTP_DEBUG=false
-## Accept Invalid Hostnames
-## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
-## Only use this as a last resort if you are not able to use a valid certificate.
-# SMTP_ACCEPT_INVALID_HOSTNAMES=false
 ## Accept Invalid Certificates
 ## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
 ## Only use this as a last resort if you are not able to use a valid certificate.
 ## If the Certificate is valid but the hostname doesn't match, please use SMTP_ACCEPT_INVALID_HOSTNAMES instead.
 # SMTP_ACCEPT_INVALID_CERTS=false
-## Require new device emails. When a user logs in an email is required to be sent.
-## If sending the email fails the login attempt will fail!!
-# REQUIRE_DEVICE_EMAIL=false
+## Accept Invalid Hostnames
+## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
+## Only use this as a last resort if you are not able to use a valid certificate.
+# SMTP_ACCEPT_INVALID_HOSTNAMES=false
+##########################
+### Rocket settings ###
+##########################
+## Rocket specific settings
+## See https://rocket.rs/v0.5/guide/configuration/ for more details.
+# ROCKET_ADDRESS=0.0.0.0
+# ROCKET_PORT=80 # Defaults to 80 in the Docker images, or 8000 otherwise.
+# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
-## HIBP Api Key
-## HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
-# HIBP_API_KEY=
 # vim: syntax=ini

.github/CODEOWNERS (new file)

@@ -0,0 +1,3 @@
/.github @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex

.github/workflows/build.yml

@@ -46,7 +46,7 @@ jobs:
 steps:
 # Checkout the repo
 - name: "Checkout"
-  uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
+  uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
 # End Checkout the repo
@@ -74,7 +74,7 @@ jobs:
 # Only install the clippy and rustfmt components on the default rust-toolchain
 - name: "Install rust-toolchain version"
-  uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
+  uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
 if: ${{ matrix.channel == 'rust-toolchain' }}
 with:
 toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -84,7 +84,7 @@ jobs:
 # Install the any other channel to be used for which we do not execute clippy and rustfmt
 - name: "Install MSRV version"
-  uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
+  uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
 if: ${{ matrix.channel != 'rust-toolchain' }}
 with:
 toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -106,7 +106,7 @@ jobs:
 # End Show environment
 # Enable Rust Caching
-- uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43 # v2.7.0
+- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84 # v2.7.3
 with:
 # Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
 # Like changing the build host from Ubuntu 20.04 to 22.04 for example.

.github/workflows/hadolint.yml

@@ -13,7 +13,7 @@ jobs:
 steps:
 # Checkout the repo
 - name: Checkout
-  uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
+  uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
 # End Checkout the repo
 # Download hadolint - https://github.com/hadolint/hadolint/releases

.github/workflows/release.yml

@@ -14,7 +14,6 @@ on:
 branches: # Only on paths above
 - main
-- release-build-revision
 tags: # Always, regardless of paths above
 - '*'
@@ -31,7 +30,7 @@ jobs:
 steps:
 - name: Skip Duplicates Actions
 id: skip_check
-  uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
+  uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
 with:
 cancel_others: 'true'
 # Only run this when not creating a tag
@@ -42,12 +41,12 @@ jobs:
 timeout-minutes: 120
 needs: skip_check
 if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
-# TODO: Start a local docker registry to be used to extract the final Alpine static build images
-# services:
-#   registry:
-#     image: registry:2
-#     ports:
-#       - 5000:5000
+# Start a local docker registry to extract the final Alpine static build binaries
+services:
+  registry:
+    image: registry:2
+    ports:
+      - 5000:5000
 env:
 SOURCE_COMMIT: ${{ github.sha }}
 SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
@@ -69,7 +68,7 @@ jobs:
 steps:
 # Checkout the repo
 - name: Checkout
-  uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
+  uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
 with:
 fetch-depth: 0
@@ -140,6 +139,12 @@ jobs:
 run: |
 echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
+- name: Add registry for ghcr.io
+  if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
+  shell: bash
+  run: |
+    echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
 # Login to Quay.io
 - name: Login to Quay.io
 uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
@@ -155,8 +160,28 @@ jobs:
 run: |
 echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.QUAY_REPO }}" | tee -a "${GITHUB_ENV}"
+- name: Configure build cache from/to
+  shell: bash
+  run: |
+    #
+    # Check if there is a GitHub Container Registry Login and use it for caching
+    if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
+      echo "BAKE_CACHE_FROM=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }}" | tee -a "${GITHUB_ENV}"
+      echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},mode=max" | tee -a "${GITHUB_ENV}"
+    else
+      echo "BAKE_CACHE_FROM="
+      echo "BAKE_CACHE_TO="
+    fi
+    #
+- name: Add localhost registry
+  if: ${{ matrix.base_image == 'alpine' }}
+  shell: bash
+  run: |
+    echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}localhost:5000/vaultwarden/server" | tee -a "${GITHUB_ENV}"
 - name: Bake ${{ matrix.base_image }} containers
-  uses: docker/bake-action@511fde2517761e303af548ec9e0ea74a8a100112 # v4.0.0
+  uses: docker/bake-action@849707117b03d39aba7924c50a10376a69e88d7d # v4.1.0
 env:
 BASE_TAGS: "${{ env.BASE_TAGS }}"
 SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -168,3 +193,76 @@ jobs:
 push: true
 files: docker/docker-bake.hcl
 targets: "${{ matrix.base_image }}-multi"
+set: |
+  *.cache-from=${{ env.BAKE_CACHE_FROM }}
+  *.cache-to=${{ env.BAKE_CACHE_TO }}
+# Extract the Alpine binaries from the containers
+- name: Extract binaries
+  if: ${{ matrix.base_image == 'alpine' }}
+  shell: bash
+  run: |
+    # Check which main tag we are going to build determined by github.ref_type
+    if [[ "${{ github.ref_type }}" == "tag" ]]; then
+      EXTRACT_TAG="latest"
+    elif [[ "${{ github.ref_type }}" == "branch" ]]; then
+      EXTRACT_TAG="testing"
+    fi
+    # After each extraction the image is removed.
+    # This is needed because using different platforms doesn't trigger a new pull/download
+    # Extract amd64 binary
+    docker create --name amd64 --platform=linux/amd64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    docker cp amd64:/vaultwarden vaultwarden-amd64
+    docker rm --force amd64
+    docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    # Extract arm64 binary
+    docker create --name arm64 --platform=linux/arm64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    docker cp arm64:/vaultwarden vaultwarden-arm64
+    docker rm --force arm64
+    docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    # Extract armv7 binary
+    docker create --name armv7 --platform=linux/arm/v7 "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    docker cp armv7:/vaultwarden vaultwarden-armv7
+    docker rm --force armv7
+    docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    # Extract armv6 binary
+    docker create --name armv6 --platform=linux/arm/v6 "vaultwarden/server:${EXTRACT_TAG}-alpine"
+    docker cp armv6:/vaultwarden vaultwarden-armv6
+    docker rm --force armv6
+    docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
+# Upload artifacts to Github Actions
+- name: "Upload amd64 artifact"
+  uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
+  if: ${{ matrix.base_image == 'alpine' }}
+  with:
+    name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64
+    path: vaultwarden-amd64
+- name: "Upload arm64 artifact"
+  uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
+  if: ${{ matrix.base_image == 'alpine' }}
+  with:
+    name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64
+    path: vaultwarden-arm64
+- name: "Upload armv7 artifact"
+  uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
+  if: ${{ matrix.base_image == 'alpine' }}
+  with:
+    name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7
+    path: vaultwarden-armv7
+- name: "Upload armv6 artifact"
+  uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
+  if: ${{ matrix.base_image == 'alpine' }}
+  with:
+    name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6
+    path: vaultwarden-armv6
+# End Upload artifacts to Github Actions


@@ -0,0 +1,25 @@
on:
workflow_dispatch:
inputs:
manual_trigger:
description: "Manual trigger buildcache cleanup"
required: false
default: ""
schedule:
- cron: '0 1 * * FRI'
name: Cleanup
jobs:
releasecache-cleanup:
name: Releasecache Cleanup
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Delete vaultwarden-buildcache containers
uses: actions/delete-package-versions@0d39a63126868f5eefaa47169615edd3c0f61e20 # v4.1.1
with:
package-name: 'vaultwarden-buildcache'
package-type: 'container'
min-versions-to-keep: 0
delete-only-untagged-versions: 'false'


@@ -4,7 +4,6 @@ on:
push:
branches:
- main
-- release-build-revision
tags:
- '*'
pull_request:
@@ -29,7 +28,7 @@ jobs:
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
- name: Run Trivy vulnerability scanner
-uses: aquasecurity/trivy-action@f78e9ecf42a1271402d4f484518b9313235990e1 # v0.13.1
uses: aquasecurity/trivy-action@d43c1f16c00cfd3978dde6c07f4bbcf9eb6993ca # v0.16.1
with:
scan-type: repo
ignore-unfixed: true
@@ -38,6 +37,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
-uses: github/codeql-action/upload-sarif@bad341350a2f5616f9e048e51360cedc49181ce8 # v2.22.4
uses: github/codeql-action/upload-sarif@b7bf0a3ed3ecfa44160715d7c442788f65f0f923 # v3.23.2
with:
sarif_file: 'trivy-results.sarif'

Cargo.lock (generated): file diff suppressed because it is too large.

@@ -3,7 +3,7 @@ name = "vaultwarden"
version = "1.0.0" version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"] authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021" edition = "2021"
rust-version = "1.71.1" rust-version = "1.73.0"
resolver = "2" resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden" repository = "https://github.com/dani-garcia/vaultwarden"
@@ -48,63 +48,63 @@ tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
-once_cell = "1.18.0"
once_cell = "1.19.0"
# Numerical libraries
num-traits = "0.2.17"
num-derive = "0.4.1"
bigdecimal = "0.4.2"
# Web framework
-rocket = { version = "0.5.0-rc.3", features = ["tls", "json"], default-features = false }
rocket = { version = "0.5.0", features = ["tls", "json"], default-features = false }
-# rocket_ws = { version ="0.1.0-rc.3" }
-rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = "ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa" } # v0.5 branch
rocket_ws = { version ="0.1.0" }
# WebSockets libraries
-tokio-tungstenite = "0.19.0"
tokio-tungstenite = "0.20.1"
rmpv = "1.0.1" # MessagePack library
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.5.3"
# Async futures
-futures = "0.3.28"
futures = "0.3.30"
-tokio = { version = "1.33.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
tokio = { version = "1.35.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
# A generic serialization/deserialization framework
-serde = { version = "1.0.189", features = ["derive"] }
serde = { version = "1.0.195", features = ["derive"] }
-serde_json = "1.0.107"
serde_json = "1.0.111"
# A safe, extensible ORM and Query builder
-diesel = { version = "2.1.3", features = ["chrono", "r2d2"] }
diesel = { version = "2.1.4", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.1.0"
diesel_logger = { version = "0.3.0", optional = true }
# Bundled/Static SQLite
-libsqlite3-sys = { version = "0.26.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.27.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
-ring = "0.17.5"
ring = "0.17.7"
# UUID generation
-uuid = { version = "1.5.0", features = ["v4"] }
uuid = { version = "1.7.0", features = ["v4"] }
# Date and time libraries
-chrono = { version = "0.4.31", features = ["clock", "serde"], default-features = false }
chrono = { version = "0.4.33", features = ["clock", "serde"], default-features = false }
-chrono-tz = "0.8.3"
chrono-tz = "0.8.5"
-time = "0.3.30"
time = "0.3.31"
# Job scheduler
job_scheduler_ng = "2.0.4"
# Data encoding library Hex/Base32/Base64
-data-encoding = "2.4.0"
data-encoding = "2.5.0"
# JWT library
-jsonwebtoken = "9.0.0"
jsonwebtoken = "9.2.0"
# TOTP library
-totp-lite = "2.0.0"
totp-lite = "2.0.1"
# Yubico Library
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
@@ -113,37 +113,34 @@ yubico = { version = "0.11.0", features = ["online-tokio"], default-features = f
webauthn-rs = "0.3.2" webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn and favicons # Handling of URL's for WebAuthn and favicons
url = "2.4.1" url = "2.5.0"
# Email libraries # Email libraries
lettre = { version = "0.11.0", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false } lettre = { version = "0.11.3", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.0" # URL encoding library used for URL's in the emails percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.4" email_address = "0.2.4"
# HTML Template library # HTML Template library
handlebars = { version = "4.4.0", features = ["dir_source"] } handlebars = { version = "5.1.1", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API) # HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.22", features = ["stream", "json", "deflate", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] } reqwest = { version = "0.11.23", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
# Favicon extraction libraries # Favicon extraction libraries
html5gum = "0.5.7" html5gum = "0.5.7"
regex = { version = "1.10.2", features = ["std", "perf", "unicode-perl"], default-features = false } regex = { version = "1.10.3", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.0" data-url = "0.3.1"
bytes = "1.5.0" bytes = "1.5.0"
# Cache function results (Used for version check and favicon fetching) # Cache function results (Used for version check and favicon fetching)
cached = { version = "0.46.0", features = ["async"] } cached = { version = "0.48.1", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction # Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.2" cookie = "0.16.2"
cookie_store = "0.19.1" cookie_store = "0.19.1"
# Used by U2F, JWT and PostgreSQL # Used by U2F, JWT and PostgreSQL
openssl = "0.10.57" openssl = "0.10.63"
# Set openssl-sys fixed to v0.9.92 to prevent building issues with musl, arm and 32bit pointer width
# It will force add a dynamically linked library which prevents the build from being static
openssl-sys = "=0.9.92"
# CLI argument parsing # CLI argument parsing
pico-args = "0.5.0" pico-args = "0.5.0"
@@ -153,30 +150,27 @@ paste = "1.0.14"
governor = "0.6.0" governor = "0.6.0"
# Check client versions for specific features. # Check client versions for specific features.
semver = "1.0.20" semver = "1.0.21"
# Allow overriding the default memory allocator # Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow # Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.39", features = ["secure"], default-features = false, optional = true } mimalloc = { version = "0.1.39", features = ["secure"], default-features = false, optional = true }
which = "5.0.0" which = "6.0.0"
# Argon2 library with support for the PHC format # Argon2 library with support for the PHC format
argon2 = "0.5.2" argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN # Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.2.0" rpassword = "7.3.1"
[patch.crates-io]
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# Strip debuginfo from the release builds # Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations # The symbols are the provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations
[profile.release] [profile.release]
strip = "debuginfo" strip = "debuginfo"
lto = "thin" lto = "fat"
codegen-units = 1
# A little bit of a speedup # A little bit of a speedup
@@ -187,3 +181,12 @@ split-debuginfo = "unpacked"
# This is a huge speed improvement during testing
[profile.dev.package.argon2]
opt-level = 3
# Optimize for size
[profile.release-micro]
inherits = "release"
opt-level = "z"
strip = "symbols"
lto = "fat"
codegen-units = 1
panic = "abort"


@@ -92,4 +92,11 @@ Thanks for your contribution to the project!
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/IQ333777" style="width: 75px">
<sub><b>IQ333777</b></sub>
</a>
</td>
</tr>
</table>


@@ -17,6 +17,13 @@ fn main() {
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite" "You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
); );
// Rerun when these paths are changed.
// Someone could have checked-out a tag or specific commit, but no other files changed.
println!("cargo:rerun-if-changed=.git");
println!("cargo:rerun-if-changed=.git/HEAD");
println!("cargo:rerun-if-changed=.git/index");
println!("cargo:rerun-if-changed=.git/refs/tags");
#[cfg(all(not(debug_assertions), feature = "query_logger"))]
compile_error!("Query Logging is only allowed during development, it is not intended for production usage!");


@@ -1,12 +1,12 @@
---
-vault_version: "v2023.10.0"
vault_version: "v2024.1.2"
-vault_image_digest: "sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935"
vault_image_digest: "sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b"
# Cross Compile Docker Helper Scripts v1.3.0
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
xx_image_digest: "sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc"
-rust_version: 1.73.0 # Rust version to be used
rust_version: 1.75.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
-alpine_version: 3.18 # Alpine version to be used
alpine_version: 3.19 # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch


@@ -18,23 +18,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
-# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
-# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
-# [docker.io/vaultwarden/web-vault:v2023.10.0]
# [docker.io/vaultwarden/web-vault:v2024.1.2]
#
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.73.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.75.0 as build_amd64
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.73.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.75.0 as build_arm64
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.73.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.75.0 as build_armv7
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.73.0 as build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.75.0 as build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@@ -100,7 +100,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
-touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
@@ -126,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
-FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.18
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.19
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \


@@ -18,15 +18,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
-# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
-# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
-# [docker.io/vaultwarden/web-vault:v2023.10.0]
# [docker.io/vaultwarden/web-vault:v2024.1.2]
#
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -35,7 +35,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:c9609ace652bbe51dd4ce
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
-FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.73.0-slim-bookworm as build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.75.0-slim-bookworm as build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT
@@ -73,7 +73,8 @@ RUN xx-apt-get install -y \
libmariadb3 \
libpq-dev \
libpq5 \
-libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
@@ -130,7 +131,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
-touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \


@@ -91,7 +91,8 @@ RUN xx-apt-get install -y \
libmariadb3 \
libpq-dev \
libpq5 \
-libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
@@ -161,7 +162,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
-touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \


@@ -88,7 +88,7 @@ target "debian" {
inherits = ["_default_attributes"] inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.debian" dockerfile = "docker/Dockerfile.debian"
tags = generate_tags("", platform_tag()) tags = generate_tags("", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))] output = ["type=docker"]
} }
// Multi Platform target, will build one tagged manifest with all supported architectures // Multi Platform target, will build one tagged manifest with all supported architectures
@@ -138,7 +138,7 @@ target "alpine" {
inherits = ["_default_attributes"] inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.alpine" dockerfile = "docker/Dockerfile.alpine"
tags = generate_tags("-alpine", platform_tag()) tags = generate_tags("-alpine", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))] output = ["type=docker"]
} }
// Multi Platform target, will build one tagged manifest with all supported architectures // Multi Platform target, will build one tagged manifest with all supported architectures
@@ -216,7 +216,13 @@ function "generate_tags" {
result = flatten([
for registry in get_container_registries() :
[for base_tag in get_base_tags() :
-concat(["${registry}:${base_tag}${suffix}${platform}"])]
concat(
# If the base_tag contains latest, and the suffix contains `-alpine` add a `:alpine` tag too
base_tag == "latest" ? suffix == "-alpine" ? ["${registry}:alpine${platform}"] : [] : [],
# The default tagging strategy
["${registry}:${base_tag}${suffix}${platform}"]
)
]
])
}
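For readers less used to HCL's nested ternaries, here is a minimal Rust sketch of the same tagging rule; the function name and inputs are illustrative only, not part of the repo:

// Mirrors the generate_tags logic above: a "latest" base tag combined with
// the "-alpine" suffix additionally yields a plain ":alpine" tag.
fn generate_tags(registries: &[&str], base_tags: &[&str], suffix: &str, platform: &str) -> Vec<String> {
    let mut tags = Vec::new();
    for registry in registries {
        for base_tag in base_tags {
            if *base_tag == "latest" && suffix == "-alpine" {
                tags.push(format!("{registry}:alpine{platform}"));
            }
            tags.push(format!("{registry}:{base_tag}{suffix}{platform}"));
        }
    }
    tags
}

fn main() {
    // Prints ["vaultwarden/server:alpine", "vaultwarden/server:latest-alpine"]
    println!("{:?}", generate_tags(&["vaultwarden/server"], &["latest"], "-alpine", ""));
}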


@@ -1,12 +1,20 @@
-#!/bin/sh
#!/usr/bin/env sh
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
-: "${DATA_FOLDER:="data"}"
: "${DATA_FOLDER:="/data"}"
: "${ROCKET_PORT:="80"}"
: "${ENV_FILE:="/.env"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
# Check if the $ENV_FILE file exist and is readable
# If that is the case, load it into the environment before running any check
if [ -r "${ENV_FILE}" ]; then
# shellcheck disable=SC1090
. "${ENV_FILE}"
fi
# Given a config key, return the corresponding config value from the
# config file. If the key doesn't exist, return an empty string.
get_config_val() {


@@ -0,0 +1 @@
ALTER TABLE attachments MODIFY file_size BIGINT NOT NULL;


@@ -0,0 +1,3 @@
ALTER TABLE attachments
ALTER COLUMN file_size TYPE BIGINT,
ALTER COLUMN file_size SET NOT NULL;


@@ -0,0 +1 @@
-- Integer size in SQLite is already i64, so we don't need to do anything
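These three migrations widen attachments.file_size to 64 bits; on the Rust side the corresponding Diesel column becomes BigInt and the model field i64. A minimal sketch of the schema change, assuming the column layout vaultwarden uses (the real schema.rs files are generated per database backend):

diesel::table! {
    attachments (id) {
        id -> Text,
        cipher_uuid -> Text,
        file_name -> Text,
        file_size -> BigInt, // was Integer (i32) before this change
        akey -> Nullable<Text>,
    }
}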


@@ -1,4 +1,4 @@
[toolchain]
-channel = "1.73.0"
channel = "1.75.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"


@@ -13,7 +13,10 @@ use rocket::{
};
use crate::{
-api::{core::log_event, unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify, NumberOrString},
api::{
core::{log_event, two_factor},
unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify,
},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
@@ -21,6 +24,7 @@ use crate::{
mail,
util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
NumberOrString,
},
CONFIG, VERSION,
};
@@ -184,12 +188,11 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
let claims = generate_admin_claims();
let jwt = encode_jwt(&claims);
-let cookie = Cookie::build(COOKIE_NAME, jwt)
let cookie = Cookie::build((COOKIE_NAME, jwt))
.path(admin_path())
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
-.http_only(true)
-.finish();
.http_only(true);
cookies.add(cookie);
if let Some(redirect) = redirect {
@@ -313,7 +316,7 @@ async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
#[get("/logout")] #[get("/logout")]
fn logout(cookies: &CookieJar<'_>) -> Redirect { fn logout(cookies: &CookieJar<'_>) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish()); cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
Redirect::to(admin_path()) Redirect::to(admin_path())
} }
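These cookie changes track Rocket 0.5's released API: Cookie::build now takes a single (name, value) tuple (or just a name), returns a builder that converts directly into a Cookie, and .finish() is gone. A self-contained sketch under those assumptions; the cookie name and path below are illustrative, not the values vaultwarden uses:

use rocket::http::{Cookie, CookieJar, SameSite};

fn set_session(cookies: &CookieJar<'_>, jwt: String) {
    // Rocket 0.5: build() takes a (name, value) tuple; the builder is
    // accepted directly by add(), no .finish() needed.
    let cookie = Cookie::build(("VW_ADMIN", jwt))
        .path("/admin")
        .same_site(SameSite::Strict)
        .http_only(true);
    cookies.add(cookie);
}

fn clear_session(cookies: &CookieJar<'_>) {
    // remove() only needs the name (and a matching path) to expire the cookie.
    cookies.remove(Cookie::build("VW_ADMIN").path("/admin"));
}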
@@ -343,7 +346,7 @@ async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
-usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await as i32));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&mut conn).await {
@@ -391,7 +394,7 @@ async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyRe
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
&user_org.org_uuid,
-String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@@ -410,7 +413,7 @@ async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notif
if CONFIG.push_enabled() {
for device in Device::find_push_devices_by_user(&user.uuid, &mut conn).await {
-match unregister_push_device(device.uuid).await {
match unregister_push_device(device.push_uuid).await {
Ok(r) => r,
Err(e) => error!("Unable to unregister devices from Bitwarden server: {}", e),
};
@@ -446,9 +449,10 @@ async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyR
}
#[post("/users/<uuid>/remove-2fa")]
-async fn remove_2fa(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
async fn remove_2fa(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
two_factor::enforce_2fa_policy(&user, ACTING_ADMIN_USER, 14, &token.ip.ip, &mut conn).await?;
user.totp_recover = None;
user.save(&mut conn).await
}
@@ -518,7 +522,7 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
&data.org_uuid,
-String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@@ -546,7 +550,7 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await); org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await); org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await); org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32)); org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await));
organizations_json.push(org); organizations_json.push(org);
} }
@@ -786,16 +790,16 @@ impl<'r> FromRequest<'r> for AdminToken {
if requested_page.is_empty() {
return Outcome::Forward(Status::Unauthorized);
} else {
-return Outcome::Failure((Status::Unauthorized, "Unauthorized"));
return Outcome::Error((Status::Unauthorized, "Unauthorized"));
}
}
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
-cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
error!("Invalid or expired admin JWT. IP: {}.", &ip.ip);
-return Outcome::Failure((Status::Unauthorized, "Session expired"));
return Outcome::Error((Status::Unauthorized, "Session expired"));
}
Outcome::Success(Self {
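The Outcome::Failure to Outcome::Error renames also follow Rocket 0.5, where request guards report failure through the renamed Error variant carrying the same (Status, error) payload. A minimal guard illustrating the new shape; the header name here is hypothetical:

use rocket::http::Status;
use rocket::request::{FromRequest, Outcome, Request};

struct RequireHeader;

#[rocket::async_trait]
impl<'r> FromRequest<'r> for RequireHeader {
    type Error = &'static str;

    async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        match req.headers().get_one("X-Example") {
            Some(_) => Outcome::Success(RequireHeader),
            // Rocket 0.5: was Outcome::Failure in the 0.5 release candidates.
            None => Outcome::Error((Status::BadRequest, "X-Example header is required")),
        }
    }
}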


@@ -6,12 +6,14 @@ use serde_json::Value;
use crate::{
api::{
core::log_user_event, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
-JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
JsonUpcase, Notify, PasswordOrOtpData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto,
db::{models::*, DbConn},
-mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
use rocket::{
@@ -279,8 +281,9 @@ async fn put_avatar(data: JsonUpcase<AvatarData>, headers: Headers, mut conn: Db
#[get("/users/<uuid>/public-key")] #[get("/users/<uuid>/public-key")]
async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult { async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(uuid, &mut conn).await { let user = match User::find_by_uuid(uuid, &mut conn).await {
Some(user) => user, Some(user) if user.public_key.is_some() => user,
None => err!("User doesn't exist"), Some(_) => err_code!("User has no public_key", Status::NotFound.code),
None => err_code!("User doesn't exist", Status::NotFound.code),
}; };
Ok(Json(json!({ Ok(Json(json!({
@@ -503,17 +506,15 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: D
#[post("/accounts/security-stamp", data = "<data>")] #[post("/accounts/security-stamp", data = "<data>")]
async fn post_sstamp( async fn post_sstamp(
data: JsonUpcase<PasswordData>, data: JsonUpcase<PasswordOrOtpData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
nt: Notify<'_>, nt: Notify<'_>,
) -> EmptyResult { ) -> EmptyResult {
let data: PasswordData = data.into_inner().data; let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user; let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) { data.validate(&user, true, &mut conn).await?;
err!("Invalid password")
}
Device::delete_all_by_user(&user.uuid, &mut conn).await?; Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp(); user.reset_security_stamp();
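PasswordOrOtpData replaces PasswordData on these protected endpoints so a client can verify itself with either the master password hash or a one-time password, and validate() encapsulates the check that used to be an inline check_valid_password call. A simplified sketch of the idea, explicitly not the exact vaultwarden implementation (the real OTP check goes through the stored two-factor state):

#[allow(non_snake_case)]
struct PasswordOrOtpData {
    MasterPasswordHash: Option<String>,
    Otp: Option<String>,
}

impl PasswordOrOtpData {
    // check_password stands in for User::check_valid_password and verify_otp
    // for the protected-action OTP lookup; both are assumptions for illustration.
    fn validate(
        &self,
        check_password: impl Fn(&str) -> bool,
        verify_otp: impl Fn(&str) -> bool,
    ) -> Result<(), &'static str> {
        match (&self.MasterPasswordHash, &self.Otp) {
            (Some(hash), _) if check_password(hash) => Ok(()),
            (Some(_), _) => Err("Invalid password"),
            (None, Some(otp)) if verify_otp(otp) => Ok(()),
            (None, Some(_)) => Err("Invalid OTP"),
            (None, None) => Err("No validation provided"),
        }
    }
}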
@@ -736,18 +737,16 @@ async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, mut
}
#[post("/accounts/delete", data = "<data>")]
-async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn).await
}
#[delete("/accounts", data = "<data>")]
-async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
-let data: PasswordData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
-if !user.check_valid_password(&data.MasterPasswordHash) {
-err!("Invalid password")
-}
data.validate(&user, true, &mut conn).await?;
user.delete(&mut conn).await
}
@@ -854,20 +853,13 @@ fn verify_password(data: JsonUpcase<SecretVerificationRequest>, headers: Headers
Ok(())
}
-async fn _api_key(
-data: JsonUpcase<SecretVerificationRequest>,
-rotate: bool,
-headers: Headers,
-mut conn: DbConn,
-) -> JsonResult {
async fn _api_key(data: JsonUpcase<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult {
use crate::util::format_date;
-let data: SecretVerificationRequest = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
-if !user.check_valid_password(&data.MasterPasswordHash) {
-err!("Invalid password")
-}
data.validate(&user, true, &mut conn).await?;
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
@@ -882,12 +874,12 @@ async fn _api_key(
}
#[post("/accounts/api-key", data = "<data>")]
-async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn).await
}
#[post("/accounts/rotate-api-key", data = "<data>")]
-async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn rotate_api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
@@ -921,26 +913,23 @@ impl<'r> FromRequest<'r> for KnownDevice {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
-return Outcome::Failure((
-Status::BadRequest,
-"X-Request-Email value failed to decode as base64url",
-));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
}
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
Err(_) => {
-return Outcome::Failure((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
}
}
} else {
-return Outcome::Failure((Status::BadRequest, "X-Request-Email value is required"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value is required"));
};
let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
uuid.to_string()
} else {
-return Outcome::Failure((Status::BadRequest, "X-Device-Identifier value is required"));
return Outcome::Error((Status::BadRequest, "X-Device-Identifier value is required"));
};
Outcome::Success(KnownDevice {
@@ -963,26 +952,33 @@ async fn post_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Hea
#[put("/devices/identifier/<uuid>/token", data = "<data>")] #[put("/devices/identifier/<uuid>/token", data = "<data>")]
async fn put_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn put_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let data = data.into_inner().data; let data = data.into_inner().data;
let token = data.PushToken; let token = data.PushToken;
let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await { let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await {
Some(device) => device, Some(device) => device,
None => err!(format!("Error: device {uuid} should be present before a token can be assigned")), None => err!(format!("Error: device {uuid} should be present before a token can be assigned")),
}; };
device.push_token = Some(token);
if device.push_uuid.is_none() { // if the device already has been registered
device.push_uuid = Some(uuid::Uuid::new_v4().to_string()); if device.is_registered() {
// check if the new token is the same as the registered token
if device.push_token.is_some() && device.push_token.unwrap() == token.clone() {
debug!("Device {} is already registered and token is the same", uuid);
return Ok(());
} else {
// Try to unregister already registered device
let _ = unregister_push_device(device.push_uuid).await;
}
// clear the push_uuid
device.push_uuid = None;
} }
device.push_token = Some(token);
if let Err(e) = device.save(&mut conn).await { if let Err(e) = device.save(&mut conn).await {
err!(format!("An error occurred while trying to save the device push token: {e}")); err!(format!("An error occurred while trying to save the device push token: {e}"));
} }
if let Err(e) = register_push_device(headers.user.uuid, device).await {
err!(format!("An error occurred while proceeding registration of a device: {e}")); register_push_device(&mut device, &mut conn).await?;
}
Ok(()) Ok(())
} }
@@ -999,7 +995,7 @@ async fn put_clear_device_token(uuid: &str, mut conn: DbConn) -> EmptyResult {
if let Some(device) = Device::find_by_uuid(uuid, &mut conn).await {
Device::clear_push_token_by_uuid(uuid, &mut conn).await?;
-unregister_push_device(device.uuid).await?;
unregister_push_device(device.push_uuid).await?;
}
Ok(())


@@ -1,6 +1,7 @@
use std::collections::{HashMap, HashSet};
use chrono::{NaiveDateTime, Utc};
use num_traits::ToPrimitive;
use rocket::fs::TempFile;
use rocket::serde::json::Json;
use rocket::{
@@ -10,7 +11,7 @@ use rocket::{
use serde_json::Value;
use crate::{
-api::{self, core::log_event, EmptyResult, JsonResult, JsonUpcase, Notify, PasswordData, UpdateType},
api::{self, core::log_event, EmptyResult, JsonResult, JsonUpcase, Notify, PasswordOrOtpData, UpdateType},
auth::Headers,
crypto,
db::{models::*, DbConn, DbPool},
@@ -510,7 +511,7 @@ pub async fn update_cipher_from_data(
event_type as i32,
&cipher.uuid,
org_uuid,
-headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@@ -791,7 +792,7 @@ async fn post_collections_admin(
EventType::CipherUpdatedCollections as i32,
&cipher.uuid,
&cipher.organization_uuid.unwrap(),
-headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@@ -849,7 +850,6 @@ async fn put_cipher_share_selected(
nt: Notify<'_>,
) -> EmptyResult {
let mut data: ShareSelectedCipherData = data.into_inner().data;
-let mut cipher_ids: Vec<String> = Vec::new();
if data.Ciphers.is_empty() {
err!("You must select at least one cipher.")
@@ -860,10 +860,9 @@
}
for cipher in data.Ciphers.iter() {
-match cipher.Id {
-Some(ref id) => cipher_ids.push(id.to_string()),
-None => err!("Request missing ids field"),
-};
if cipher.Id.is_none() {
err!("Request missing ids field")
}
}
while let Some(cipher) = data.Ciphers.pop() {
@@ -958,7 +957,7 @@ async fn get_attachment(uuid: &str, attachment_id: &str, headers: Headers, mut c
struct AttachmentRequestData {
Key: String,
FileName: String,
-FileSize: i32,
FileSize: i64,
AdminRequest: Option<bool>, // true when attaching from an org vault view
}
@@ -987,8 +986,11 @@ async fn post_attachment_v2(
err!("Cipher is not write accessible") err!("Cipher is not write accessible")
} }
let attachment_id = crypto::generate_attachment_id();
let data: AttachmentRequestData = data.into_inner().data; let data: AttachmentRequestData = data.into_inner().data;
if data.FileSize < 0 {
err!("Attachment size can't be negative")
}
let attachment_id = crypto::generate_attachment_id();
let attachment = let attachment =
Attachment::new(attachment_id.clone(), cipher.uuid.clone(), data.FileName, data.FileSize, Some(data.Key)); Attachment::new(attachment_id.clone(), cipher.uuid.clone(), data.FileName, data.FileSize, Some(data.Key));
attachment.save(&mut conn).await.expect("Error saving attachment"); attachment.save(&mut conn).await.expect("Error saving attachment");
@@ -1030,6 +1032,15 @@ async fn save_attachment(
mut conn: DbConn,
nt: Notify<'_>,
) -> Result<(Cipher, DbConn), crate::error::Error> {
let mut data = data.into_inner();
let Some(size) = data.data.len().to_i64() else {
err!("Attachment data size overflow");
};
if size < 0 {
err!("Attachment size can't be negative")
}
let cipher = match Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
@@ -1042,19 +1053,29 @@ async fn save_attachment(
// In the v2 API, the attachment record has already been created,
// so the size limit needs to be adjusted to account for that.
let size_adjust = match &attachment {
None => 0, // Legacy API
-Some(a) => i64::from(a.file_size), // v2 API
Some(a) => a.file_size, // v2 API
};
let size_limit = if let Some(ref user_uuid) = cipher.user_uuid {
match CONFIG.user_attachment_limit() {
Some(0) => err!("Attachments are disabled"),
Some(limit_kb) => {
-let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, &mut conn).await + size_adjust;
let already_used = Attachment::size_by_user(user_uuid, &mut conn).await;
let left = limit_kb
.checked_mul(1024)
.and_then(|l| l.checked_sub(already_used))
.and_then(|l| l.checked_add(size_adjust));
let Some(left) = left else {
err!("Attachment size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
-Some(left as u64)
Some(left)
}
None => None,
}
@@ -1062,11 +1083,21 @@ async fn save_attachment(
match CONFIG.org_attachment_limit() {
Some(0) => err!("Attachments are disabled"),
Some(limit_kb) => {
-let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, &mut conn).await + size_adjust;
let already_used = Attachment::size_by_org(org_uuid, &mut conn).await;
let left = limit_kb
.checked_mul(1024)
.and_then(|l| l.checked_sub(already_used))
.and_then(|l| l.checked_add(size_adjust));
let Some(left) = left else {
err!("Attachment size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
-Some(left as u64)
Some(left)
}
None => None,
}
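Both branches now compute the remaining quota with checked i64 arithmetic, so an extreme limit_kb setting errors out instead of silently wrapping. The rule in isolation, as an illustrative standalone function (runnable as-is):

// Overflow-safe "space left" computation matching the chain above.
fn space_left(limit_kb: i64, already_used: i64, size_adjust: i64) -> Option<i64> {
    limit_kb
        .checked_mul(1024)                         // KiB -> bytes
        .and_then(|l| l.checked_sub(already_used)) // subtract existing attachments
        .and_then(|l| l.checked_add(size_adjust))  // credit the v2 pre-created record
}

fn main() {
    assert_eq!(space_left(1, 512, 0), Some(512));
    // i64::MAX KiB * 1024 would overflow, so the caller errors out instead:
    assert_eq!(space_left(i64::MAX, 0, 0), None);
}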
@@ -1074,10 +1105,8 @@ async fn save_attachment(
err!("Cipher is neither owned by a user nor an organization"); err!("Cipher is neither owned by a user nor an organization");
}; };
let mut data = data.into_inner();
if let Some(size_limit) = size_limit { if let Some(size_limit) = size_limit {
if data.data.len() > size_limit { if size > size_limit {
err!("Attachment storage limit exceeded with this file"); err!("Attachment storage limit exceeded with this file");
} }
} }
@@ -1087,20 +1116,19 @@ async fn save_attachment(
None => crypto::generate_attachment_id(), // Legacy API
};
-let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_uuid);
-let file_path = folder_path.join(&file_id);
-tokio::fs::create_dir_all(&folder_path).await?;
-let size = data.data.len() as i32;
if let Some(attachment) = &mut attachment {
// v2 API
// Check the actual size against the size initially provided by
// the client. Upstream allows +/- 1 MiB deviation from this
// size, but it's not clear when or why this is needed.
-const LEEWAY: i32 = 1024 * 1024; // 1 MiB
const LEEWAY: i64 = 1024 * 1024; // 1 MiB
-let min_size = attachment.file_size - LEEWAY;
-let max_size = attachment.file_size + LEEWAY;
let Some(max_size) = attachment.file_size.checked_add(LEEWAY) else {
err!("Invalid attachment size max")
};
let Some(min_size) = attachment.file_size.checked_sub(LEEWAY) else {
err!("Invalid attachment size min")
};
if min_size <= size && size <= max_size {
if size != attachment.file_size {
@@ -1115,6 +1143,10 @@ async fn save_attachment(
}
} else {
// Legacy API
// SAFETY: This value is only stored in the database and is not used to access the file system.
// As a result, the conditions specified by Rocket [0] are met and this is safe to use.
// [0]: https://docs.rs/rocket/latest/rocket/fs/struct.FileName.html#-danger-
let encrypted_filename = data.data.raw_name().map(|s| s.dangerous_unsafe_unsanitized_raw().to_string());
if encrypted_filename.is_none() {
@@ -1124,10 +1156,14 @@
err!("No attachment key provided")
}
let attachment =
-Attachment::new(file_id, String::from(cipher_uuid), encrypted_filename.unwrap(), size, data.key);
Attachment::new(file_id.clone(), String::from(cipher_uuid), encrypted_filename.unwrap(), size, data.key);
attachment.save(&mut conn).await.expect("Error saving attachment");
}
let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
@@ -1147,7 +1183,7 @@ async fn save_attachment(
EventType::CipherAttachmentCreated as i32,
&cipher.uuid,
org_uuid,
-headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@@ -1457,19 +1493,15 @@ struct OrganizationId {
#[post("/ciphers/purge?<organization..>", data = "<data>")] #[post("/ciphers/purge?<organization..>", data = "<data>")]
async fn delete_all( async fn delete_all(
organization: Option<OrganizationId>, organization: Option<OrganizationId>,
data: JsonUpcase<PasswordData>, data: JsonUpcase<PasswordOrOtpData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
nt: Notify<'_>, nt: Notify<'_>,
) -> EmptyResult { ) -> EmptyResult {
let data: PasswordData = data.into_inner().data; let data: PasswordOrOtpData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let mut user = headers.user; let mut user = headers.user;
if !user.check_valid_password(&password_hash) { data.validate(&user, true, &mut conn).await?;
err!("Invalid password")
}
match organization { match organization {
Some(org_data) => { Some(org_data) => {
@@ -1485,7 +1517,7 @@ async fn delete_all(
EventType::OrganizationPurgedVault as i32,
&org_data.org_id,
&org_data.org_id,
-user.uuid,
&user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@@ -1566,16 +1598,8 @@ async fn _delete_cipher_by_uuid(
false => EventType::CipherDeleted as i32,
};
-log_event(
-event_type,
-&cipher.uuid,
-&org_uuid,
-headers.user.uuid.clone(),
-headers.device.atype,
-&headers.ip.ip,
-conn,
-)
-.await;
log_event(event_type, &cipher.uuid, &org_uuid, &headers.user.uuid, headers.device.atype, &headers.ip.ip, conn)
.await;
}
Ok(())
@@ -1635,7 +1659,7 @@ async fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &mut DbCon
EventType::CipherRestored as i32,
&cipher.uuid.clone(),
org_uuid,
-headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@@ -1719,7 +1743,7 @@ async fn _delete_cipher_attachment_by_id(
        EventType::CipherAttachmentDeleted as i32,
        &cipher.uuid,
        &org_uuid,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -1802,15 +1826,22 @@ impl CipherSyncData {
        .collect();

    // Generate a HashMap with the collections_uuid as key and the CollectionGroup record
-    let user_collections_groups: HashMap<String, CollectionGroup> = CollectionGroup::find_by_user(user_uuid, conn)
-        .await
-        .into_iter()
-        .map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
-        .collect();
+    let user_collections_groups: HashMap<String, CollectionGroup> = if CONFIG.org_groups_enabled() {
+        CollectionGroup::find_by_user(user_uuid, conn)
+            .await
+            .into_iter()
+            .map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
+            .collect()
+    } else {
+        HashMap::new()
+    };

    // Get all organizations that the user has full access to via group assignment
-    let user_group_full_access_for_organizations: HashSet<String> =
-        Group::gather_user_organizations_full_access(user_uuid, conn).await.into_iter().collect();
+    let user_group_full_access_for_organizations: HashSet<String> = if CONFIG.org_groups_enabled() {
+        Group::gather_user_organizations_full_access(user_uuid, conn).await.into_iter().collect()
+    } else {
+        HashSet::new()
+    };

    Self {
        cipher_attachments,

View File

@@ -5,11 +5,13 @@ use serde_json::Value;
use crate::{
    api::{
        core::{CipherSyncData, CipherSyncType},
-        EmptyResult, JsonResult, JsonUpcase, NumberOrString,
+        EmptyResult, JsonResult, JsonUpcase,
    },
    auth::{decode_emergency_access_invite, Headers},
    db::{models::*, DbConn, DbPool},
-    mail, CONFIG,
+    mail,
+    util::NumberOrString,
+    CONFIG,
};

pub fn routes() -> Vec<Route> {
@@ -18,6 +20,7 @@ pub fn routes() -> Vec<Route> {
        get_grantees,
        get_emergency_access,
        put_emergency_access,
+        post_emergency_access,
        delete_emergency_access,
        post_delete_emergency_access,
        send_invite,
@@ -37,42 +40,59 @@ pub fn routes() -> Vec<Route> {
// region get

#[get("/emergency-access/trusted")]
-async fn get_contacts(headers: Headers, mut conn: DbConn) -> JsonResult {
-    check_emergency_access_allowed()?;
+async fn get_contacts(headers: Headers, mut conn: DbConn) -> Json<Value> {
+    if !CONFIG.emergency_access_allowed() {
+        return Json(json!({
+            "Data": [{
+                "Id": "",
+                "Status": 2,
+                "Type": 0,
+                "WaitTimeDays": 0,
+                "GranteeId": "",
+                "Email": "",
+                "Name": "NOTE: Emergency Access is disabled!",
+                "Object": "emergencyAccessGranteeDetails",
+            }],
+            "Object": "list",
+            "ContinuationToken": null
+        }));
+    }

    let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
    let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
    for ea in emergency_access_list {
        emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
    }

-    Ok(Json(json!({
+    Json(json!({
        "Data": emergency_access_list_json,
        "Object": "list",
        "ContinuationToken": null
-    })))
+    }))
}
#[get("/emergency-access/granted")] #[get("/emergency-access/granted")]
async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult { async fn get_grantees(headers: Headers, mut conn: DbConn) -> Json<Value> {
check_emergency_access_allowed()?; let emergency_access_list = if CONFIG.emergency_access_allowed() {
EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await; } else {
Vec::new()
};
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len()); let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list { for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await); emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await);
} }
Ok(Json(json!({ Json(json!({
"Data": emergency_access_list_json, "Data": emergency_access_list_json,
"Object": "list", "Object": "list",
"ContinuationToken": null "ContinuationToken": null
}))) }))
} }
#[get("/emergency-access/<emer_id>")] #[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult { async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)), Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
@@ -103,7 +123,7 @@ async fn post_emergency_access(
    data: JsonUpcase<EmergencyAccessUpdateData>,
    mut conn: DbConn,
) -> JsonResult {
-    check_emergency_access_allowed()?;
+    check_emergency_access_enabled()?;

    let data: EmergencyAccessUpdateData = data.into_inner().data;
@@ -133,7 +153,7 @@ async fn post_emergency_access(
#[delete("/emergency-access/<emer_id>")] #[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let grantor_user = headers.user; let grantor_user = headers.user;
@@ -169,7 +189,7 @@ struct EmergencyAccessInviteData {
#[post("/emergency-access/invite", data = "<data>")] #[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let data: EmergencyAccessInviteData = data.into_inner().data; let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase(); let email = data.Email.to_lowercase();
@@ -252,7 +272,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
#[post("/emergency-access/<emer_id>/reinvite")] #[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer, Some(emer) => emer,
@@ -312,7 +332,7 @@ struct AcceptData {
#[post("/emergency-access/<emer_id>/accept", data = "<data>")] #[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let data: AcceptData = data.into_inner().data; let data: AcceptData = data.into_inner().data;
let token = &data.Token; let token = &data.Token;
@@ -395,7 +415,7 @@ async fn confirm_emergency_access(
    headers: Headers,
    mut conn: DbConn,
) -> JsonResult {
-    check_emergency_access_allowed()?;
+    check_emergency_access_enabled()?;

    let confirming_user = headers.user;
    let data: ConfirmData = data.into_inner().data;
@@ -444,7 +464,7 @@ async fn confirm_emergency_access(
#[post("/emergency-access/<emer_id>/initiate")] #[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let initiating_user = headers.user; let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@@ -484,7 +504,7 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
#[post("/emergency-access/<emer_id>/approve")] #[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer, Some(emer) => emer,
@@ -522,7 +542,7 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
#[post("/emergency-access/<emer_id>/reject")] #[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer, Some(emer) => emer,
@@ -565,7 +585,7 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
#[post("/emergency-access/<emer_id>/view")] #[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer, Some(emer) => emer,
@@ -602,7 +622,7 @@ async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn
#[post("/emergency-access/<emer_id>/takeover")] #[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?; check_emergency_access_enabled()?;
let requesting_user = headers.user; let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await { let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@@ -645,7 +665,7 @@ async fn password_emergency_access(
    headers: Headers,
    mut conn: DbConn,
) -> EmptyResult {
-    check_emergency_access_allowed()?;
+    check_emergency_access_enabled()?;

    let data: EmergencyAccessPasswordData = data.into_inner().data;
    let new_master_password_hash = &data.NewMasterPasswordHash;
@@ -722,9 +742,9 @@ fn is_valid_request(
        && emergency_access.atype == requested_access_type as i32
}

-fn check_emergency_access_allowed() -> EmptyResult {
+fn check_emergency_access_enabled() -> EmptyResult {
    if !CONFIG.emergency_access_allowed() {
-        err!("Emergency access is not allowed.")
+        err!("Emergency access is not enabled.")
    }
    Ok(())
}

View File

@@ -263,7 +263,7 @@ pub async fn log_event(
    event_type: i32,
    source_uuid: &str,
    org_uuid: &str,
-    act_user_uuid: String,
+    act_user_uuid: &str,
    device_type: i32,
    ip: &IpAddr,
    conn: &mut DbConn,
@@ -271,7 +271,7 @@ pub async fn log_event(
    if !CONFIG.org_events_enabled() {
        return;
    }

-    _log_event(event_type, source_uuid, org_uuid, &act_user_uuid, device_type, None, ip, conn).await;
+    _log_event(event_type, source_uuid, org_uuid, act_user_uuid, device_type, None, ip, conn).await;
}

#[allow(clippy::too_many_arguments)]

View File

@@ -13,7 +13,6 @@ pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncT
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
-pub use two_factor::send_incomplete_2fa_notifications;

pub fn routes() -> Vec<Route> {
    let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
@@ -47,15 +46,14 @@ pub fn events_routes() -> Vec<Route> {
//
// Move this somewhere else
//
-use rocket::{serde::json::Json, Catcher, Route};
-use serde_json::Value;
+use rocket::{serde::json::Json, serde::json::Value, Catcher, Route};

use crate::{
    api::{JsonResult, JsonUpcase, Notify, UpdateType},
    auth::Headers,
    db::DbConn,
    error::Error,
-    util::get_reqwest_client,
+    util::{get_reqwest_client, parse_experimental_client_feature_flags},
};

#[derive(Serialize, Deserialize, Debug)]
@@ -193,6 +191,7 @@ fn version() -> Json<&'static str> {
#[get("/config")] #[get("/config")]
fn config() -> Json<Value> { fn config() -> Json<Value> {
let domain = crate::CONFIG.domain(); let domain = crate::CONFIG.domain();
let feature_states = parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
Json(json!({ Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns // Note: The clients use this version to handle backwards compatibility concerns
// This means they expect a version that closely matches the Bitwarden server version // This means they expect a version that closely matches the Bitwarden server version
@@ -203,7 +202,8 @@ fn config() -> Json<Value> {
"gitHash": option_env!("GIT_REV"), "gitHash": option_env!("GIT_REV"),
"server": { "server": {
"name": "Vaultwarden", "name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden" "url": "https://github.com/dani-garcia/vaultwarden",
"version": crate::VERSION
}, },
"environment": { "environment": {
"vault": domain, "vault": domain,
@@ -212,13 +212,7 @@ fn config() -> Json<Value> {
"notifications": format!("{domain}/notifications"), "notifications": format!("{domain}/notifications"),
"sso": "", "sso": "",
}, },
"featureStates": { "featureStates": feature_states,
// Any feature flags that we want the clients to use
// Can check the enabled ones at:
// https://vault.bitwarden.com/api/config
"autofill-v2": true,
"fido2-vault-credentials": true
},
"object": "config", "object": "config",
})) }))
} }
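The hard-coded `featureStates` map is replaced by the output of `parse_experimental_client_feature_flags`. A minimal sketch of what such a parser could look like (the real helper lives in `util.rs` and may additionally reject unknown flag names; that detail isn't shown in this diff, so the body below is an assumption):

```rust
use std::collections::HashMap;

// Hypothetical sketch: split a comma-separated flag list such as
// "autofill-v2,fido2-vault-credentials" into the map that /config
// serializes as `featureStates`, with every listed flag enabled.
pub fn parse_experimental_client_feature_flags(flags: &str) -> HashMap<String, bool> {
    flags
        .split(',')
        .map(|f| f.trim().to_lowercase())
        .filter(|f| !f.is_empty())
        .map(|f| (f, true))
        .collect()
}
```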

View File

@@ -5,14 +5,14 @@ use serde_json::Value;
use crate::{
    api::{
-        core::{log_event, CipherSyncData, CipherSyncType},
-        EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, JsonVec, Notify, NumberOrString, PasswordData, UpdateType,
+        core::{log_event, two_factor, CipherSyncData, CipherSyncType},
+        EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, JsonVec, Notify, PasswordOrOtpData, UpdateType,
    },
    auth::{decode_invite, AdminHeaders, Headers, ManagerHeaders, ManagerHeadersLoose, OwnerHeaders},
    db::{models::*, DbConn},
    error::Error,
    mail,
-    util::convert_json_key_lcase_first,
+    util::{convert_json_key_lcase_first, NumberOrString},
    CONFIG,
};
@@ -186,16 +186,13 @@ async fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, mut co
#[delete("/organizations/<org_id>", data = "<data>")] #[delete("/organizations/<org_id>", data = "<data>")]
async fn delete_organization( async fn delete_organization(
org_id: &str, org_id: &str,
data: JsonUpcase<PasswordData>, data: JsonUpcase<PasswordOrOtpData>,
headers: OwnerHeaders, headers: OwnerHeaders,
mut conn: DbConn, mut conn: DbConn,
) -> EmptyResult { ) -> EmptyResult {
let data: PasswordData = data.into_inner().data; let data: PasswordOrOtpData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
if !headers.user.check_valid_password(&password_hash) { data.validate(&headers.user, true, &mut conn).await?;
err!("Invalid password")
}
match Organization::find_by_uuid(org_id, &mut conn).await { match Organization::find_by_uuid(org_id, &mut conn).await {
None => err!("Organization not found"), None => err!("Organization not found"),
@@ -206,7 +203,7 @@ async fn delete_organization(
#[post("/organizations/<org_id>/delete", data = "<data>")] #[post("/organizations/<org_id>/delete", data = "<data>")]
async fn post_delete_organization( async fn post_delete_organization(
org_id: &str, org_id: &str,
data: JsonUpcase<PasswordData>, data: JsonUpcase<PasswordOrOtpData>,
headers: OwnerHeaders, headers: OwnerHeaders,
conn: DbConn, conn: DbConn,
) -> EmptyResult { ) -> EmptyResult {
@@ -228,7 +225,7 @@ async fn leave_organization(org_id: &str, headers: Headers, mut conn: DbConn) ->
        EventType::OrganizationUserRemoved as i32,
        &user_org.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -281,7 +278,7 @@ async fn post_organization(
        EventType::OrganizationUpdated as i32,
        org_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -296,7 +293,7 @@ async fn post_organization(
async fn get_user_collections(headers: Headers, mut conn: DbConn) -> Json<Value> {
    Json(json!({
        "Data":
-            Collection::find_by_user_uuid(headers.user.uuid.clone(), &mut conn).await
+            Collection::find_by_user_uuid(headers.user.uuid, &mut conn).await
            .iter()
            .map(Collection::to_json)
            .collect::<Value>(),
@@ -398,7 +395,7 @@ async fn post_organization_collections(
        EventType::CollectionCreated as i32,
        &collection.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -479,7 +476,7 @@ async fn post_organization_collection_update(
        EventType::CollectionUpdated as i32,
        &collection.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -567,7 +564,7 @@ async fn _delete_organization_collection(
        EventType::CollectionDeleted as i32,
        &collection.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -613,7 +610,6 @@ async fn post_organization_collection_delete(
#[allow(non_snake_case)]
struct BulkCollectionIds {
    Ids: Vec<String>,
-    OrganizationId: String,
}

#[delete("/organizations/<org_id>/collections", data = "<data>")]
@@ -624,9 +620,6 @@ async fn bulk_delete_organization_collections(
    mut conn: DbConn,
) -> EmptyResult {
    let data: BulkCollectionIds = data.into_inner().data;
-    if org_id != data.OrganizationId {
-        err!("OrganizationId mismatch");
-    }

    let collections = data.Ids;
@@ -948,7 +941,7 @@ async fn send_invite(
        EventType::OrganizationUserInvited as i32,
        &new_user.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1242,7 +1235,7 @@ async fn _confirm_invite(
        EventType::OrganizationUserConfirmed as i32,
        &user_to_confirm.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -1404,7 +1397,7 @@ async fn edit_user(
        EventType::OrganizationUserUpdated as i32,
        &user_to_edit.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1496,7 +1489,7 @@ async fn _delete_user(
        EventType::OrganizationUserRemoved as i32,
        &user_to_delete.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -1699,38 +1692,16 @@ async fn put_policy(
None => err!("Invalid or unsupported policy type"), None => err!("Invalid or unsupported policy type"),
}; };
// When enabling the TwoFactorAuthentication policy, remove this org's members that do have 2FA // When enabling the TwoFactorAuthentication policy, revoke all members that do not have 2FA
if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled { if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled {
for member in UserOrganization::find_by_org(org_id, &mut conn).await.into_iter() { two_factor::enforce_2fa_policy_for_org(
let user_twofactor_disabled = TwoFactor::find_by_user(&member.user_uuid, &mut conn).await.is_empty(); org_id,
&headers.user.uuid,
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org headers.device.atype,
// Invited users still need to accept the invite and will get an error when they try to accept the invite. &headers.ip.ip,
if user_twofactor_disabled &mut conn,
&& member.atype < UserOrgType::Admin )
&& member.status != UserOrgStatus::Invited as i32 .await?;
{
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, &mut conn).await.unwrap();
let user = User::find_by_uuid(&member.user_uuid, &mut conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
log_event(
EventType::OrganizationUserRemoved as i32,
&member.uuid,
org_id,
headers.user.uuid.clone(),
headers.device.atype,
&headers.ip.ip,
&mut conn,
)
.await;
member.delete(&mut conn).await?;
}
}
} }
// When enabling the SingleOrg policy, remove this org's members that are members of other orgs // When enabling the SingleOrg policy, remove this org's members that are members of other orgs
@@ -1755,7 +1726,7 @@ async fn put_policy(
        EventType::OrganizationUserRemoved as i32,
        &member.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1780,7 +1751,7 @@ async fn put_policy(
        EventType::PolicyUpdated as i32,
        &policy.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1897,7 +1868,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
        EventType::OrganizationUserRemoved as i32,
        &user_org.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1927,7 +1898,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
        EventType::OrganizationUserInvited as i32,
        &new_org_user.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -1963,7 +1934,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
        EventType::OrganizationUserRemoved as i32,
        &user_org.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2076,7 +2047,7 @@ async fn _revoke_organization_user(
        EventType::OrganizationUserRevoked as i32,
        &user_org.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -2195,7 +2166,7 @@ async fn _restore_organization_user(
        EventType::OrganizationUserRestored as i32,
        &user_org.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -2324,7 +2295,7 @@ async fn post_groups(
        EventType::GroupCreated as i32,
        &group.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2361,7 +2332,7 @@ async fn put_group(
        EventType::GroupUpdated as i32,
        &updated_group.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2394,7 +2365,7 @@ async fn add_update_group(
        EventType::OrganizationUserUpdatedGroups as i32,
        &assigned_user_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -2449,7 +2420,7 @@ async fn _delete_group(org_id: &str, group_id: &str, headers: &AdminHeaders, con
        EventType::GroupDeleted as i32,
        &group.uuid,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        conn,
@@ -2540,7 +2511,7 @@ async fn put_group_users(
        EventType::OrganizationUserUpdatedGroups as i32,
        &assigned_user_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2618,7 +2589,7 @@ async fn put_user_groups(
        EventType::OrganizationUserUpdatedGroups as i32,
        org_user_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2673,7 +2644,7 @@ async fn delete_group_user(
        EventType::OrganizationUserUpdatedGroups as i32,
        org_user_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2762,7 +2733,7 @@ async fn put_reset_password(
        EventType::OrganizationUserAdminResetPassword as i32,
        org_user_id,
        org_id,
-        headers.user.uuid.clone(),
+        &headers.user.uuid,
        headers.device.atype,
        &headers.ip.ip,
        &mut conn,
@@ -2889,8 +2860,7 @@ async fn put_reset_password_enrollment(
        EventType::OrganizationUserResetPasswordWithdraw as i32
    };

-    log_event(log_id, org_user_id, org_id, headers.user.uuid.clone(), headers.device.atype, &headers.ip.ip, &mut conn)
-        .await;
+    log_event(log_id, org_user_id, org_id, &headers.user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;

    Ok(())
}
@@ -2945,18 +2915,16 @@ async fn get_org_export(org_id: &str, headers: AdminHeaders, mut conn: DbConn) -
async fn _api_key(
    org_id: &str,
-    data: JsonUpcase<PasswordData>,
+    data: JsonUpcase<PasswordOrOtpData>,
    rotate: bool,
    headers: AdminHeaders,
-    conn: DbConn,
+    mut conn: DbConn,
) -> JsonResult {
-    let data: PasswordData = data.into_inner().data;
+    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

-    // Validate the admin users password
-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password")
-    }
+    // Validate the admin users password/otp
+    data.validate(&user, true, &mut conn).await?;

    let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_id, &conn).await {
        Some(mut org_api_key) => {
@@ -2983,14 +2951,14 @@ async fn _api_key(
}

#[post("/organizations/<org_id>/api-key", data = "<data>")]
-async fn api_key(org_id: &str, data: JsonUpcase<PasswordData>, headers: AdminHeaders, conn: DbConn) -> JsonResult {
+async fn api_key(org_id: &str, data: JsonUpcase<PasswordOrOtpData>, headers: AdminHeaders, conn: DbConn) -> JsonResult {
    _api_key(org_id, data, false, headers, conn).await
}

#[post("/organizations/<org_id>/rotate-api-key", data = "<data>")]
async fn rotate_api_key(
    org_id: &str,
-    data: JsonUpcase<PasswordData>,
+    data: JsonUpcase<PasswordOrOtpData>,
    headers: AdminHeaders,
    conn: DbConn,
) -> JsonResult {

View File

@@ -1,6 +1,7 @@
use std::path::Path;

use chrono::{DateTime, Duration, Utc};
+use num_traits::ToPrimitive;
use rocket::form::Form;
use rocket::fs::NamedFile;
use rocket::fs::TempFile;
@@ -8,17 +9,17 @@ use rocket::serde::json::Json;
use serde_json::Value;

use crate::{
-    api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, UpdateType},
+    api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
    auth::{ClientIp, Headers, Host},
    db::{models::*, DbConn, DbPool},
-    util::SafeString,
+    util::{NumberOrString, SafeString},
    CONFIG,
};

const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";

// The max file size allowed by Bitwarden clients and add an extra 5% to avoid issues
-const SIZE_525_MB: u64 = 550_502_400;
+const SIZE_525_MB: i64 = 550_502_400;
pub fn routes() -> Vec<rocket::Route> {
    routes![
@@ -216,30 +217,41 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
    } = data.into_inner();
    let model = model.into_inner().data;

+    let Some(size) = data.len().to_i64() else {
+        err!("Invalid send size");
+    };
+    if size < 0 {
+        err!("Send size can't be negative")
+    }

    enforce_disable_hide_email_policy(&model, &headers, &mut conn).await?;

-    let size_limit = match CONFIG.user_attachment_limit() {
+    let size_limit = match CONFIG.user_send_limit() {
        Some(0) => err!("File uploads are disabled"),
        Some(limit_kb) => {
-            let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
+            let Some(already_used) = Send::size_by_user(&headers.user.uuid, &mut conn).await else {
+                err!("Existing sends overflow")
+            };
+            let Some(left) = limit_kb.checked_mul(1024).and_then(|l| l.checked_sub(already_used)) else {
+                err!("Send size overflow");
+            };
            if left <= 0 {
-                err!("Attachment storage limit reached! Delete some attachments to free up space")
+                err!("Send storage limit reached! Delete some sends to free up space")
            }
-            std::cmp::Ord::max(left as u64, SIZE_525_MB)
+            i64::clamp(left, 0, SIZE_525_MB)
        }
        None => SIZE_525_MB,
    };

+    if size > size_limit {
+        err!("Send storage limit exceeded with this file");
+    }

    let mut send = create_send(model, headers.user.uuid)?;
    if send.atype != SendType::File as i32 {
        err!("Send content is not a file");
    }

-    let size = data.len();
-    if size > size_limit {
-        err!("Attachment storage limit exceeded with this file");
-    }

    let file_id = crate::crypto::generate_send_id();
    let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
    let file_path = folder_path.join(&file_id);
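The move from unchecked `(limit_kb * 1024) - used` to `checked_mul`/`checked_sub` is the interesting part of this hunk: a very large configured limit can no longer overflow into a bogus quota. A standalone sketch of the same pattern (values are made up for illustration):

```rust
// checked_mul/checked_sub return None on overflow instead of wrapping
// (or panicking in debug builds), so a huge configured limit cannot
// silently produce a wrong remaining-quota value.
fn remaining_quota(limit_kb: i64, already_used: i64) -> Option<i64> {
    limit_kb.checked_mul(1024)?.checked_sub(already_used)
}

fn main() {
    assert_eq!(remaining_quota(1_000, 512_000), Some(512_000));
    // i64::MAX kilobytes overflows when converted to bytes:
    assert_eq!(remaining_quota(i64::MAX, 0), None);
}
```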
@@ -253,7 +265,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
    if let Some(o) = data_value.as_object_mut() {
        o.insert(String::from("Id"), Value::String(file_id));
        o.insert(String::from("Size"), Value::Number(size.into()));
-        o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size as i32)));
+        o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
    }
    send.data = serde_json::to_string(&data_value)?;
@@ -285,24 +297,32 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
    enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;

    let file_length = match &data.FileLength {
-        Some(m) => Some(m.into_i32()?),
-        _ => None,
+        Some(m) => m.into_i64()?,
+        _ => err!("Invalid send length"),
    };
+    if file_length < 0 {
+        err!("Send size can't be negative")
+    }

-    let size_limit = match CONFIG.user_attachment_limit() {
+    let size_limit = match CONFIG.user_send_limit() {
        Some(0) => err!("File uploads are disabled"),
        Some(limit_kb) => {
-            let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
+            let Some(already_used) = Send::size_by_user(&headers.user.uuid, &mut conn).await else {
+                err!("Existing sends overflow")
+            };
+            let Some(left) = limit_kb.checked_mul(1024).and_then(|l| l.checked_sub(already_used)) else {
+                err!("Send size overflow");
+            };
            if left <= 0 {
-                err!("Attachment storage limit reached! Delete some attachments to free up space")
+                err!("Send storage limit reached! Delete some sends to free up space")
            }
-            std::cmp::Ord::max(left as u64, SIZE_525_MB)
+            i64::clamp(left, 0, SIZE_525_MB)
        }
        None => SIZE_525_MB,
    };

-    if file_length.is_some() && file_length.unwrap() as u64 > size_limit {
-        err!("Attachment storage limit exceeded with this file");
+    if file_length > size_limit {
+        err!("Send storage limit exceeded with this file");
    }

    let mut send = create_send(data, headers.user.uuid)?;
@@ -312,8 +332,8 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
    let mut data_value: Value = serde_json::from_str(&send.data)?;
    if let Some(o) = data_value.as_object_mut() {
        o.insert(String::from("Id"), Value::String(file_id.clone()));
-        o.insert(String::from("Size"), Value::Number(file_length.unwrap().into()));
-        o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length.unwrap())));
+        o.insert(String::from("Size"), Value::Number(file_length.into()));
+        o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length)));
    }
    send.data = serde_json::to_string(&data_value)?;
    send.save(&mut conn).await?;
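`FileLength` arrives as a `NumberOrString` and is now converted with `into_i64` instead of `into_i32`, matching the `i64` size arithmetic above. A sketch of the general shape of such a type (the real one lives in `util.rs`; the error type and exact derives here are assumptions):

```rust
use serde::Deserialize;

// Accepts either a JSON number or a numeric string, since Bitwarden
// clients send both representations for the same field.
#[derive(Deserialize)]
#[serde(untagged)]
pub enum NumberOrString {
    Number(i64),
    String(String),
}

impl NumberOrString {
    pub fn into_i64(self) -> Result<i64, String> {
        match self {
            NumberOrString::Number(n) => Ok(n),
            NumberOrString::String(s) => s.parse::<i64>().map_err(|e| e.to_string()),
        }
    }
}
```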

View File

@@ -5,7 +5,7 @@ use rocket::Route;
use crate::{
    api::{
        core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase,
-        NumberOrString, PasswordData,
+        PasswordOrOtpData,
    },
    auth::{ClientIp, Headers},
    crypto,
@@ -13,6 +13,7 @@ use crate::{
        models::{EventType, TwoFactor, TwoFactorType},
        DbConn,
    },
+    util::NumberOrString,
};

pub use crate::config::CONFIG;
@@ -22,13 +23,11 @@ pub fn routes() -> Vec<Route> {
}

#[post("/two-factor/get-authenticator", data = "<data>")]
-async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
-    let data: PasswordData = data.into_inner().data;
+async fn generate_authenticator(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    data.validate(&user, false, &mut conn).await?;

    let type_ = TwoFactorType::Authenticator as i32;
    let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await;
@@ -48,9 +47,10 @@ async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableAuthenticatorData {
-    MasterPasswordHash: String,
    Key: String,
    Token: NumberOrString,
+    MasterPasswordHash: Option<String>,
+    Otp: Option<String>,
}

#[post("/two-factor/authenticator", data = "<data>")]
@@ -60,15 +60,17 @@ async fn activate_authenticator(
    mut conn: DbConn,
) -> JsonResult {
    let data: EnableAuthenticatorData = data.into_inner().data;
-    let password_hash = data.MasterPasswordHash;
    let key = data.Key;
    let token = data.Token.into_string();

    let mut user = headers.user;

-    if !user.check_valid_password(&password_hash) {
-        err!("Invalid password");
-    }
+    PasswordOrOtpData {
+        MasterPasswordHash: data.MasterPasswordHash,
+        Otp: data.Otp,
+    }
+    .validate(&user, true, &mut conn)
+    .await?;

    // Validate key as base32 and 20 bytes length
    let decoded_key: Vec<u8> = match BASE32.decode(key.as_bytes()) {

View File

@@ -6,7 +6,7 @@ use rocket::Route;
use crate::{
    api::{
        core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
-        PasswordData,
+        PasswordOrOtpData,
    },
    auth::Headers,
    crypto,
@@ -92,14 +92,13 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";

#[post("/two-factor/get-duo", data = "<data>")]
-async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
-    let data: PasswordData = data.into_inner().data;
+async fn get_duo(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let data: PasswordOrOtpData = data.into_inner().data;
+    let user = headers.user;

-    if !headers.user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    data.validate(&user, false, &mut conn).await?;

-    let data = get_user_duo_data(&headers.user.uuid, &mut conn).await;
+    let data = get_user_duo_data(&user.uuid, &mut conn).await;

    let (enabled, data) = match data {
        DuoStatus::Global(_) => (true, Some(DuoData::secret())),
@@ -129,10 +128,11 @@ async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbC
#[derive(Deserialize)]
#[allow(non_snake_case, dead_code)]
struct EnableDuoData {
-    MasterPasswordHash: String,
    Host: String,
    SecretKey: String,
    IntegrationKey: String,
+    MasterPasswordHash: Option<String>,
+    Otp: Option<String>,
}

impl From<EnableDuoData> for DuoData {
@@ -159,9 +159,12 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut con
    let data: EnableDuoData = data.into_inner().data;
    let mut user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    PasswordOrOtpData {
+        MasterPasswordHash: data.MasterPasswordHash.clone(),
+        Otp: data.Otp.clone(),
+    }
+    .validate(&user, true, &mut conn)
+    .await?;

    let (data, data_str) = if check_duo_fields_custom(&data) {
        let data_req: DuoData = data.into();

View File

@@ -5,7 +5,7 @@ use rocket::Route;
use crate::{
    api::{
        core::{log_user_event, two_factor::_generate_recover_code},
-        EmptyResult, JsonResult, JsonUpcase, PasswordData,
+        EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
    },
    auth::Headers,
    crypto,
@@ -76,13 +76,11 @@ pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
/// When user clicks on Manage email 2FA show the user the related information
#[post("/two-factor/get-email", data = "<data>")]
-async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
-    let data: PasswordData = data.into_inner().data;
+async fn get_email(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    data.validate(&user, false, &mut conn).await?;

    let (enabled, mfa_email) =
        match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &mut conn).await {
@@ -105,7 +103,8 @@ async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: D
struct SendEmailData {
    /// Email where 2FA codes will be sent to, can be different than user email account.
    Email: String,
-    MasterPasswordHash: String,
+    MasterPasswordHash: Option<String>,
+    Otp: Option<String>,
}

/// Send a verification email to the specified email address to check whether it exists/belongs to user.
@@ -114,9 +113,12 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
    let data: SendEmailData = data.into_inner().data;
    let user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    PasswordOrOtpData {
+        MasterPasswordHash: data.MasterPasswordHash,
+        Otp: data.Otp,
+    }
+    .validate(&user, false, &mut conn)
+    .await?;

    if !CONFIG._enable_email_2fa() {
        err!("Email 2FA is disabled")
@@ -144,8 +146,9 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
#[allow(non_snake_case)]
struct EmailData {
    Email: String,
-    MasterPasswordHash: String,
    Token: String,
+    MasterPasswordHash: Option<String>,
+    Otp: Option<String>,
}

/// Verify email belongs to user and can be used for 2FA email codes.
@@ -154,9 +157,13 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn)
    let data: EmailData = data.into_inner().data;
    let mut user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    // This is the last step in the verification process, delete the otp directly afterwards
+    PasswordOrOtpData {
+        MasterPasswordHash: data.MasterPasswordHash,
+        Otp: data.Otp,
+    }
+    .validate(&user, true, &mut conn)
+    .await?;

    let type_ = TwoFactorType::EmailVerificationChallenge as i32;
    let mut twofactor =

View File

@@ -5,16 +5,22 @@ use rocket::Route;
use serde_json::Value;

use crate::{
-    api::{core::log_user_event, JsonResult, JsonUpcase, NumberOrString, PasswordData},
+    api::{
+        core::{log_event, log_user_event},
+        EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
+    },
    auth::{ClientHeaders, Headers},
    crypto,
    db::{models::*, DbConn, DbPool},
-    mail, CONFIG,
+    mail,
+    util::NumberOrString,
+    CONFIG,
};

pub mod authenticator;
pub mod duo;
pub mod email;
+pub mod protected_actions;
pub mod webauthn;
pub mod yubikey;
@@ -33,6 +39,7 @@ pub fn routes() -> Vec<Route> {
    routes.append(&mut email::routes());
    routes.append(&mut webauthn::routes());
    routes.append(&mut yubikey::routes());
+    routes.append(&mut protected_actions::routes());

    routes
}
@@ -50,13 +57,11 @@ async fn get_twofactor(headers: Headers, mut conn: DbConn) -> Json<Value> {
}

#[post("/two-factor/get-recover", data = "<data>")]
-fn get_recover(data: JsonUpcase<PasswordData>, headers: Headers) -> JsonResult {
-    let data: PasswordData = data.into_inner().data;
+async fn get_recover(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
+    data.validate(&user, true, &mut conn).await?;

    Ok(Json(json!({
        "Code": user.totp_recover,
@@ -96,6 +101,7 @@ async fn recover(data: JsonUpcase<RecoverTwoFactor>, client_headers: ClientHeade
    // Remove all twofactors from the user
    TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
+    enforce_2fa_policy(&user, &user.uuid, client_headers.device_type, &client_headers.ip.ip, &mut conn).await?;

    log_user_event(
        EventType::UserRecovered2fa as i32,
@@ -123,19 +129,23 @@ async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct DisableTwoFactorData {
-    MasterPasswordHash: String,
+    MasterPasswordHash: Option<String>,
+    Otp: Option<String>,
    Type: NumberOrString,
}

#[post("/two-factor/disable", data = "<data>")]
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
    let data: DisableTwoFactorData = data.into_inner().data;
-    let password_hash = data.MasterPasswordHash;
    let user = headers.user;

-    if !user.check_valid_password(&password_hash) {
-        err!("Invalid password");
-    }
+    // Delete directly after a valid token has been provided
+    PasswordOrOtpData {
+        MasterPasswordHash: data.MasterPasswordHash,
+        Otp: data.Otp,
+    }
+    .validate(&user, true, &mut conn)
+    .await?;

    let type_ = data.Type.into_i32()?;
@@ -145,22 +155,8 @@ async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Head
        .await;
    }

-    let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty();
-
-    if twofactor_disabled {
-        for user_org in
-            UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &mut conn)
-                .await
-                .into_iter()
-        {
-            if user_org.atype < UserOrgType::Admin {
-                if CONFIG.mail_enabled() {
-                    let org = Organization::find_by_uuid(&user_org.org_uuid, &mut conn).await.unwrap();
-                    mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
-                }
-                user_org.delete(&mut conn).await?;
-            }
-        }
-    }
+    if TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty() {
+        enforce_2fa_policy(&user, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await?;
+    }

    Ok(Json(json!({
@@ -175,6 +171,78 @@ async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers:
    disable_twofactor(data, headers, conn).await
}
pub async fn enforce_2fa_policy(
user: &User,
act_uuid: &str,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
for member in UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn)
.await
.into_iter()
{
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
if member.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
let mut member = member;
member.revoke();
member.save(conn).await?;
log_event(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
&member.org_uuid,
act_uuid,
device_type,
ip,
conn,
)
.await;
}
}
Ok(())
}
pub async fn enforce_2fa_policy_for_org(
org_uuid: &str,
act_uuid: &str,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
let org = Organization::find_by_uuid(org_uuid, conn).await.unwrap();
for member in UserOrganization::find_confirmed_by_org(org_uuid, conn).await.into_iter() {
// Don't enforce the policy for Admins and Owners.
if member.atype < UserOrgType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
if CONFIG.mail_enabled() {
let user = User::find_by_uuid(&member.user_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
let mut member = member;
member.revoke();
member.save(conn).await?;
log_event(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
org_uuid,
act_uuid,
device_type,
ip,
conn,
)
.await;
}
}
Ok(())
}
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
    debug!("Sending notifications for incomplete 2FA logins");

View File

@@ -0,0 +1,142 @@
use chrono::{Duration, NaiveDateTime, Utc};
use rocket::Route;
use crate::{
api::{EmptyResult, JsonUpcase},
auth::Headers,
crypto,
db::{
models::{TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
mail, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![request_otp, verify_otp]
}
/// Data stored in the TwoFactor table in the db
#[derive(Serialize, Deserialize, Debug)]
pub struct ProtectedActionData {
/// Token issued to validate the protected action
pub token: String,
/// UNIX timestamp of token issue.
pub token_sent: i64,
// The total amount of attempts
pub attempts: u8,
}
impl ProtectedActionData {
pub fn new(token: String) -> Self {
Self {
token,
token_sent: Utc::now().naive_utc().timestamp(),
attempts: 0,
}
}
pub fn to_json(&self) -> String {
serde_json::to_string(&self).unwrap()
}
pub fn from_json(string: &str) -> Result<Self, Error> {
let res: Result<Self, crate::serde_json::Error> = serde_json::from_str(string);
match res {
Ok(x) => Ok(x),
Err(_) => err!("Could not decode ProtectedActionData from string"),
}
}
pub fn add_attempt(&mut self) {
self.attempts += 1;
}
}
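ProtectedActionData round-trips through the TwoFactor table as a JSON string via to_json()/from_json(). A standalone sketch of that round trip, with a hypothetical MirrorData stand-in for the real struct (serde with the derive feature and serde_json assumed):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct MirrorData {
    token: String,
    token_sent: i64,
    attempts: u8,
}

fn main() {
    let data = MirrorData { token: "12345678".into(), token_sent: 1_706_700_000, attempts: 0 };

    // What to_json() produces ...
    let json = serde_json::to_string(&data).unwrap();
    // ... and what from_json() parses back out.
    let parsed: MirrorData = serde_json::from_str(&json).unwrap();

    assert_eq!(data, parsed);
}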
#[post("/accounts/request-otp")]
async fn request_otp(headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() {
err!("Email is disabled for this server. Either enable email or login using your master password instead of login via device.");
}
let user = headers.user;
// Only one Protected Action per user is allowed to take place, delete the previous one
if let Some(pa) =
TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::ProtectedActions as i32, &mut conn).await
{
pa.delete(&mut conn).await?;
}
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let pa_data = ProtectedActionData::new(generated_token);
    // Stored as a TwoFactorType::ProtectedActions entry; the token is not verified yet at this point.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::ProtectedActions, pa_data.to_json());
twofactor.save(&mut conn).await?;
mail::send_protected_action_token(&user.email, &pa_data.token).await?;
Ok(())
}
#[derive(Deserialize, Serialize, Debug)]
#[allow(non_snake_case)]
struct ProtectedActionVerify {
OTP: String,
}
#[post("/accounts/verify-otp", data = "<data>")]
async fn verify_otp(data: JsonUpcase<ProtectedActionVerify>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() {
err!("Email is disabled for this server. Either enable email or login using your master password instead of login via device.");
}
let user = headers.user;
let data: ProtectedActionVerify = data.into_inner().data;
// Delete the token after one validation attempt
// This endpoint only gets called for the vault export, and doesn't need a second attempt
validate_protected_action_otp(&data.OTP, &user.uuid, true, &mut conn).await
}
pub async fn validate_protected_action_otp(
otp: &str,
user_uuid: &str,
delete_if_valid: bool,
conn: &mut DbConn,
) -> EmptyResult {
let pa = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::ProtectedActions as i32, conn)
.await
.map_res("Protected action token not found, try sending the code again or restart the process")?;
let mut pa_data = ProtectedActionData::from_json(&pa.data)?;
pa_data.add_attempt();
    // Delete the token after too many attempts
    // We allow 6, which should be more than enough for invalid attempts and multiple valid checks
if pa_data.attempts > 6 {
pa.delete(conn).await?;
err!("Token has expired")
}
// Check if the token has expired (Using the email 2fa expiration time)
let date =
NaiveDateTime::from_timestamp_opt(pa_data.token_sent, 0).expect("Protected Action token timestamp invalid.");
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
pa.delete(conn).await?;
err!("Token has expired")
}
if !crypto::ct_eq(&pa_data.token, otp) {
pa.save(conn).await?;
err!("Token is invalid")
}
if delete_if_valid {
pa.delete(conn).await?;
}
Ok(())
}
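Taken together, the flow is: the client calls request-otp, receives a code by mail, then posts it to verify-otp before performing the protected action (the vault export). A rough client-side sketch, assuming the usual /api mount point, a placeholder server URL and bearer token, and reqwest with the blocking and json features:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let base = "https://vault.example.com"; // hypothetical server
    let token = "<access token obtained at login>"; // placeholder

    // 1) Ask the server to mail a one-time code to the account's email address.
    client
        .post(format!("{base}/api/accounts/request-otp"))
        .bearer_auth(token)
        .send()?
        .error_for_status()?;

    // 2) Send the received code back; on success the protected action may proceed.
    client
        .post(format!("{base}/api/accounts/verify-otp"))
        .bearer_auth(token)
        .json(&serde_json::json!({ "OTP": "12345678" }))
        .send()?
        .error_for_status()?;

    Ok(())
}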

View File

@@ -7,7 +7,7 @@ use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState,
use crate::{
    api::{
        core::{log_user_event, two_factor::_generate_recover_code},
-        EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
        EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
    },
    auth::Headers,
    db::{
@@ -15,6 +15,7 @@ use crate::{
        DbConn,
    },
    error::Error,
    util::NumberOrString,
    CONFIG,
};
@@ -103,16 +104,17 @@ impl WebauthnRegistration {
}

#[post("/two-factor/get-webauthn", data = "<data>")]
-async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn get_webauthn(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
    if !CONFIG.domain_set() {
        err!("`DOMAIN` environment variable is not set. Webauthn disabled")
    }

-    if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

    data.validate(&user, false, &mut conn).await?;

-    let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &mut conn).await?;
    let (enabled, registrations) = get_webauthn_registrations(&user.uuid, &mut conn).await?;
    let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();

    Ok(Json(json!({
@@ -123,12 +125,17 @@ async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn
}

#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
-async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
-    if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
async fn generate_webauthn_challenge(
    data: JsonUpcase<PasswordOrOtpData>,
    headers: Headers,
    mut conn: DbConn,
) -> JsonResult {
    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

    data.validate(&user, false, &mut conn).await?;

-    let registrations = get_webauthn_registrations(&headers.user.uuid, &mut conn)
    let registrations = get_webauthn_registrations(&user.uuid, &mut conn)
        .await?
        .1
        .into_iter()
@@ -136,16 +143,16 @@ async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: He
        .collect();

    let (challenge, state) = WebauthnConfig::load().generate_challenge_register_options(
-        headers.user.uuid.as_bytes().to_vec(),
-        headers.user.email,
-        headers.user.name,
        user.uuid.as_bytes().to_vec(),
        user.email,
        user.name,
        Some(registrations),
        None,
        None,
    )?;

    let type_ = TwoFactorType::WebauthnRegisterChallenge;
-    TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
    TwoFactor::new(user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;

    let mut challenge_value = serde_json::to_value(challenge.public_key)?;
    challenge_value["status"] = "ok".into();
@@ -158,8 +165,9 @@ async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: He
struct EnableWebauthnData {
    Id: NumberOrString, // 1..5
    Name: String,
-    MasterPasswordHash: String,
    DeviceResponse: RegisterPublicKeyCredentialCopy,
    MasterPasswordHash: Option<String>,
    Otp: Option<String>,
}

// This is copied from RegisterPublicKeyCredential to change the Response objects casing
@@ -246,9 +254,12 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
    let data: EnableWebauthnData = data.into_inner().data;
    let mut user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
    PasswordOrOtpData {
        MasterPasswordHash: data.MasterPasswordHash,
        Otp: data.Otp,
    }
    .validate(&user, true, &mut conn)
    .await?;

    // Retrieve and delete the saved challenge state
    let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;

View File

@@ -6,7 +6,7 @@ use yubico::{config::Config, verify};
use crate::{
    api::{
        core::{log_user_event, two_factor::_generate_recover_code},
-        EmptyResult, JsonResult, JsonUpcase, PasswordData,
        EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
    },
    auth::Headers,
    db::{
@@ -24,13 +24,14 @@ pub fn routes() -> Vec<Route> {
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableYubikeyData {
-    MasterPasswordHash: String,
    Key1: Option<String>,
    Key2: Option<String>,
    Key3: Option<String>,
    Key4: Option<String>,
    Key5: Option<String>,
    Nfc: bool,
    MasterPasswordHash: Option<String>,
    Otp: Option<String>,
}

#[derive(Deserialize, Serialize, Debug)]
@@ -83,16 +84,14 @@ async fn verify_yubikey_otp(otp: String) -> EmptyResult {
}

#[post("/two-factor/get-yubikey", data = "<data>")]
-async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
    // Make sure the credentials are set
    get_yubico_credentials()?;

-    let data: PasswordData = data.into_inner().data;
    let data: PasswordOrOtpData = data.into_inner().data;
    let user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
    data.validate(&user, false, &mut conn).await?;

    let user_uuid = &user.uuid;
    let yubikey_type = TwoFactorType::YubiKey as i32;
@@ -122,9 +121,12 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
    let data: EnableYubikeyData = data.into_inner().data;
    let mut user = headers.user;

-    if !user.check_valid_password(&data.MasterPasswordHash) {
-        err!("Invalid password");
-    }
    PasswordOrOtpData {
        MasterPasswordHash: data.MasterPasswordHash.clone(),
        Otp: data.Otp.clone(),
    }
    .validate(&user, true, &mut conn)
    .await?;

    // Check if we already have some data
    let mut yubikey_data =

View File

@@ -9,9 +9,12 @@ use serde_json::Value;
use crate::{
    api::{
-        core::accounts::{PreloginData, RegisterData, _prelogin, _register},
-        core::log_user_event,
-        core::two_factor::{duo, email, email::EmailTokenData, yubikey},
        core::{
            accounts::{PreloginData, RegisterData, _prelogin, _register},
            log_user_event,
            two_factor::{authenticator, duo, email, enforce_2fa_policy, webauthn, yubikey},
        },
        push::register_push_device,
        ApiResult, EmptyResult, JsonResult, JsonUpcase,
    },
    auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp},
@@ -103,8 +106,13 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
    // Common
    let user = User::find_by_uuid(&device.user_uuid, conn).await.unwrap();
-    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
-    let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
    // ---
    // Disabled this variable, it was used to generate the JWT.
    // Because this might get used in the future, and is added by the Bitwarden server, let's keep it, but commented out.
    // See: https://github.com/dani-garcia/vaultwarden/issues/4156
    // ---
    // let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
    let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
    device.save(conn).await?;

    let result = json!({
@@ -242,7 +250,7 @@ async fn _password_login(
    let (mut device, new_device) = get_device(&data, conn, &user).await;

-    let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, conn).await?;
    let twofactor_token = twofactor_auth(&user, &data, &mut device, ip, conn).await?;

    if CONFIG.mail_enabled() && new_device {
        if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
@@ -259,9 +267,19 @@ async fn _password_login(
        }
    }

    // register push device
    if !new_device {
        register_push_device(&mut device, conn).await?;
    }

    // Common
-    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
-    let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
    // ---
    // Disabled this variable, it was used to generate the JWT.
    // Because this might get used in the future, and is added by the Bitwarden server, let's keep it, but commented out.
    // See: https://github.com/dani-garcia/vaultwarden/issues/4156
    // ---
    // let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
    let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
    device.save(conn).await?;

    let mut result = json!({
@@ -374,8 +392,13 @@ async fn _user_api_key_login(
    // Common
    let scope_vec = vec!["api".into()];
-    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
-    let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
    // ---
    // Disabled this variable, it was used to generate the JWT.
    // Because this might get used in the future, and is added by the Bitwarden server, let's keep it, but commented out.
    // See: https://github.com/dani-garcia/vaultwarden/issues/4156
    // ---
    // let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
    let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
    device.save(conn).await?;

    info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
@@ -453,32 +476,32 @@ async fn get_device(data: &ConnectData, conn: &mut DbConn, user: &User) -> (Devi
}

async fn twofactor_auth(
-    user_uuid: &str,
    user: &User,
    data: &ConnectData,
    device: &mut Device,
    ip: &ClientIp,
    conn: &mut DbConn,
) -> ApiResult<Option<String>> {
-    let twofactors = TwoFactor::find_by_user(user_uuid, conn).await;
    let twofactors = TwoFactor::find_by_user(&user.uuid, conn).await;

    // No twofactor token if twofactor is disabled
    if twofactors.is_empty() {
        enforce_2fa_policy(user, &user.uuid, device.atype, &ip.ip, conn).await?;
        return Ok(None);
    }

-    TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn).await?;
    TwoFactorIncomplete::mark_incomplete(&user.uuid, &device.uuid, &device.name, ip, conn).await?;

    let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
    let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one

    let twofactor_code = match data.two_factor_token {
        Some(ref code) => code,
-        None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?, "2FA token not provided"),
        None => err_json!(_json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?, "2FA token not provided"),
    };

    let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);

-    use crate::api::core::two_factor as _tf;
    use crate::crypto::ct_eq;

    let selected_data = _selected_data(selected_twofactor);
@@ -486,17 +509,15 @@ async fn twofactor_auth(
    match TwoFactorType::from_i32(selected_id) {
        Some(TwoFactorType::Authenticator) => {
-            _tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn).await?
            authenticator::validate_totp_code_str(&user.uuid, twofactor_code, &selected_data?, ip, conn).await?
        }
-        Some(TwoFactorType::Webauthn) => {
-            _tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn).await?
-        }
-        Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
        Some(TwoFactorType::Webauthn) => webauthn::validate_webauthn_login(&user.uuid, twofactor_code, conn).await?,
        Some(TwoFactorType::YubiKey) => yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
        Some(TwoFactorType::Duo) => {
-            _tf::duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
            duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
        }
        Some(TwoFactorType::Email) => {
-            _tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn).await?
            email::validate_email_code_str(&user.uuid, twofactor_code, &selected_data?, conn).await?
        }

        Some(TwoFactorType::Remember) => {
@@ -506,7 +527,7 @@ async fn twofactor_auth(
            }
            _ => {
                err_json!(
-                    _json_err_twofactor(&twofactor_ids, user_uuid, conn).await?,
                    _json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?,
                    "2FA Remember token not provided"
                )
            }
@@ -520,7 +541,7 @@ async fn twofactor_auth(
        ),
    }

-    TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn).await?;
    TwoFactorIncomplete::mark_complete(&user.uuid, &device.uuid, conn).await?;

    if !CONFIG.disable_2fa_remember() && remember == 1 {
        Ok(Some(device.refresh_twofactor_remember()))
@@ -535,8 +556,6 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
}

async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbConn) -> ApiResult<Value> {
-    use crate::api::core::two_factor;

    let mut result = json!({
        "error" : "invalid_grant",
        "error_description" : "Two factor required.",
@@ -551,7 +570,7 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
        Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }

        Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
-            let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn).await?;
            let request = webauthn::generate_webauthn_login(user_uuid, conn).await?;
            result["TwoFactorProviders2"][provider.to_string()] = request.0;
        }
@@ -583,8 +602,6 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
        }

        Some(tf_type @ TwoFactorType::Email) => {
-            use crate::api::core::two_factor as _tf;

            let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
                Some(tf) => tf,
                None => err!("No twofactor email registered"),
@@ -592,10 +609,10 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
            // Send email immediately if email is the only 2FA option
            if providers.len() == 1 {
-                _tf::email::send_token(user_uuid, conn).await?
                email::send_token(user_uuid, conn).await?
            }

-            let email_data = EmailTokenData::from_json(&twofactor.data)?;
            let email_data = email::EmailTokenData::from_json(&twofactor.data)?;
            result["TwoFactorProviders2"][provider.to_string()] = json!({
                "Email": email::obscure_email(&email_data.email),
            })

View File

@@ -32,6 +32,7 @@ pub use crate::api::{
    web::routes as web_routes,
    web::static_files,
};
use crate::db::{models::User, DbConn};
use crate::util;

// Type aliases for API methods results
@@ -46,33 +47,29 @@ type JsonVec<T> = Json<Vec<T>>;
// Common structs representing JSON data received
#[derive(Deserialize)]
#[allow(non_snake_case)]
-struct PasswordData {
-    MasterPasswordHash: String,
struct PasswordOrOtpData {
    MasterPasswordHash: Option<String>,
    Otp: Option<String>,
}

-#[derive(Deserialize, Debug, Clone)]
-#[serde(untagged)]
-enum NumberOrString {
-    Number(i32),
-    String(String),
-}
-
-impl NumberOrString {
-    fn into_string(self) -> String {
-        match self {
-            NumberOrString::Number(n) => n.to_string(),
-            NumberOrString::String(s) => s,
-        }
-    }
-
-    #[allow(clippy::wrong_self_convention)]
-    fn into_i32(&self) -> ApiResult<i32> {
-        use std::num::ParseIntError as PIE;
-        match self {
-            NumberOrString::Number(n) => Ok(*n),
-            NumberOrString::String(s) => {
-                s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
-            }
-        }
-    }
-}
impl PasswordOrOtpData {
    /// Tokens used via this struct can be used multiple times during the process
    /// First for the validation to continue, after that to enable or validate the following actions
    /// This is different per caller, so it can be adjusted to delete the token or not
    pub async fn validate(&self, user: &User, delete_if_valid: bool, conn: &mut DbConn) -> EmptyResult {
        use crate::api::core::two_factor::protected_actions::validate_protected_action_otp;

        match (self.MasterPasswordHash.as_deref(), self.Otp.as_deref()) {
            (Some(pw_hash), None) => {
                if !user.check_valid_password(pw_hash) {
                    err!("Invalid password");
                }
            }
            (None, Some(otp)) => {
                validate_protected_action_otp(otp, &user.uuid, delete_if_valid, conn).await?;
            }
            _ => err!("No validation provided"),
        }
        Ok(())
    }
}
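validate() accepts exactly one credential: the master password arm matches only when no OTP is present, and supplying both (or neither) is rejected. A standalone sketch of that dispatch shape, with plain strings standing in for the real verification calls:

fn pick(password_hash: Option<&str>, otp: Option<&str>) -> &'static str {
    match (password_hash, otp) {
        (Some(_), None) => "verify master password hash",
        (None, Some(_)) => "verify protected-action OTP",
        _ => "reject: no (or ambiguous) validation provided",
    }
}

fn main() {
    assert_eq!(pick(Some("hash"), None), "verify master password hash");
    assert_eq!(pick(None, Some("12345678")), "verify protected-action OTP");
    assert_eq!(pick(None, None), "reject: no (or ambiguous) validation provided");
    assert_eq!(pick(Some("hash"), Some("12345678")), "reject: no (or ambiguous) validation provided");
}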

View File

@@ -164,6 +164,11 @@ fn websockets_hub<'r>(
                        continue;
                    }
                }

                // Prevent sending anything back when a `Close` Message is received.
                // Just break the loop
                Message::Close(_) => break,

                // Just echo anything else the client sends
                _ => yield message,
            }
@@ -230,6 +235,11 @@ fn anonymous_websockets_hub<'r>(
                        continue;
                    }
                }

                // Prevent sending anything back when a `Close` Message is received.
                // Just break the loop
                Message::Close(_) => break,

                // Just echo anything else the client sends
                _ => yield message,
            }

View File

@@ -50,7 +50,11 @@ async fn get_auth_push_token() -> ApiResult<String> {
("client_secret", &client_secret), ("client_secret", &client_secret),
]; ];
let res = match get_reqwest_client().post("https://identity.bitwarden.com/connect/token").form(&params).send().await let res = match get_reqwest_client()
.post(&format!("{}/connect/token", CONFIG.push_identity_uri()))
.form(&params)
.send()
.await
{ {
Ok(r) => r, Ok(r) => r,
Err(e) => err!(format!("Error getting push token from bitwarden server: {e}")), Err(e) => err!(format!("Error getting push token from bitwarden server: {e}")),
@@ -72,24 +76,35 @@ async fn get_auth_push_token() -> ApiResult<String> {
Ok(push_token.access_token.clone()) Ok(push_token.access_token.clone())
} }
pub async fn register_push_device(user_uuid: String, device: Device) -> EmptyResult { pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbConn) -> EmptyResult {
if !CONFIG.push_enabled() { if !CONFIG.push_enabled() || !device.is_push_device() || device.is_registered() {
return Ok(()); return Ok(());
} }
let auth_push_token = get_auth_push_token().await?;
if device.push_token.is_none() {
warn!("Skipping the registration of the device {} because the push_token field is empty.", device.uuid);
warn!("To get rid of this message you need to clear the app data and reconnect the device.");
return Ok(());
}
debug!("Registering Device {}", device.uuid);
// generate a random push_uuid so we know the device is registered
device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
//Needed to register a device for push to bitwarden : //Needed to register a device for push to bitwarden :
let data = json!({ let data = json!({
"userId": user_uuid, "userId": device.user_uuid,
"deviceId": device.push_uuid, "deviceId": device.push_uuid,
"identifier": device.uuid, "identifier": device.uuid,
"type": device.atype, "type": device.atype,
"pushToken": device.push_token "pushToken": device.push_token
}); });
let auth_push_token = get_auth_push_token().await?;
let auth_header = format!("Bearer {}", &auth_push_token); let auth_header = format!("Bearer {}", &auth_push_token);
get_reqwest_client() if let Err(e) = get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/register") .post(CONFIG.push_relay_uri() + "/push/register")
.header(CONTENT_TYPE, "application/json") .header(CONTENT_TYPE, "application/json")
.header(ACCEPT, "application/json") .header(ACCEPT, "application/json")
@@ -97,12 +112,20 @@ pub async fn register_push_device(user_uuid: String, device: Device) -> EmptyRes
.json(&data) .json(&data)
.send() .send()
.await? .await?
.error_for_status()?; .error_for_status()
{
err!(format!("An error occured while proceeding registration of a device: {e}"));
}
if let Err(e) = device.save(conn).await {
err!(format!("An error occured while trying to save the (registered) device push uuid: {e}"));
}
Ok(()) Ok(())
} }
pub async fn unregister_push_device(uuid: String) -> EmptyResult { pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
if !CONFIG.push_enabled() { if !CONFIG.push_enabled() || push_uuid.is_none() {
return Ok(()); return Ok(());
} }
let auth_push_token = get_auth_push_token().await?; let auth_push_token = get_auth_push_token().await?;
@@ -110,7 +133,7 @@ pub async fn unregister_push_device(uuid: String) -> EmptyResult {
let auth_header = format!("Bearer {}", &auth_push_token); let auth_header = format!("Bearer {}", &auth_push_token);
match get_reqwest_client() match get_reqwest_client()
.delete(CONFIG.push_relay_uri() + "/push/" + &uuid) .delete(CONFIG.push_relay_uri() + "/push/" + &push_uuid.unwrap())
.header(AUTHORIZATION, auth_header) .header(AUTHORIZATION, auth_header)
.send() .send()
.await .await

View File

@@ -119,10 +119,16 @@ pub struct LoginJwtClaims {
    pub email: String,
    pub email_verified: bool,

-    pub orgowner: Vec<String>,
-    pub orgadmin: Vec<String>,
-    pub orguser: Vec<String>,
-    pub orgmanager: Vec<String>,
    // ---
    // Disabled these keys from being added to the JWT, since they could cause it to get too large.
    // These key/value pairs are also not used anywhere by either Vaultwarden or the Bitwarden clients.
    // Because they might get used in the future, and are added by the Bitwarden server, keep them here, but commented out.
    // See: https://github.com/dani-garcia/vaultwarden/issues/4156
    // ---
    // pub orgowner: Vec<String>,
    // pub orgadmin: Vec<String>,
    // pub orguser: Vec<String>,
    // pub orgmanager: Vec<String>,

    // user security_stamp
    pub sstamp: String,

View File

@@ -9,7 +9,7 @@ use reqwest::Url;
use crate::{
    db::DbConnType,
    error::Error,
-    util::{get_env, get_env_bool},
    util::{get_env, get_env_bool, parse_experimental_client_feature_flags},
};

static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
@@ -380,8 +380,10 @@ make_config! {
    push {
        /// Enable push notifications
        push_enabled: bool, false, def, false;
-        /// Push relay base uri
        /// Push relay uri
        push_relay_uri: String, false, def, "https://push.bitwarden.com".to_string();
        /// Push identity uri
        push_identity_uri: String, false, def, "https://identity.bitwarden.com".to_string();
        /// Installation id |> The installation id from https://bitwarden.com/host
        push_installation_id: Pass, false, def, String::new();
        /// Installation key |> The installation key from https://bitwarden.com/host
@@ -440,6 +442,8 @@ make_config! {
        user_attachment_limit: i64, true, option;
        /// Per-organization attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per org. When this limit is reached, org members will not be allowed to upload further attachments for ciphers owned by that org.
        org_attachment_limit: i64, true, option;
        /// Per-user send storage limit (KB) |> Max kilobytes of sends storage allowed per user. When this limit is reached, the user will not be allowed to upload further sends.
        user_send_limit: i64, true, option;

        /// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item.
        /// If unset, trashed items are not auto-deleted. This setting applies globally, so make
@@ -478,7 +482,7 @@ make_config! {
        /// Invitation token expiration time (in hours) |> The number of hours after which an organization invite token, emergency access invite token,
        /// email verification token and deletion request token will expire (must be at least 1)
        invitation_expiration_hours: u32, false, def, 120;
-        /// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
        /// Enable emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
        emergency_access_allowed: bool, true, def, true;
        /// Allow email change |> Controls whether users can change their email. This setting applies globally to all users.
        email_change_allowed: bool, true, def, true;
@@ -547,6 +551,9 @@ make_config! {
        /// TOTP codes of the previous and next 30 seconds will be invalid.
        authenticator_disable_time_drift: bool, true, def, false;

        /// Customize the enabled feature flags on the clients |> This is a comma separated list of feature flags to enable.
        experimental_client_feature_flags: String, false, def, "fido2-vault-credentials".to_string();

        /// Require new device emails |> When a user logs in an email is required to be sent.
        /// If sending the email fails the login attempt will fail.
        require_device_email: bool, true, def, false;
@@ -751,6 +758,57 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
        )
    }
if cfg.push_enabled {
let push_relay_uri = cfg.push_relay_uri.to_lowercase();
if !push_relay_uri.starts_with("https://") {
err!("`PUSH_RELAY_URI` must start with 'https://'.")
}
if Url::parse(&push_relay_uri).is_err() {
err!("Invalid URL format for `PUSH_RELAY_URI`.");
}
let push_identity_uri = cfg.push_identity_uri.to_lowercase();
if !push_identity_uri.starts_with("https://") {
err!("`PUSH_IDENTITY_URI` must start with 'https://'.")
}
if Url::parse(&push_identity_uri).is_err() {
err!("Invalid URL format for `PUSH_IDENTITY_URI`.");
}
}
// TODO: deal with deprecated flags so they can be removed from this list, cf. #4263
const KNOWN_FLAGS: &[&str] =
&["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];
let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
if !invalid_flags.is_empty() {
err!(format!("Unrecognized experimental client feature flags: {invalid_flags:?}.\n\n\
Please ensure all feature flags are spelled correctly and that they are supported in this version.\n\
Supported flags: {KNOWN_FLAGS:?}"));
}
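parse_experimental_client_feature_flags lives in util.rs and is not shown in this diff; the sketch below assumes it simply splits the comma-separated setting, after which the keys are filtered against KNOWN_FLAGS as above:

use std::collections::HashSet;

const KNOWN_FLAGS: &[&str] =
    &["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];

// Stand-in for parse_experimental_client_feature_flags plus the filter above.
fn invalid_flags(configured: &str) -> Vec<String> {
    let known: HashSet<&str> = KNOWN_FLAGS.iter().copied().collect();
    configured
        .split(',')
        .map(|flag| flag.trim().to_string())
        .filter(|flag| !flag.is_empty() && !known.contains(flag.as_str()))
        .collect()
}

fn main() {
    assert!(invalid_flags("fido2-vault-credentials").is_empty());
    assert_eq!(invalid_flags("fido2-vault-credentials, typo-flag"), vec!["typo-flag".to_string()]);
}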
const MAX_FILESIZE_KB: i64 = i64::MAX >> 10;
if let Some(limit) = cfg.user_attachment_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`USER_ATTACHMENT_LIMIT` is out of bounds");
}
}
if let Some(limit) = cfg.org_attachment_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`ORG_ATTACHMENT_LIMIT` is out of bounds");
}
}
if let Some(limit) = cfg.user_send_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`USER_SEND_LIMIT` is out of bounds");
}
}
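The upper bound i64::MAX >> 10 is presumably chosen because these limits are configured in kilobytes but compared against byte counts, so a valid limit multiplied by 1024 must still fit in an i64. A quick standalone check of that invariant:

fn main() {
    const MAX_FILESIZE_KB: i64 = i64::MAX >> 10;

    // The largest allowed KB value still converts to bytes without overflow ...
    assert!(MAX_FILESIZE_KB.checked_mul(1024).is_some());
    // ... while one more kilobyte would overflow.
    assert!((MAX_FILESIZE_KB + 1).checked_mul(1024).is_none());
}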
    if cfg._enable_duo
        && (cfg.duo_host.is_some() || cfg.duo_ikey.is_some() || cfg.duo_skey.is_some())
        && !(cfg.duo_host.is_some() && cfg.duo_ikey.is_some() && cfg.duo_skey.is_some())
@@ -1199,7 +1257,10 @@ impl Config {
        }
    }

-use handlebars::{Context, Handlebars, Helper, HelperResult, Output, RenderContext, RenderError, Renderable};
use handlebars::{
    Context, DirectorySourceOptions, Handlebars, Helper, HelperResult, Output, RenderContext, RenderErrorReason,
    Renderable,
};

fn load_templates<P>(path: P) -> Handlebars<'static>
where
@@ -1243,17 +1304,18 @@ where
reg!("email/invite_accepted", ".html"); reg!("email/invite_accepted", ".html");
reg!("email/invite_confirmed", ".html"); reg!("email/invite_confirmed", ".html");
reg!("email/new_device_logged_in", ".html"); reg!("email/new_device_logged_in", ".html");
reg!("email/protected_action", ".html");
reg!("email/pw_hint_none", ".html"); reg!("email/pw_hint_none", ".html");
reg!("email/pw_hint_some", ".html"); reg!("email/pw_hint_some", ".html");
reg!("email/send_2fa_removed_from_org", ".html"); reg!("email/send_2fa_removed_from_org", ".html");
reg!("email/send_single_org_removed_from_org", ".html");
reg!("email/send_org_invite", ".html");
reg!("email/send_emergency_access_invite", ".html"); reg!("email/send_emergency_access_invite", ".html");
reg!("email/send_org_invite", ".html");
reg!("email/send_single_org_removed_from_org", ".html");
reg!("email/smtp_test", ".html");
reg!("email/twofactor_email", ".html"); reg!("email/twofactor_email", ".html");
reg!("email/verify_email", ".html"); reg!("email/verify_email", ".html");
reg!("email/welcome", ".html");
reg!("email/welcome_must_verify", ".html"); reg!("email/welcome_must_verify", ".html");
reg!("email/smtp_test", ".html"); reg!("email/welcome", ".html");
reg!("admin/base"); reg!("admin/base");
reg!("admin/login"); reg!("admin/login");
@@ -1267,19 +1329,27 @@ where
    // And then load user templates to overwrite the defaults
    // Use .hbs extension for the files
    // Templates get registered with their relative name
-    hb.register_templates_directory(".hbs", path).unwrap();
    hb.register_templates_directory(
        path,
        DirectorySourceOptions {
            tpl_extension: ".hbs".to_owned(),
            ..Default::default()
        },
    )
    .unwrap();

    hb
}
fn case_helper<'reg, 'rc>(
-    h: &Helper<'reg, 'rc>,
    h: &Helper<'rc>,
    r: &'reg Handlebars<'_>,
    ctx: &'rc Context,
    rc: &mut RenderContext<'reg, 'rc>,
    out: &mut dyn Output,
) -> HelperResult {
-    let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
    let param =
        h.param(0).ok_or_else(|| RenderErrorReason::Other(String::from("Param not found for helper \"case\"")))?;
    let value = param.value().clone();

    if h.params().iter().skip(1).any(|x| x.value() == &value) {
@@ -1290,17 +1360,21 @@ fn case_helper<'reg, 'rc>(
}

fn js_escape_helper<'reg, 'rc>(
-    h: &Helper<'reg, 'rc>,
    h: &Helper<'rc>,
    _r: &'reg Handlebars<'_>,
    _ctx: &'rc Context,
    _rc: &mut RenderContext<'reg, 'rc>,
    out: &mut dyn Output,
) -> HelperResult {
-    let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"jsesc\""))?;
    let param =
        h.param(0).ok_or_else(|| RenderErrorReason::Other(String::from("Param not found for helper \"jsesc\"")))?;
    let no_quote = h.param(1).is_some();

-    let value = param.value().as_str().ok_or_else(|| RenderError::new("Param for helper \"jsesc\" is not a String"))?;
    let value = param
        .value()
        .as_str()
        .ok_or_else(|| RenderErrorReason::Other(String::from("Param for helper \"jsesc\" is not a String")))?;

    let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
    if !no_quote {
@@ -1312,15 +1386,18 @@ fn js_escape_helper<'reg, 'rc>(
}

fn to_json<'reg, 'rc>(
-    h: &Helper<'reg, 'rc>,
    h: &Helper<'rc>,
    _r: &'reg Handlebars<'_>,
    _ctx: &'rc Context,
    _rc: &mut RenderContext<'reg, 'rc>,
    out: &mut dyn Output,
) -> HelperResult {
-    let param = h.param(0).ok_or_else(|| RenderError::new("Expected 1 parameter for \"to_json\""))?.value();
    let param = h
        .param(0)
        .ok_or_else(|| RenderErrorReason::Other(String::from("Expected 1 parameter for \"to_json\"")))?
        .value();

    let json = serde_json::to_string(param)
-        .map_err(|e| RenderError::new(format!("Can't serialize parameter to JSON: {e}")))?;
        .map_err(|e| RenderErrorReason::Other(format!("Can't serialize parameter to JSON: {e}")))?;
    out.write(&json)?;
    Ok(())
}

View File

@@ -7,7 +7,6 @@ use diesel::{
use rocket::{
    http::Status,
-    outcome::IntoOutcome,
    request::{FromRequest, Outcome},
    Request,
};
@@ -413,8 +412,11 @@ impl<'r> FromRequest<'r> for DbConn {
    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        match request.rocket().state::<DbPool>() {
-            Some(p) => p.get().await.map_err(|_| ()).into_outcome(Status::ServiceUnavailable),
-            None => Outcome::Failure((Status::InternalServerError, ())),
            Some(p) => match p.get().await {
                Ok(dbconn) => Outcome::Success(dbconn),
                _ => Outcome::Error((Status::ServiceUnavailable, ())),
            },
            None => Outcome::Error((Status::InternalServerError, ())),
        }
    }
}

View File

@@ -1,5 +1,6 @@
use std::io::ErrorKind;

use bigdecimal::{BigDecimal, ToPrimitive};
use serde_json::Value;

use crate::CONFIG;
@@ -13,14 +14,14 @@ db_object! {
        pub id: String,
        pub cipher_uuid: String,
        pub file_name: String, // encrypted
-        pub file_size: i32,
        pub file_size: i64,
        pub akey: Option<String>,
    }
}

/// Local methods
impl Attachment {
-    pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32, akey: Option<String>) -> Self {
    pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i64, akey: Option<String>) -> Self {
        Self {
            id,
            cipher_uuid,
@@ -145,13 +146,18 @@ impl Attachment {
    pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
        db_run! { conn: {
-            let result: Option<i64> = attachments::table
            let result: Option<BigDecimal> = attachments::table
                .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
                .filter(ciphers::user_uuid.eq(user_uuid))
                .select(diesel::dsl::sum(attachments::file_size))
                .first(conn)
                .expect("Error loading user attachment total size");
-            result.unwrap_or(0)

            match result.map(|r| r.to_i64()) {
                Some(Some(r)) => r,
                Some(None) => i64::MAX,
                None => 0
            }
        }}
    }
@@ -168,13 +174,18 @@ impl Attachment {
    pub async fn size_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
        db_run! { conn: {
-            let result: Option<i64> = attachments::table
            let result: Option<BigDecimal> = attachments::table
                .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
                .filter(ciphers::organization_uuid.eq(org_uuid))
                .select(diesel::dsl::sum(attachments::file_size))
                .first(conn)
                .expect("Error loading user attachment total size");
-            result.unwrap_or(0)

            match result.map(|r| r.to_i64()) {
                Some(Some(r)) => r,
                Some(None) => i64::MAX,
                None => 0
            }
        }}
    }
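The nested Option comes from BigDecimal::to_i64() (via the ToPrimitive trait), which returns None once the SUM no longer fits in an i64; the code clamps that case to i64::MAX and treats an absent sum (no rows) as zero. A standalone sketch of the same mapping, assuming the bigdecimal crate:

use bigdecimal::{BigDecimal, ToPrimitive};

fn clamp_sum(result: Option<BigDecimal>) -> i64 {
    match result.map(|r| r.to_i64()) {
        Some(Some(size)) => size, // the sum fits in an i64
        Some(None) => i64::MAX,   // the sum overflowed an i64: clamp
        None => 0,                // no rows: nothing stored yet
    }
}

fn main() {
    assert_eq!(clamp_sum(None), 0);
    assert_eq!(clamp_sum(Some(BigDecimal::from(42))), 42);

    let too_big = BigDecimal::from(i64::MAX) + BigDecimal::from(1);
    assert_eq!(clamp_sum(Some(too_big)), i64::MAX);
}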

View File

@@ -273,7 +273,16 @@ impl Cipher {
            None => {
                // Belongs to Organization, need to update affected users
                if let Some(ref org_uuid) = self.organization_uuid {
-                    for user_org in UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await.iter() {
                    // users having access to the collection
                    let mut collection_users =
                        UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
                    if CONFIG.org_groups_enabled() {
                        // members of a group having access to the collection
                        let group_users =
                            UserOrganization::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
                        collection_users.extend(group_users);
                    }
                    for user_org in collection_users {
                        User::update_uuid_revision(&user_org.user_uuid, conn).await;
                        user_uuids.push(user_org.user_uuid.clone())
                    }
@@ -417,6 +426,9 @@ impl Cipher {
        cipher_sync_data: Option<&CipherSyncData>,
        conn: &mut DbConn,
    ) -> bool {
        if !CONFIG.org_groups_enabled() {
            return false;
        }

        if let Some(ref org_uuid) = self.organization_uuid {
            if let Some(cipher_sync_data) = cipher_sync_data {
                return cipher_sync_data.user_group_full_access_for_organizations.get(org_uuid).is_some();
@@ -512,6 +524,9 @@ impl Cipher {
    }

    async fn get_group_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
        if !CONFIG.org_groups_enabled() {
            return Vec::new();
        }

        db_run! {conn: {
            ciphers::table
                .filter(ciphers::uuid.eq(&self.uuid))
@@ -593,50 +608,84 @@ impl Cipher {
    // result, those ciphers will not appear in "My Vault" for the org
    // owner/admin, but they can still be accessed via the org vault view.
    pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
-        db_run! {conn: {
-            let mut query = ciphers::table
-                .left_join(ciphers_collections::table.on(
-                    ciphers::uuid.eq(ciphers_collections::cipher_uuid)
-                ))
-                .left_join(users_organizations::table.on(
-                    ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
-                        .and(users_organizations::user_uuid.eq(user_uuid))
-                        .and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
-                ))
-                .left_join(users_collections::table.on(
-                    ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
-                        // Ensure that users_collections::user_uuid is NULL for unconfirmed users.
-                        .and(users_organizations::user_uuid.eq(users_collections::user_uuid))
-                ))
-                .left_join(groups_users::table.on(
-                    groups_users::users_organizations_uuid.eq(users_organizations::uuid)
-                ))
-                .left_join(groups::table.on(
-                    groups::uuid.eq(groups_users::groups_uuid)
-                ))
-                .left_join(collections_groups::table.on(
-                    collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
-                        collections_groups::groups_uuid.eq(groups::uuid)
-                    )
-                ))
-                .filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
-                .or_filter(users_organizations::access_all.eq(true)) // access_all in org
-                .or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
-                .or_filter(groups::access_all.eq(true)) // Access via groups
-                .or_filter(collections_groups::collections_uuid.is_not_null()) // Access via groups
-                .into_boxed();
-
-            if !visible_only {
-                query = query.or_filter(
-                    users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
-                );
-            }
-
-            query
-                .select(ciphers::all_columns)
-                .distinct()
-                .load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
-        }}
        if CONFIG.org_groups_enabled() {
            db_run! {conn: {
                let mut query = ciphers::table
                    .left_join(ciphers_collections::table.on(
                        ciphers::uuid.eq(ciphers_collections::cipher_uuid)
                    ))
                    .left_join(users_organizations::table.on(
                        ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
                            .and(users_organizations::user_uuid.eq(user_uuid))
                            .and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
                    ))
                    .left_join(users_collections::table.on(
                        ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
                            // Ensure that users_collections::user_uuid is NULL for unconfirmed users.
                            .and(users_organizations::user_uuid.eq(users_collections::user_uuid))
                    ))
                    .left_join(groups_users::table.on(
                        groups_users::users_organizations_uuid.eq(users_organizations::uuid)
                    ))
                    .left_join(groups::table.on(
                        groups::uuid.eq(groups_users::groups_uuid)
                    ))
                    .left_join(collections_groups::table.on(
                        collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
                            collections_groups::groups_uuid.eq(groups::uuid)
                        )
                    ))
                    .filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
                    .or_filter(users_organizations::access_all.eq(true)) // access_all in org
                    .or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
                    .or_filter(groups::access_all.eq(true)) // Access via groups
                    .or_filter(collections_groups::collections_uuid.is_not_null()) // Access via groups
                    .into_boxed();

                if !visible_only {
                    query = query.or_filter(
                        users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
                    );
                }

                query
                    .select(ciphers::all_columns)
                    .distinct()
                    .load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
            }}
} else {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.into_boxed();
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
);
}
query
.select(ciphers::all_columns)
.distinct()
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
    }

    // Find all ciphers visible to the specified user.

View File

@@ -1,6 +1,7 @@
use serde_json::Value;

use super::{CollectionGroup, User, UserOrgStatus, UserOrgType, UserOrganization};
use crate::CONFIG;
db_object! {
    #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -181,47 +182,74 @@ impl Collection {
    }

    pub async fn find_by_user_uuid(user_uuid: String, conn: &mut DbConn) -> Vec<Self> {
-        db_run! { conn: {
-            collections::table
-                .left_join(users_collections::table.on(
-                    users_collections::collection_uuid.eq(collections::uuid).and(
-                        users_collections::user_uuid.eq(user_uuid.clone())
-                    )
-                ))
-                .left_join(users_organizations::table.on(
-                    collections::org_uuid.eq(users_organizations::org_uuid).and(
-                        users_organizations::user_uuid.eq(user_uuid.clone())
-                    )
-                ))
-                .left_join(groups_users::table.on(
-                    groups_users::users_organizations_uuid.eq(users_organizations::uuid)
-                ))
-                .left_join(groups::table.on(
-                    groups::uuid.eq(groups_users::groups_uuid)
-                ))
-                .left_join(collections_groups::table.on(
-                    collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
-                        collections_groups::collections_uuid.eq(collections::uuid)
-                    )
-                ))
-                .filter(
-                    users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
-                )
-                .filter(
-                    users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
-                        users_organizations::access_all.eq(true) // access_all in Organization
-                    ).or(
-                        groups::access_all.eq(true) // access_all in groups
-                    ).or( // access via groups
-                        groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
-                            collections_groups::collections_uuid.is_not_null()
-                        )
-                    )
-                )
-                .select(collections::all_columns)
-                .distinct()
-                .load::<CollectionDb>(conn).expect("Error loading collections").from_db()
-        }}
        if CONFIG.org_groups_enabled() {
            db_run! { conn: {
                collections::table
                    .left_join(users_collections::table.on(
                        users_collections::collection_uuid.eq(collections::uuid).and(
                            users_collections::user_uuid.eq(user_uuid.clone())
                        )
                    ))
                    .left_join(users_organizations::table.on(
                        collections::org_uuid.eq(users_organizations::org_uuid).and(
                            users_organizations::user_uuid.eq(user_uuid.clone())
                        )
                    ))
                    .left_join(groups_users::table.on(
                        groups_users::users_organizations_uuid.eq(users_organizations::uuid)
                    ))
                    .left_join(groups::table.on(
                        groups::uuid.eq(groups_users::groups_uuid)
                    ))
                    .left_join(collections_groups::table.on(
                        collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
                            collections_groups::collections_uuid.eq(collections::uuid)
                        )
                    ))
                    .filter(
                        users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
                    )
                    .filter(
                        users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
                            users_organizations::access_all.eq(true) // access_all in Organization
                        ).or(
                            groups::access_all.eq(true) // access_all in groups
                        ).or( // access via groups
                            groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
                                collections_groups::collections_uuid.is_not_null()
                            )
                        )
                    )
                    .select(collections::all_columns)
                    .distinct()
                    .load::<CollectionDb>(conn).expect("Error loading collections").from_db()
            }}
        } else {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
)
)
.select(collections::all_columns)
.distinct()
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
}
    }

    // Check if a user has access to a specific collection
@@ -277,45 +305,70 @@ impl Collection {
    }

    pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: String, conn: &mut DbConn) -> Option<Self> {
-        db_run! { conn: {
-            collections::table
-                .left_join(users_collections::table.on(
-                    users_collections::collection_uuid.eq(collections::uuid).and(
-                        users_collections::user_uuid.eq(user_uuid.clone())
-                    )
-                ))
-                .left_join(users_organizations::table.on(
-                    collections::org_uuid.eq(users_organizations::org_uuid).and(
-                        users_organizations::user_uuid.eq(user_uuid)
-                    )
-                ))
-                .left_join(groups_users::table.on(
-                    groups_users::users_organizations_uuid.eq(users_organizations::uuid)
-                ))
-                .left_join(groups::table.on(
-                    groups::uuid.eq(groups_users::groups_uuid)
-                ))
-                .left_join(collections_groups::table.on(
-                    collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
-                        collections_groups::collections_uuid.eq(collections::uuid)
-                    )
-                ))
-                .filter(collections::uuid.eq(uuid))
-                .filter(
-                    users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
-                        users_organizations::access_all.eq(true).or( // access_all in Organization
-                            users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
-                        )).or(
-                        groups::access_all.eq(true) // access_all in groups
-                    ).or( // access via groups
-                        groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
-                            collections_groups::collections_uuid.is_not_null()
-                        )
-                    )
-                ).select(collections::all_columns)
-                .first::<CollectionDb>(conn).ok()
-                .from_db()
-        }}
        if CONFIG.org_groups_enabled() {
            db_run! { conn: {
                collections::table
                    .left_join(users_collections::table.on(
                        users_collections::collection_uuid.eq(collections::uuid).and(
                            users_collections::user_uuid.eq(user_uuid.clone())
                        )
                    ))
                    .left_join(users_organizations::table.on(
                        collections::org_uuid.eq(users_organizations::org_uuid).and(
                            users_organizations::user_uuid.eq(user_uuid)
                        )
                    ))
                    .left_join(groups_users::table.on(
                        groups_users::users_organizations_uuid.eq(users_organizations::uuid)
                    ))
                    .left_join(groups::table.on(
                        groups::uuid.eq(groups_users::groups_uuid)
                    ))
                    .left_join(collections_groups::table.on(
                        collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
                            collections_groups::collections_uuid.eq(collections::uuid)
                        )
                    ))
                    .filter(collections::uuid.eq(uuid))
                    .filter(
                        users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
                            users_organizations::access_all.eq(true).or( // access_all in Organization
                                users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
                            )).or(
                            groups::access_all.eq(true) // access_all in groups
                        ).or( // access via groups
                            groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
                                collections_groups::collections_uuid.is_not_null()
                            )
                        )
                    ).select(collections::all_columns)
                    .first::<CollectionDb>(conn).ok()
                    .from_db()
            }}
        } else {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
))
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
.from_db()
}}
}
}
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {

View File

@@ -59,12 +59,7 @@ impl Device {
self.twofactor_remember = None;
}
pub fn refresh_tokens(&mut self, user: &super::User, scope: Vec<String>) -> (String, i64) {
// If there is no refresh token, we create one
if self.refresh_token.is_empty() {
use data_encoding::BASE64URL;
@@ -75,10 +70,17 @@ impl Device {
let time_now = Utc::now().naive_utc();
self.updated_at = time_now;
// ---
// Disabled these keys to be added to the JWT since they could cause the JWT to get too large
// Also, these key/value pairs are not used anywhere by either Vaultwarden or Bitwarden clients
// Because these might get used in the future, and they are added by the Bitwarden server, let's keep them, but commented out
// ---
// fn arg: orgs: Vec<super::UserOrganization>,
// ---
// let orgowner: Vec<_> = orgs.iter().filter(|o| o.atype == 0).map(|o| o.org_uuid.clone()).collect();
// let orgadmin: Vec<_> = orgs.iter().filter(|o| o.atype == 1).map(|o| o.org_uuid.clone()).collect();
// let orguser: Vec<_> = orgs.iter().filter(|o| o.atype == 2).map(|o| o.org_uuid.clone()).collect();
// let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJwtClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
@@ -93,11 +95,16 @@ impl Device {
email: user.email.clone(),
email_verified: !CONFIG.mail_enabled() || user.verified_at.is_some(),
// ---
// Disabled these keys to be added to the JWT since they could cause the JWT to get too large
// Also, these key/value pairs are not used anywhere by either Vaultwarden or Bitwarden clients
// Because these might get used in the future, and they are added by the Bitwarden server, let's keep them, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// orgowner,
// orgadmin,
// orguser,
// orgmanager,
sstamp: user.security_stamp.clone(),
device: self.uuid.clone(),
scope,
@@ -106,6 +113,14 @@ impl Device {
(encode_jwt(&claims), DEFAULT_VALIDITY.num_seconds())
}
pub fn is_push_device(&self) -> bool {
matches!(DeviceType::from_i32(self.atype), DeviceType::Android | DeviceType::Ios)
}
pub fn is_registered(&self) -> bool {
self.push_uuid.is_some()
}
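// A hedged sketch of how callers are expected to combine the two helpers
// above; the register_push_device name is illustrative, not the exact API:
//
//     if device.is_push_device() && !device.is_registered() {
//         // Only mobile devices without a saved push_uuid get registered.
//         register_push_device(&mut device, conn).await?;
//     }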
}
use crate::db::DbConn;
@@ -203,6 +218,7 @@ impl Device {
.from_db()
}}
}
pub async fn find_push_devices_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table

View File

@@ -214,7 +214,7 @@ impl UserOrganization {
}
pub fn restore(&mut self) -> bool {
if self.status < UserOrgStatus::Invited as i32 {
self.status += ACTIVATE_REVOKE_DIFF;
return true;
}
@@ -648,8 +648,7 @@ impl UserOrganization {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Accepted as i32).or(users_organizations::status.eq(UserOrgStatus::Confirmed as i32)))
.count()
.first::<i64>(conn)
.unwrap_or(0)
@@ -665,6 +664,16 @@ impl UserOrganization {
}}
}
pub async fn find_confirmed_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.load::<UserOrganizationDb>(conn)
.unwrap_or_default().from_db()
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
@@ -769,6 +778,32 @@ impl UserOrganization {
}}
}
pub async fn find_by_cipher_and_org_with_group(cipher_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.inner_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
))
.left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
.left_join(ciphers_collections::table.on(
ciphers_collections::collection_uuid.eq(collections_groups::collections_uuid).and(ciphers_collections::cipher_uuid.eq(&cipher_uuid))
))
.filter(
groups::access_all.eq(true).or( // AccessAll via groups
ciphers_collections::cipher_uuid.eq(&cipher_uuid) // ..or access to collection via group
)
)
.select(users_organizations::all_columns)
.distinct()
.load::<UserOrganizationDb>(conn).expect("Error loading user organizations with groups").from_db()
}}
}
pub async fn user_has_ge_admin_access_to_cipher(user_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
users_organizations::table

View File

@@ -172,6 +172,7 @@ use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use crate::util::NumberOrString;
impl Send {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
@@ -286,6 +287,36 @@ impl Send {
}}
}
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<i64> {
let sends = Self::find_by_user(user_uuid, conn).await;
#[allow(non_snake_case)]
#[derive(serde::Deserialize, Default)]
struct FileData {
Size: Option<NumberOrString>,
size: Option<NumberOrString>,
}
let mut total: i64 = 0;
for send in sends {
if send.atype == SendType::File as i32 {
let data: FileData = serde_json::from_str(&send.data).unwrap_or_default();
let size = match (data.size, data.Size) {
(Some(s), _) => s.into_i64(),
(_, Some(s)) => s.into_i64(),
(None, None) => continue,
};
if let Ok(size) = size {
total = total.checked_add(size)?;
};
}
}
Some(total)
}
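// checked_add keeps the summation overflow-safe: the `?` turns an
// overflowing addition into a None return instead of wrapping. A hedged
// usage sketch (the 1 GiB cap is illustrative, not a real setting):
//
//     let over_quota = match Send::size_by_user(&user.uuid, conn).await {
//         Some(total) => total > 1024 * 1024 * 1024,
//         None => true, // i64 overflow while summing: treat as over quota
//     };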
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table

View File

@@ -34,6 +34,9 @@ pub enum TwoFactorType {
EmailVerificationChallenge = 1002,
WebauthnRegisterChallenge = 1003,
WebauthnLoginChallenge = 1004,
// Special type for Protected Actions verification via email
ProtectedActions = 2000,
}
/// Local methods

View File

@@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}

View File

@@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}

View File

@@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}

View File

@@ -291,10 +291,10 @@ macro_rules! err_json {
macro_rules! err_handler {
($expr:expr) => {{
error!(target: "auth", "Unauthorized Error: {}", $expr);
return ::rocket::request::Outcome::Error((rocket::http::Status::Unauthorized, $expr));
}};
($usr_msg:expr, $log_value:expr) => {{
error!(target: "auth", "Unauthorized Error: {}. {}", $usr_msg, $log_value);
return ::rocket::request::Outcome::Error((rocket::http::Status::Unauthorized, $usr_msg));
}};
}

View File

@@ -517,6 +517,19 @@ pub async fn send_admin_reset_password(address: &str, user_name: &str, org_name:
send_email(address, &subject, body_html, body_text).await
}
pub async fn send_protected_action_token(address: &str, token: &str) -> EmptyResult {
let (subject, body_html, body_text) = get_text(
"email/protected_action",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"token": token,
}),
)?;
send_email(address, &subject, body_html, body_text).await
}
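This helper pairs with the new TwoFactorType::ProtectedActions variant. A minimal sketch of how the protected-action flow might drive it, assuming an illustrative token helper and persistence step (not the exact Vaultwarden implementation):

async fn start_protected_action(user: &User, conn: &mut DbConn) -> EmptyResult {
    // Illustrative only: mint a short token, store it under the user's
    // ProtectedActions two-factor record, then mail it to the user.
    let token = crypto::generate_email_token(6); // assumed helper
    // ...persist `token` as a TwoFactorType::ProtectedActions row here...
    send_protected_action_token(&user.email, &token).await
}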
async fn send_with_selected_transport(email: Message) -> EmptyResult {
if CONFIG.use_sendmail() {
match sendmail_transport().send(email).await {

View File

@@ -23,7 +23,7 @@
{{#if page_data.web_vault_enabled}}
<dt class="col-sm-5">Web Installed
<span class="badge bg-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning text-dark d-none" id="web-warning" title="There seems to be an update available.">Update</span>
</dt>
<dd class="col-sm-7">
<span id="web-installed">{{page_data.web_vault_version}}</span>

View File

@@ -0,0 +1,6 @@
Your Vaultwarden Verification Code
<!---------------->
Your email verification code is: {{token}}
Use this code to complete the protected action in Vaultwarden.
{{> email/email_footer_text }}

View File

@@ -0,0 +1,16 @@
Your Vaultwarden Verification Code
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Your email verification code is: <b>{{token}}</b>
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Use this code to complete the protected action in Vaultwarden.
</td>
</tr>
</table>
{{> email/email_footer }}

View File

@@ -2,10 +2,12 @@
// Web Headers and caching
//
use std::{
collections::HashMap,
io::{Cursor, ErrorKind},
ops::Deref,
};
use num_traits::ToPrimitive;
use rocket::{
fairing::{Fairing, Info, Kind},
http::{ContentType, Header, HeaderMap, Method, Status},
@@ -46,6 +48,7 @@ impl Fairing for AppHeaders {
// Remove headers which could cause websocket connection issues
res.remove_header("X-Frame-Options");
res.remove_header("X-Content-Type-Options");
res.remove_header("Permissions-Policy");
return;
}
(_, _) => (),
@@ -362,21 +365,17 @@ pub fn write_file(path: &str, content: &[u8]) -> Result<(), crate::error::Error>
}
pub fn delete_file(path: &str) -> IOResult<()> {
fs::remove_file(path)
}
pub fn get_display_size(size: i64) -> String {
const UNITS: [&str; 6] = ["bytes", "KB", "MB", "GB", "TB", "PB"];
// If we're somehow too big for a f64, just return the size in bytes
let Some(mut size) = size.to_f64() else {
return format!("{size} bytes");
};
let mut unit_counter = 0;
loop {
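// A hedged sketch of the elided loop body, assuming the usual
// divide-by-1024 stepping and two-decimal formatting of the result:
//
//     if size < 1024.0 || unit_counter >= UNITS.len() - 1 {
//         break;
//     }
//     size /= 1024.0;
//     unit_counter += 1;
//
// followed by: format!("{size:.2} {}", UNITS[unit_counter])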
@@ -644,6 +643,47 @@ fn _process_key(key: &str) -> String {
}
}
#[derive(Deserialize, Debug, Clone)]
#[serde(untagged)]
pub enum NumberOrString {
Number(i64),
String(String),
}
impl NumberOrString {
pub fn into_string(self) -> String {
match self {
NumberOrString::Number(n) => n.to_string(),
NumberOrString::String(s) => s,
}
}
#[allow(clippy::wrong_self_convention)]
pub fn into_i32(&self) -> Result<i32, crate::Error> {
use std::num::ParseIntError as PIE;
match self {
NumberOrString::Number(n) => match n.to_i32() {
Some(n) => Ok(n),
None => err!("Number does not fit in i32"),
},
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
}
}
}
#[allow(clippy::wrong_self_convention)]
pub fn into_i64(&self) -> Result<i64, crate::Error> {
use std::num::ParseIntError as PIE;
match self {
NumberOrString::Number(n) => Ok(*n),
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
}
}
}
}
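The untagged enum exists because clients serialize numbers inconsistently: the same field can arrive as a JSON number or as a string, and both must parse. A small demonstration (the FileMeta struct and values are illustrative):

#[derive(serde::Deserialize)]
struct FileMeta {
    size: NumberOrString,
}

fn demo() {
    // Both payload shapes deserialize, and into_i64 normalizes them.
    let a: FileMeta = serde_json::from_str(r#"{"size": 1048576}"#).unwrap();
    let b: FileMeta = serde_json::from_str(r#"{"size": "1048576"}"#).unwrap();
    assert_eq!(a.size.into_i64().unwrap(), 1_048_576);
    assert_eq!(b.size.into_i64().unwrap(), 1_048_576);
}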
//
// Retry methods
//
@@ -754,3 +794,11 @@ pub fn convert_json_key_lcase_first(src_json: Value) -> Value {
value => value,
}
}
/// Parses the experimental client feature flags string into a HashMap.
pub fn parse_experimental_client_feature_flags(experimental_client_feature_flags: &str) -> HashMap<String, bool> {
let feature_states =
experimental_client_feature_flags.to_lowercase().split(',').map(|f| (f.trim().to_owned(), true)).collect();
feature_states
}