Compare commits

666 Commits

Author SHA1 Message Date
Daniel García
0b28ab3be1 Merge pull request #3403 from BlackDex/update-dockerfile-and-rust
Revert setcap, update rust and crates
2023-04-02 15:39:36 +02:00
Daniel García
c5bcc340fa Merge pull request #3405 from BlackDex/fix-multiple-websocket-messages
Fix sending out multiple websocket notifications
2023-04-02 15:24:00 +02:00
BlackDex
bff54fbfdb Fix sending out multiple websocket notifications
For some reason I encountered a strange bug which resulted in sending
out multiple websocket notifications for the exact same user.

Added a `distinct()` to the query to filter out duplicate uuids.
2023-04-02 15:23:36 +02:00
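
As a sketch of that fix, a Diesel-style query with `.distinct()`; the `users_collections` table here is an illustrative stand-in, not the actual Vaultwarden schema:

```rust
use diesel::prelude::*;

diesel::table! {
    // Illustrative schema, not the actual Vaultwarden tables.
    users_collections (user_uuid, collection_uuid) {
        user_uuid -> Text,
        collection_uuid -> Text,
    }
}

// Without `.distinct()`, a join/filter can yield the same user uuid once per
// matching row, producing one websocket notification per row for that user.
fn users_to_notify(conn: &mut SqliteConnection, collection: &str) -> QueryResult<Vec<String>> {
    users_collections::table
        .filter(users_collections::dsl::collection_uuid.eq(collection))
        .select(users_collections::dsl::user_uuid)
        .distinct() // collapse duplicate uuids so each user is notified once
        .load(conn)
}
```
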
Daniel García
867c6ba056 Merge pull request #3398 from stefan0xC/dont-expect-kdf-memory-or-parallelism
always return KdfMemory and KdfParallelism
2023-04-02 15:22:42 +02:00
Daniel García
d1ecf03f44 Merge pull request #3397 from nikolaevn/feature/add-admin-reinvite-endpoint
support `/users/<uuid>/invite/resend` admin api
2023-04-02 15:21:51 +02:00
BlackDex
fc43608eec Revert setcap, update rust and crates
- Revert #3170 as discussed in #3387
  In hindsight it's better to not have this feature
- Update Dockerfile.j2 for easy version changes.
  Just change it in one place instead of multiple
- Updated Rust to the latest patched version
- Updated crates to latest available
- Pinned mimalloc to an older version, as it breaks on musl builds
2023-04-02 15:19:59 +02:00
Daniel García
15dd05c78d Merge pull request #3390 from BlackDex/fix-abort-pw-reset-on-mail-error
Fix abort on pw reset mail error
2023-04-02 15:19:53 +02:00
Nikolay Nikolaev
aa6f774f65 add check user state 2023-03-31 14:03:37 +03:00
Nikolay Nikolaev
379f885354 add mail check 2023-03-31 13:00:57 +03:00
Stefan Melmuk
39a5f2dbe8 clear kdf memory and parallelism with pbkdf2
when changing back from argon2id to PBKDF2 the unused parameters
should be set to 0.

also fix small bug in _register
2023-03-31 07:31:40 +02:00
Stefan Melmuk
0daaa9b175 always return KdfMemory and KdfParallelism
the client will ignore the value of these fields in case of `PBKDF2`
(whether they are unset or left from trying out `Argon2id` as KDF).

with `Argon2id` those fields should never be `null` but always in a
valid state. if they are `null` (how would that even happen?) the
client still assumes default values for `Argon2id` (i.e. m=64 and p=4)
and if they are set to something else login will fail anyway.
2023-03-31 01:10:28 +02:00
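
A minimal sketch of what "always return" means here, assuming the profile JSON is built with `serde_json` (the field and function names are illustrative):

```rust
use serde_json::{json, Value};

// Always emit `KdfMemory` and `KdfParallelism` (null for PBKDF2) instead of
// omitting them, so clients always see the fields in a valid state.
fn kdf_json(kdf_type: i32, iterations: u32, memory: Option<u32>, parallelism: Option<u32>) -> Value {
    json!({
        "Kdf": kdf_type,
        "KdfIterations": iterations,
        "KdfMemory": memory,           // `None` serializes as null
        "KdfParallelism": parallelism, // `None` serializes as null
    })
}
```
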
Nikolay Nikolaev
0c085d21ce fmt 2023-03-30 16:04:35 +03:00
Nikolay Nikolaev
dcaaa430f0 support /users/<uuid>/invite/resend admin api 2023-03-30 15:23:16 +03:00
BlackDex
2cda54ceff Fix password reset issues
The wrong macro was used to produce an error message when the mail
notifying the user that their password was reset failed. It was using
`error!()`, which does not return an `Err` to abort the rest of the code.

This resulted in the user's password still being reset, but the user not
being notified. This PR fixes this by using `err!()`. Also, do not set the
user object as mutable until it really is needed.

Second, when a user was using the new Argon2id KDF with custom values
like memory and parallelism, that would have rendered the password
incorrect. The endpoint which should return all the data did not
return all the new Argon2id values.

Fixes #3388

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2023-03-30 09:41:13 +02:00
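
A self-contained sketch of the difference, with a stand-in `err!()` macro that returns an `Err` the way the commit describes (the real Vaultwarden macro and call sites differ):

```rust
use log::error;

// Stand-in for Vaultwarden's `err!()`: build and return an `Err`, unlike
// `error!()`, which only logs and lets execution continue.
macro_rules! err {
    ($msg:expr) => {
        return Err($msg.to_string())
    };
}

fn reset_password(mail_sent: Result<(), String>) -> Result<(), String> {
    if let Err(e) = mail_sent {
        // Before: error!("...") only logged, execution fell through, and the
        // password was still reset while the user was never notified.
        error!("Error sending password-reset email: {e}");
        // After: abort the reset instead of continuing.
        err!("Error sending password-reset email")
    }
    // ...only reached when mailing succeeded: apply the reset here...
    Ok(())
}
```
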
Daniel García
525e6bb65a Merge pull request #3376 from jjlin/knowndevices-nopad
Decode knowndevice `X-Request-Email` as base64url with no padding
2023-03-27 09:32:25 +02:00
Jeremy Lin
62cebebd3d Decode knowndevice X-Request-Email as base64url with no padding
The clients end up removing the padding characters [1][2].

[1] https://github.com/bitwarden/clients/blob/web-v2023.3.0/libs/common/src/misc/utils.ts#L141-L143
[2] https://github.com/bitwarden/mobile/blob/v2023.3.1/src/Core/Utilities/CoreHelpers.cs#L227-L234
2023-03-27 00:03:54 -07:00
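
A sketch of the matching server-side decode using the `base64` crate's unpadded URL-safe engine (the function name is illustrative):

```rust
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine};

// The clients strip the `=` padding, so decode `X-Request-Email` with the
// unpadded URL-safe alphabet.
fn decode_request_email(header_value: &str) -> Option<String> {
    let bytes = URL_SAFE_NO_PAD.decode(header_value).ok()?;
    String::from_utf8(bytes).ok()
}
```
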
Daniel García
3646f14042 Update web vault to v2023.3.0b 2023-03-26 14:10:51 +02:00
Daniel García
813e889c97 Merge pull request #3366 from BlackDex/some-fixes
Some small fixes and updates
2023-03-25 13:31:59 +01:00
BlackDex
8bcd0ab0c6 Some small fixes and updates
- Updated workflows to use new checkout version
  This probably fixes the curl download for hadolint also.
- Updated crates including Rocket to the latest rc3 :party:
- Applied 2 nightly clippy lints to prevent future clippy issues.
2023-03-25 12:51:42 +01:00
Daniel García
5725d297b4 Merge pull request #3363 from BlackDex/gha-test
Add support for Quay.io and GHCR.io as registries
2023-03-24 17:11:58 +01:00
Daniel García
a428f05e77 Merge pull request #3354 from stefan0xC/bulk-delete-endpoints
add endpoints to bulk delete collections/groups
2023-03-24 17:09:56 +01:00
BlackDex
467ecfdc99 Add support for Quay.io and GHCR.io as registries
- Added support for Quay.io
- Added support for GHCR.io

To enable support for these container image registries the following needs to be added.

As `Actions secrets and variables` - `Secrets`
- `DOCKERHUB_TOKEN` and `DOCKERHUB_USERNAME`
- `QUAY_TOKEN` and `QUAY_USERNAME`

As `Actions secrets and variables` - `Variables` - `Repository Variables`
- `DOCKERHUB_REPO`
- `GHCR_REPO`
- `QUAY_REPO`

The `DOCKERHUB_REPO` currently configured in `Secrets` can be removed if wanted, probably best after this PR has been merged.

If one of the vars/secrets is not configured, that specific registry will be skipped!
2023-03-23 16:38:27 +01:00
Stefan Melmuk
ed8091a994 don't use assert() in production code
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2023-03-23 00:26:28 +01:00
Stefan Melmuk
56cad93e0f add endpoint to bulk delete collections 2023-03-23 00:26:28 +01:00
Stefan Melmuk
3cf67e0b8d add endpoint to bulk delete groups 2023-03-23 00:26:26 +01:00
Daniel García
5800aceb2d Update web vault to v2023.3.0 and dependencies 2023-03-22 21:30:30 +01:00
Daniel García
729b563160 Merge pull request #3332 from BlackDex/merge-clientip-with-headers
Merge ClientIp with Headers.
2023-03-15 22:28:03 +01:00
Daniel García
6b5618a5fc Merge pull request #3348 from BlackDex/update-rust-and-crates
Update Rust, MSRV and Crates
2023-03-15 22:08:06 +01:00
Daniel García
2aa72eb240 Merge pull request #3329 from jjlin/knowndevices-header
Add support for `/api/devices/knowndevice` with HTTP header params
2023-03-15 22:03:15 +01:00
BlackDex
c8655c4f89 Update Rust, MSRV and Crates
- Updated all the crates
- Updated Rust and MSRV
2023-03-15 20:41:12 +01:00
Jeremy Lin
daaa03d1b3 Add support for /api/devices/knowndevice with HTTP header params
Upstream PR: https://github.com/bitwarden/server/pull/2682
2023-03-11 12:03:05 -08:00
BlackDex
9e5b94924f Merge ClientIp with Headers.
Since we now use the `ClientIp` guard in a lot more places, it also
increases the size of the binary and of the macro-generated code because
of this extra guard. Merging the `ClientIp` guard with the several
`Header` guards we have reduces the amount of code generated
(including LLVM IR) and also gives a small speedup in build time.

I also spotted some small `json!()` optimizations which also reduced the
amount of code generated.
2023-03-11 16:58:32 +01:00
Mathijs van Veluw
f21089900e Merge pull request #3310 from BlackDex/msrv-changes
Upd Crates, Rust, MSRV, GHA and remove Backtrace
2023-03-07 12:06:05 +01:00
BlackDex
0c0e632bc9 Upd Crates, Rust, MSRV, GHA and remove Backtrace
- Changed MSRV to v1.65.
  Discussed this with @dani-garcia, and we will support **N-2**.
  This is/will be the same as for the `time` crate we use.
  Also updated the wiki regarding this https://github.com/dani-garcia/vaultwarden/wiki/Building-binary
- Removed backtrace crate in favor of `std::backtrace` stable since v1.65
- Updated Rust to v1.67.1
- Updated all the crates
- Updated the GHA action versions
- Adjusted the GHA MSRV build to extract the MSRV from `Cargo.toml`
2023-03-07 09:17:42 +01:00
Daniel García
a13a5bd1d8 Merge pull request #3315 from BlackDex/issue-3311
Fix web-vault Member UI show/edit/save
2023-03-06 21:13:34 +01:00
Daniel García
3b34b429f3 Merge pull request #3307 from jjlin/head-routes
Add HEAD routes to avoid spurious error messages
2023-03-06 21:12:54 +01:00
Daniel García
97ffd17789 Merge pull request #3289 from BlackDex/admin-token-hash-support
Admin token Argon2 hashing support
2023-03-06 21:12:41 +01:00
BlackDex
10c5476d31 Fix web-vault Member UI show/edit/save
There was a small bug left in regards to the web-vault v2023.2.0 fixes.
This PR fixes the remaining items. I think all should be addressed now.
When editing a user, you were not able to see or edit groups, or see
which collections a user belonged to.

Fixes #3311
2023-03-06 17:07:21 +01:00
Jeremy Lin
d3626eba2a Add HEAD routes to avoid spurious error messages
Rocket automatically implements a HEAD route when there's a matching GET
route, but relying on this behavior also means a spurious error gets
logged due to <https://github.com/SergioBenitez/Rocket/issues/1098>.

Add explicit HEAD routes for `/` and `/alive` to prevent uptime monitoring
services from generating error messages like `No matching routes for HEAD /`.
With these new routes, `HEAD /` only checks that the server can respond over
the network, while `HEAD /alive` also checks that the database connection is
alive, similar to `GET /alive`.
2023-03-05 09:51:42 -08:00
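
A sketch of what such explicit routes could look like in Rocket; the handler names and bodies are illustrative, not the actual Vaultwarden code:

```rust
use rocket::http::Status;

// Explicit HEAD handlers; Rocket would otherwise auto-derive them from the
// GET routes and log a spurious "No matching routes for HEAD /" error.
#[rocket::head("/")]
fn head_root() -> Status {
    Status::Ok // proves only that the server answers over the network
}

#[rocket::head("/alive")]
fn head_alive() -> Status {
    // like `GET /alive`, the real handler would also check the database
    Status::Ok
}
```
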
BlackDex
de157b2654 Admin token Argon2 hashing support
Added support for Argon2 hashing support for the `ADMIN_TOKEN` instead
of only supporting a plain text string.

The hash must be a PHC string which can be generated via the `argon2`
CLI **or** via the also built-in hash command in Vaultwarden.

You can simply run `vaultwarden hash` to generate a hash based upon a
password the user provides themselves.

Added a warning during startup and within the admin settings panel if
the `ADMIN_TOKEN` is not an Argon2 hash.

Within the admin environment a user can ignore that warning and it will
not be shown for at least 30 days. After that the warning will appear
again unless the `ADMIN_TOKEN` has been converted to an Argon2 hash.

I have also tested this on my RaspberryPi 2b and there the `Bitwarden`
preset takes almost 4.5 seconds to generate/verify the Argon2 hash.

Using the `OWASP` preset it is below 1 second, which I think should be
fine for low-graded hardware. If needed, people could use lower
memory settings, but in those cases I even doubt Vaultwarden itself
would run. They can always use the `argon2` CLI and generate a faster hash.
2023-03-04 16:15:30 +01:00
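
A sketch of verifying a PHC-formatted `ADMIN_TOKEN` with the `argon2` crate, falling back to the legacy plain-text comparison (the function name and fallback behavior are assumptions):

```rust
use argon2::{Argon2, PasswordHash, PasswordVerifier};

// Verify an `ADMIN_TOKEN` stored as an Argon2 PHC string, e.g.
// "$argon2id$v=19$m=65540,t=3,p=4$<salt>$<hash>".
fn admin_token_matches(candidate: &str, stored_token: &str) -> bool {
    match PasswordHash::new(stored_token) {
        Ok(hash) => Argon2::default()
            .verify_password(candidate.as_bytes(), &hash)
            .is_ok(),
        // Not a PHC string: assume the legacy plain-text token comparison.
        Err(_) => candidate == stored_token,
    }
}
```
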
Mathijs van Veluw
337cbfaf22 Merge pull request #3290 from dpinse/test
Fix confirmation for removing 2FA and deauthing sessions in admin panel
2023-03-01 06:38:28 +01:00
Dylan Pinsonneault
f88b6d961e Fix confirmation for removing 2FA and deauthing sessions in admin panel 2023-02-28 20:38:33 -05:00
Daniel García
0426051541 Merge pull request #3281 from BlackDex/fix-web-vault-issues
Fix the web-vault v2023.2.0 API calls
2023-02-28 23:45:59 +01:00
Daniel García
4556f668de Merge pull request #3288 from BlackDex/admin-interface-updates
Some Admin Interface updates
2023-02-28 23:43:01 +01:00
Daniel García
da8225a3bd Merge pull request #3282 from JCBird1012/main
Add confirmation for removing 2FA and deauthing sessions in admin panel
2023-02-28 23:42:47 +01:00
BlackDex
f10e6b6ac2 Some Admin Interface updates
- Updated datatables
- Added NTP Time check
- Added Collections, Groups and Events count for orgs
- Renamed `Items` to `Ciphers`
- Some small style updates
2023-02-28 20:43:22 +01:00
BlackDex
7ec00d3850 Fix the web-vault v2023.2.0 API calls
- Supports the new Collection/Group/User editing UI's
- Support `/partial` endpoint for cipher updating to allow folder and favorite update for read-only ciphers.
- Prevent `Favorite`, `Folder`, `read-only` and `hide-passwords` from being added to the organizational sync.
- Added and corrected some `Object` keys in the output JSON.

Fixes #3279
2023-02-27 16:37:58 +01:00
Jonathan Elias Caicedo
8f8d7418ed Add confirmation for removing 2FA and deauth sessions in admin panel 2023-02-24 16:24:48 -05:00
Mathijs van Veluw
af6d17b701 Merge pull request #3277 from jjlin/org-vault-display
Fix vault item display in org vault view
2023-02-23 13:45:47 +01:00
Jeremy Lin
61183d001c Fix vault item display in org vault view
In the org vault view, the Bitwarden web vault currently tries to fetch the
groups for an org regardless of whether it claims to have group support.
If this errors out, no vault items are displayed.
2023-02-22 12:17:13 -08:00
Daniel García
024d12db08 Update web vault to v2023.2.0 and dependencies 2023-02-21 22:48:20 +01:00
Daniel García
dc7951efaf Add missing collections/details endpoint, based on the existing one 2023-02-21 21:58:37 +01:00
Daniel García
06e14fea55 Merge branch 'mittler-works-search_user_by_email' 2023-02-21 21:37:28 +01:00
Nils Mittler
0f656b4889 Apply rewording 2023-02-21 21:37:24 +01:00
Nils Mittler
6fa1dc50be Apply Admin Session Lifetime to JWT 2023-02-21 21:37:24 +01:00
Nils Mittler
2bb41367bc Make the admin cookie lifetime adjustable 2023-02-21 21:37:24 +01:00
Misterbabou
20d8886bfa Fix Collection Read Only access for groups
I messed up the indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-21 21:37:23 +01:00
BlackDex
59ef82b740 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:37:23 +01:00
BlackDex
fc543154c0 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:37:23 +01:00
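
A minimal sketch of that validation, with an illustrative struct standing in for the real login payload:

```rust
// Reject the login request with a clear message instead of panicking on a
// missing field; the field names follow the commit message.
struct ConnectData {
    device_identifier: Option<String>,
    device_name: Option<String>,
    device_type: Option<String>,
}

fn validate_device_fields(data: &ConnectData) -> Result<(), String> {
    for (value, name) in [
        (&data.device_identifier, "device_identifier"),
        (&data.device_name, "device_name"),
        (&data.device_type, "device_type"),
    ] {
        if value.is_none() {
            return Err(format!("{name} cannot be blank"));
        }
    }
    Ok(())
}
```
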
r3drun3
569b464157 docs: add build status badge in readme 2023-02-21 21:37:23 +01:00
Daniel García
adf83c698d Merge branch 'mittler-works-adjustable_admin_cookie_lifetime' 2023-02-21 21:30:19 +01:00
Misterbabou
8fcbc58ee2 Fix Collection Read Only access for groups
I messed up the indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-21 21:30:15 +01:00
BlackDex
2dcbb2be59 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:30:14 +01:00
BlackDex
7026e004e1 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:30:14 +01:00
r3drun3
a3084feaee docs: add build status badge in readme 2023-02-21 21:30:14 +01:00
Daniel García
e7d36de784 Merge branch 'Misterbabou-issue-3249' 2023-02-21 21:29:13 +01:00
BlackDex
54cc47b14e Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:29:09 +01:00
BlackDex
fac44888cd Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:29:08 +01:00
r3drun3
9f056523c9 docs: add build status badge in readme 2023-02-21 21:29:08 +01:00
Daniel García
0af1ef387d Merge branch 'BlackDex-issue-3247' 2023-02-21 21:27:35 +01:00
BlackDex
f95f40be15 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:27:31 +01:00
r3drun3
5c859e2e6c docs: add build status badge in readme 2023-02-21 21:27:31 +01:00
Daniel García
03ff5e6ece Merge branch 'BlackDex-fix-client-api-login-checks' 2023-02-21 21:26:57 +01:00
r3drun3
52d696aa74 docs: add build status badge in readme 2023-02-21 21:26:53 +01:00
Daniel García
a4e80712dd Merge branch 'R3DRUN3-new_branch' 2023-02-21 21:26:26 +01:00
Nils Mittler
a947e434f0 Apply rewording 2023-02-20 17:02:14 +01:00
Nils Mittler
2eb4f290a5 Apply Admin Session Lifetime to JWT 2023-02-20 16:51:09 +01:00
Nils Mittler
8ae799a771 Add function to fetch user by email address 2023-02-20 16:39:56 +01:00
Nils Mittler
9a5f3a5015 Make the admin cookie lifetime adjustable 2023-02-20 16:10:30 +01:00
BlackDex
1ca0d6e245 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-19 18:16:06 +01:00
Misterbabou
7f69eebeb1 Fix Collection Read Only access for groups
I messed up the indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-17 14:17:18 +01:00
BlackDex
32bd9b83a3 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-16 17:29:12 +01:00
r3drun3
477d60de49 docs: add build status badge in readme 2023-02-15 10:15:42 +01:00
Mathijs van Veluw
1ba8275dcb Merge pull request #3234 from BlackDex/update-rust-and-crates
Updated Rust and crates
2023-02-13 12:39:26 +01:00
BlackDex
a0a4994250 Updated Rust and crates
- Updated Rust to v1.67.0
- Updated all crates except for `cookies` and `webauthn`
2023-02-13 08:32:01 +01:00
Daniel García
32dfa41970 Merge pull request #3147 from soruh/main
add support for system mta through sendmail
2023-02-12 19:40:33 +01:00
Daniel García
f92efda0f0 Merge branch 'main' into main 2023-02-12 19:40:04 +01:00
Daniel García
3b0f643e9d Merge pull request #3210 from tessus/feature/kdf-options
add argon2 kdf fields
2023-02-12 19:23:22 +01:00
Daniel García
5bcee24f88 Merge branch 'main' into feature/kdf-options 2023-02-12 19:23:14 +01:00
soruh
9e3d7ea44c add EXE_SUFFIX to sendmail executable when not specified 2023-02-12 18:55:15 +01:00
soruh
8cc6dac893 check if SENDMAIL_COMMAND is valid using 'which' crate 2023-02-12 18:55:15 +01:00
soruh
b7c4316c77 Add support for sendmail as a mail transport 2023-02-12 18:54:59 +01:00
Daniel García
0c295d5e6e Merge pull request #3167 from BlackDex/issue-3166
Fix Javascript issue on non sqlite databases
2023-02-12 18:48:03 +01:00
Daniel García
bc49d1f90d Merge branch 'main' into issue-3166 2023-02-12 18:47:55 +01:00
Daniel García
6f6d9dee83 Merge pull request #3108 from farodin91/allow-editing/unhiding-by-group
allow editing/unhiding by group
2023-02-12 18:47:02 +01:00
Daniel García
cef5dd4a46 Merge branch 'main' into allow-editing/unhiding-by-group 2023-02-12 18:46:53 +01:00
Daniel García
79061c0eb5 Merge pull request #3231 from kpfleming/icon-blacklist-improvements
Generate distinct log messages for regex vs. IP blacklisting.
2023-02-12 18:43:26 +01:00
Daniel García
6e2c3fc1cc Merge branch 'main' into icon-blacklist-improvements 2023-02-12 18:43:19 +01:00
Daniel García
e301fe137f Merge pull request #3228 from BlockListed/fix-domain-description
Fix trailing slash not getting removed from domain
2023-02-12 18:42:55 +01:00
Daniel García
af69c83db2 Merge branch 'main' into fix-domain-description 2023-02-12 18:42:49 +01:00
Daniel García
53fa8da5b1 Merge pull request #3215 from stefan0xC/fix-post-emergency-access
don't nullify key when editing emergency access
2023-02-12 18:42:30 +01:00
Daniel García
c58aac585b Merge branch 'main' into fix-post-emergency-access 2023-02-12 18:42:21 +01:00
Daniel García
8c1117fcbf Merge pull request #3170 from jjlin/cap_net_bind_service
Allow listening on privileged ports (below 1024) as non-root
2023-02-12 18:42:00 +01:00
Daniel García
a6dd4f1206 Merge branch 'main' into cap_net_bind_service 2023-02-12 18:41:45 +01:00
Daniel García
5af1799991 Merge pull request #3145 from dlehammer/spell-jack_mitigation
"Spell-Jacking" mitigation ~ prevent sensitive data leak …
2023-02-12 18:39:54 +01:00
Daniel García
a20a641de3 Merge branch 'main' into spell-jack_mitigation 2023-02-12 18:39:27 +01:00
Daniel García
8abd38573b Merge pull request #3116 from sirux88/admin-password-reset
Admin password reset
2023-02-12 18:38:50 +01:00
Daniel García
78abdf0e9d Merge branch 'main' into admin-password-reset 2023-02-12 18:38:36 +01:00
Daniel García
dc031d8d86 Merge pull request #2561 from BlackDex/re-license
Re-License Vaultwarden to AGPLv3
2023-02-12 18:35:35 +01:00
Daniel García
de6330b09d Merge branch 'main' into re-license 2023-02-12 18:35:09 +01:00
Helmut K. C. Tessarek
68bcc7a4b8 add argon2 kdf fields 2023-02-07 13:52:52 -05:00
BlockListed
c04a1352cb remove warn when sanitizing domain 2023-02-07 18:49:26 +01:00
BlockListed
5d1c11ceba fix trailing slash in configuration builder 2023-02-07 18:42:36 +01:00
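
A sketch of the kind of sanitization involved; the function name is illustrative:

```rust
// Strip trailing slashes from the configured domain so generated links like
// `{domain}/#/send/...` don't contain a double slash.
fn sanitize_domain(domain: &str) -> String {
    domain.trim_end_matches('/').to_string()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn strips_trailing_slash() {
        assert_eq!(sanitize_domain("https://vault.example.com/"), "https://vault.example.com");
    }
}
```
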
BlockListed
a2aa7c9bc2 Revert "fix trailing slash not being removed from domain"
This reverts commit 679bc7a59b.
2023-02-07 18:41:24 +01:00
Jan Jansen
b3a351ccb2 allow editing/unhiding by group
Fixes #2989

Signed-off-by: Jan Jansen <jan.jansen@gdata.de>
2023-02-07 16:20:36 +01:00
BlockListed
679bc7a59b fix trailing slash not being removed from domain 2023-02-07 13:03:28 +01:00
BlockListed
a72d0b518f remove documentation of bug since I'm fixing it 2023-02-07 12:48:48 +01:00
Kevin P. Fleming
6741b25907 Ensure that all results from check_domain_blacklist_reason are cached. 2023-02-07 05:54:06 -05:00
Kevin P. Fleming
24b5784f02 Generate distinct log messages for regex vs. IP blacklisting.
When an icon will not be downloaded due to matching a configured
blacklist, ensure that the log message indicates the type of blacklist
that was matched.
2023-02-07 05:24:23 -05:00
BlockListed
eb9b481eba improve wording of domain description 2023-02-07 08:49:05 +01:00
BlockListed
64edc49392 change description of domain configuration
Vaultwarden send won't work if the domain includes a trailing slash.
This should be documented, as it may lead to confusion among users.
2023-02-06 23:19:08 +01:00
sirux88
0d1753ac74 completely hide reset password policy
on email disabled instances
2023-02-05 16:47:23 +01:00
sirux88
a6558f5548 rust lang specific improvements 2023-02-05 16:34:48 +01:00
sirux88
62dfeb80f2 improved security, disabling policy usage on
email-disabled clients and some refactoring
2023-02-04 13:29:57 +01:00
sirux88
26cd5d9643 Replaced wrong mysql column type 2023-02-04 09:23:13 +01:00
Stefan Melmuk
e65fbbfc21 don't nullify key when editing emergency access
the client does not send the key on every update of an emergency access
contact so the field would be emptied on a change of the wait days or access level.
2023-02-01 23:10:09 +01:00
Jeremy Lin
a2162f4d69 Allow listening on privileged ports (below 1024) as non-root
This is done by running `setcap cap_net_bind_service=+ep` on the executable
in the build stage (doing it in the runtime stage creates an extra copy of
the executable that bloats the image). This only works when using the
BuildKit-based builder, since the `COPY` instruction doesn't copy
capabilities on the legacy builder.
2023-02-01 00:35:33 -08:00
BlackDex
c9ed9aa733 Fix Javascript issue on non sqlite databases
When a non-sqlite database is used, loading the admin interface fails
because the backup button is not generated.
This PR solves it by checking if the elements are valid.

Also made some other changes and fixed some eslint errors.
Showing `_post` errors is better now.

Update jquery to latest version.

Fixes #3166
2023-01-26 20:34:25 +01:00
Daniel Hammer
9b20decdc1 "Spell-Jacking" mitigation ~ prevent sensitive data leak from spell checker.
@see https://www.otto-js.com/news/article/chrome-and-edge-enhanced-spellcheck-features-expose-pii-even-your-passwords
2023-01-25 22:35:18 +01:00
sirux88
adaefc8628 fixes for current upstream main 2023-01-25 08:09:26 +01:00
sirux88
c6c45c4c49 working implementation 2023-01-25 08:06:21 +01:00
sirux88
95494083f2 added database migration 2023-01-25 08:06:21 +01:00
Jeremy Lin
686474f815 Disable Hadolint check for consecutive RUN instructions (DL3059)
This check doesn't seem to add enough value to justify the difficulties it
tends to create when generating `RUN` instructions from a template.
2023-01-24 13:11:13 -08:00
Jeremy Lin
2c6bd8c9dc Rename .buildx Dockerfiles to .buildkit
This is a more accurate name, since these Dockerfiles require BuildKit, not Buildx.
2023-01-24 13:11:12 -08:00
Daniel García
9366e31452 Merge pull request #3164 from jjlin/remove-arm32v6-tag
Remove `arm32v6`-specific tag
2023-01-24 21:39:25 +01:00
Jeremy Lin
96ff32fb2f Remove arm32v6-specific tag
This section of code seems to be breaking the Docker release workflow as of a
few days ago, though it's unclear why. This tag only existed to work around
an issue with Docker pulling the wrong image for ARMv6 platforms; that issue
was resolved in Docker 20.10.0, which has been out for a few years now, so it
seems like a reasonable time to drop this tag.
2023-01-24 12:33:25 -08:00
BlackDex
9342fa5744 Re-License Vaultwarden to AGPLv3
This commit prepares Vaultwarden for the Re-Licensing to AGPLv3
Solves #2450
2023-01-24 20:49:11 +01:00
Daniel García
50fc22966c Updated web vault to 2023.1.1 and rust dependencies 2023-01-24 20:39:09 +01:00
Daniel García
4fab4c74ff Merge branch 'BlackDex-update-kdf-config' 2023-01-24 20:06:30 +01:00
BlackDex
e38e1a5d5f Validate note sizes on key-rotation.
We also need to validate the note sizes on key-rotation.
If we do not validate them before we store them, that could lead to a
partial or total loss of the password vault. Validating these
restrictions before actually processing them to store/replace the
existing ciphers should prevent this.

There was also a small bug when using web-sockets. The client which is
triggering the password/key-rotation change should not be forced to
logout via a web-socket request. That is something the client will
handle itself. Refactored the logout notification to either send the
device uuid or not on specific actions.

Fixes #3152
2023-01-24 20:05:09 +01:00
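
A sketch of the note-size validation described, assuming the same 10_000-character limit mentioned in the related import fix:

```rust
// Check each incoming cipher's notes before the key-rotation replaces the
// stored data, so an oversized note can't cause a partial vault loss.
const NOTES_LIMIT: usize = 10_000;

fn validate_note_sizes(ciphers: &[(String, Option<String>)]) -> Result<(), String> {
    for (name, notes) in ciphers {
        if notes.as_deref().map_or(false, |n| n.len() > NOTES_LIMIT) {
            return Err(format!("The cipher '{name}' has notes exceeding the maximum length."));
        }
    }
    Ok(())
}
```
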
sirux88
cc91ac6cc0 include key into user.set_password 2023-01-24 20:04:05 +01:00
BlackDex
2d8c8e18f7 Update KDF Configuration and processing
- Change default Password Hash KDF Storage from 100_000 to 600_000 iterations
- Update Password Hash when the default iteration value is different
- Validate password_iterations
- Validate client-side KDF to prevent it from being set lower than 100_000
2023-01-24 19:49:12 +01:00
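
A sketch of the iteration validation, using the limits named in the commit:

```rust
// Enforce the limits named in the commit: a 600_000-iteration default and a
// 100_000-iteration floor for client-supplied PBKDF2 values.
const DEFAULT_PBKDF2_ITERATIONS: u32 = 600_000;
const MIN_PBKDF2_ITERATIONS: u32 = 100_000;

fn validate_kdf_iterations(client_value: Option<u32>) -> Result<u32, String> {
    let iterations = client_value.unwrap_or(DEFAULT_PBKDF2_ITERATIONS);
    if iterations < MIN_PBKDF2_ITERATIONS {
        return Err(format!("KDF iterations must be at least {MIN_PBKDF2_ITERATIONS}"));
    }
    Ok(iterations)
}
```
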
Daniel García
b17e2da2cf Merge branch 'BlackDex-issue-3152' 2023-01-24 19:47:20 +01:00
sirux88
d121cce0d2 include key into user.set_password 2023-01-24 19:47:14 +01:00
Daniel García
0eba7a88fa Merge branch 'sirux88-refactoring-user-setpassword' 2023-01-24 19:46:35 +01:00
BlackDex
34ac16e9d7 Validate note sizes on key-rotation.
We also need to validate the note sizes on key-rotation.
If we do not validate them before we store them, that could lead to a
partial or total loss of the password vault. Validating these
restrictions before actually processing them to store/replace the
existing ciphers should prevent this.

There was also a small bug when using web-sockets. The client which is
triggering the password/key-rotation change should not be forced to
logout via a web-socket request. That is something the client will
handle itself. Refactored the logout notification to either send the
device uuid or not on specific actions.

Fixes #3152
2023-01-24 09:30:10 +01:00
sirux88
906d9e2f1a Merge branch 'refactoring-user-setpassword' of https://github.com/sirux88/vaultwarden into refactoring-user-setpassword 2023-01-14 10:16:56 +01:00
sirux88
623d84aeb5 include key into user.set_password 2023-01-14 10:16:03 +01:00
sirux88
f8122cd2ca include key into user.set_password 2023-01-13 12:10:33 +01:00
Daniel García
9b7e86efc2 Update web vault to 2023.1.0 2023-01-12 19:49:06 +01:00
Daniel García
e7ccfbdd0e Merge branch 'BlackDex-optimize-ciphersync' 2023-01-12 19:19:01 +01:00
BlackDex
acc1474394 Add avatar color support
The new web-vault v2023.1.0 supports a custom color for the avatar.
https://github.com/bitwarden/server/pull/2330

This PR adds this feature.
2023-01-12 19:18:57 +01:00
BlackDex
c90b3031a6 Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 19:18:57 +01:00
BlackDex
aaffb2e007 Add MFA icon to org member overview
The Organization member overview supports showing an icon if the user
has MFA enabled or not. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:18:57 +01:00
GeekCorner
e0e95e95e4 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:18:57 +01:00
BlackDex
fa70b440d0 Fix remaining inline format 2023-01-12 19:18:56 +01:00
Rychart Redwerkz
42acb2ebb6 Use more modern meta tag for charset encoding 2023-01-12 19:18:56 +01:00
Daniel García
174bea8d6e Merge branch 'BlackDex-add-avatar-color-feature' 2023-01-12 19:17:22 +01:00
BlackDex
f68a57950b Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 19:17:16 +01:00
BlackDex
f747bf126b Add MFA icon to org member overview
The Organization member overview supports showing an icon if the user
has MFA enabled or not. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:17:15 +01:00
GeekCorner
1ca197fd46 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:17:15 +01:00
BlackDex
63d05d929b Fix remaining inline format 2023-01-12 19:17:15 +01:00
Rychart Redwerkz
ef5bf5d326 Use more modern meta tag for charset encoding 2023-01-12 19:17:15 +01:00
Daniel García
9d6e35d803 Merge branch 'BlackDex-update-rust-fix-cve' 2023-01-12 19:16:32 +01:00
BlackDex
0cccdcab83 Add MFA icon to org member overview
The Organization member overview supports showing an icon if the user
has MFA enabled or not. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:16:28 +01:00
GeekCorner
6607faa390 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:16:28 +01:00
BlackDex
6fcf18ab51 Fix remaining inline format 2023-01-12 19:16:28 +01:00
Rychart Redwerkz
d122c10573 Use more modern meta tag for charset encoding 2023-01-12 19:16:28 +01:00
Daniel García
ae9553ca1c Merge branch 'BlackDex-add-mfa-icon-to-orgs' 2023-01-12 19:16:16 +01:00
GeekCorner
ff919039c9 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:16:12 +01:00
BlackDex
80eb15d46a Fix remaining inline format 2023-01-12 19:16:11 +01:00
Rychart Redwerkz
c36b870c54 Use more modern meta tag for charset encoding 2023-01-12 19:16:11 +01:00
Daniel García
b7cbca590c Merge branch 'GeekCornerGH-fix/2fa_directory-csp' 2023-01-12 19:15:42 +01:00
BlackDex
606a1bbfcb Fix remaining inline format 2023-01-12 19:15:38 +01:00
Rychart Redwerkz
3e5369c8dd Use more modern meta tag for charset encoding 2023-01-12 19:15:38 +01:00
Daniel García
dd5e4cec73 Merge branch 'BlackDex-fix-remaining-inline' 2023-01-12 19:15:14 +01:00
Rychart Redwerkz
a31a040abd Use more modern meta tag for charset encoding 2023-01-12 19:15:06 +01:00
Daniel García
f0125b95c1 Merge branch 'redwerkz-patch-1' 2023-01-12 19:14:49 +01:00
BlackDex
072f2e24c2 Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 09:45:52 +01:00
BlackDex
36b5350f9b Add avatar color support
The new web-vault v2023.1.0 supports a custom color for the avatar.
https://github.com/bitwarden/server/pull/2330

This PR adds this feature.
2023-01-11 22:20:03 +01:00
BlackDex
c7489c9fdf Add MFA icon to org member overview
The Organization member overview supports showing an icon if the user
has MFA enabled or not. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-11 22:13:20 +01:00
BlackDex
3181e4e96e Optimize CipherSyncData for very large vaults
As mentioned in #3111, using a very large vault causes some issues,
mainly because of a SQLite limit, but it could also cause issues on
MariaDB/MySQL or PostgreSQL. It also uses a lot of memory and memory
allocations.

This PR solves this by removing the need for all the cipher_uuids just
to gather the correct attachments.

It will use the user_uuid and org_uuids to get all attachments linked
to both, whether the user has access to them or not. This isn't an
issue, since the matching is done per cipher and the attachment data is
only returned if there is a matching cipher the user has access to.

I also modified some code to be able to use `::with_capacity(n)` where
possible. This prevents re-allocations if the `Vec` increases size,
which will happen a lot if there are a lot of ciphers.

According to my tests measuring the time it takes to sync, it seems to
have lowered the duration a bit more.

Fixes #3111
2023-01-11 20:23:53 +01:00
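
A sketch of the `::with_capacity(n)` pattern combined with the per-user/org attachment matching the commit describes (types and names are illustrative):

```rust
use std::collections::HashMap;

// Fetch attachments once per user/org instead of by a huge list of cipher
// uuids (which hits SQLite's parameter limit), and pre-size the map so
// `Vec`/`HashMap` growth doesn't keep re-allocating on large vaults.
fn index_attachments(
    cipher_count: usize,
    attachments: Vec<(String, String)>, // (cipher_uuid, attachment_uuid)
) -> HashMap<String, Vec<String>> {
    let mut by_cipher: HashMap<String, Vec<String>> = HashMap::with_capacity(cipher_count);
    for (cipher_uuid, attachment_uuid) in attachments {
        // Attachments of ciphers the user can't see are simply never matched
        // later, so over-fetching per user/org is harmless.
        by_cipher.entry(cipher_uuid).or_default().push(attachment_uuid);
    }
    by_cipher
}
```
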
GeekCorner
2ee0d53c5f fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-10 09:41:35 +01:00
Rychart Redwerkz
dfa629ecc7 Use more modern meta tag for charset encoding 2023-01-10 00:24:37 +01:00
BlackDex
92dc48b882 Fix remaining inline format 2023-01-09 20:41:31 +01:00
Daniel García
367e1ce289 Merge pull request #3065 from BlackDex/future-clippy-fixes
Resolve uninlined_format_args clippy warnings
2023-01-09 20:17:06 +01:00
BlackDex
7390f34355 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 20:13:48 +01:00
Daniel García
c47d9f6593 Fix some lints: explicit Arc::clone, and unnecessary return after unreachable! 2023-01-09 19:54:25 +01:00
Daniel García
5399ee8208 Merge branch 'BlackDex-update-libraries' 2023-01-09 19:19:11 +01:00
pjsier
117045e6d3 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:59 +01:00
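
A minimal sketch of such a check, assuming the writability test simply tries to open the configured file for appending:

```rust
use std::fs::OpenOptions;

// If the configured LOG_FILE can't be opened for writing, report it on
// stderr (which is still available) instead of failing silently.
fn check_log_file(path: &str) {
    if let Err(e) = OpenOptions::new().create(true).append(true).open(path) {
        eprintln!("Unable to write to log file {path}: {e}");
    }
}
```
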
BlackDex
912ad64555 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:18:58 +01:00
BlackDex
00855ee31d Fix failing large note imports
When importing to Vaultwarden (or Bitwarden), notes larger than 10_000
encrypted characters are invalid, because this isn't compatible with
Bitwarden and some clients tend to break on very large notes.

We already added a check for this limit when adding a single cipher, but
this caused issues during import, and could result in a partially
imported vault. Bitwarden does some validations before actually running
it through the import process and generates a special error message
which helps the user identify which items are invalid during the import.

This PR adds that validation check and returns the same kind of error.
Fixes #3048
2023-01-09 19:18:19 +01:00
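
A sketch of a pre-import validation pass in that style; the exact error wording is approximate, not Bitwarden's verbatim message:

```rust
// Validate every item before running the import, collecting indexed messages
// so the user can find the offending entries instead of getting a partially
// imported vault.
fn validate_import_notes(notes: &[Option<String>]) -> Result<(), String> {
    let mut errors: Vec<String> = Vec::new();
    for (index, note) in notes.iter().enumerate() {
        if note.as_deref().map_or(false, |n| n.len() > 10_000) {
            errors.push(format!(
                "[{index}].Notes: exceeds the maximum encrypted length of 10000 characters."
            ));
        }
    }
    if errors.is_empty() {
        Ok(())
    } else {
        Err(errors.join("\n"))
    }
}
```
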
pjsier
c18a273b4a Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:18 +01:00
pjsier
ca24a4adf1 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:18 +01:00
BlackDex
a263aaa481 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:18:18 +01:00
Rychart Redwerkz
0a20ba0020 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:18:09 +01:00
Jeremy Lin
6541600af6 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:17:46 +01:00
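
A sketch of returning the revision date as JSON in Rocket (the route path follows the commit; the handler and value are illustrative):

```rust
use rocket::serde::json::Json;

// Serve the revision date as `application/json` (a bare number is valid
// JSON) so newer clients parse it and skip unnecessary full syncs.
#[rocket::get("/accounts/revision-date")]
fn revision_date() -> Json<i64> {
    Json(1_673_280_000_000) // placeholder epoch-millis value
}
```
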
Daniel García
525979d5d9 Merge branch 'BlackDex-issue-3048' 2023-01-09 19:17:30 +01:00
pjsier
7dd1959eba Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:17:13 +01:00
pjsier
e266b39254 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:17:13 +01:00
BlackDex
e935989fee Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:17:13 +01:00
Rychart Redwerkz
25c401f64d Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:17:03 +01:00
Jeremy Lin
18b72da657 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:16:47 +01:00
Daniel García
e8e6c89927 Merge branch 'BlackDex-future-clippy-fixes' 2023-01-09 19:16:35 +01:00
pjsier
fd5f657334 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:16:30 +01:00
Rychart Redwerkz
da9605f2d2 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:16:30 +01:00
pjsier
7030de32d5 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:16:30 +01:00
Jeremy Lin
b67c5b77be Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:16:29 +01:00
BlackDex
d30878c4ea Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:12:51 +01:00
BlackDex
6be26f0a38 Fix failing large note imports
When importing to Vaultwarden (or Bitwarden), notes larger than 10_000
encrypted characters are invalid, because this isn't compatible with
Bitwarden and some clients tend to break on very large notes.

We already added a check for this limit when adding a single cipher, but
this caused issues during import, and could result in a partially
imported vault. Bitwarden does some validations before actually running
it through the import process and generates a special error message
which helps the user identify which items are invalid during the import.

This PR adds that validation check and returns the same kind of error.
Fixes #3048
2023-01-09 19:11:58 +01:00
Daniel García
34a6bfaefa Merge branch 'stapelkai-main' 2023-01-09 19:11:31 +01:00
Jeremy Lin
1c8749eb4d Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:11:27 +01:00
Andrés Maldonado
1198c36a2b Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-09 19:11:27 +01:00
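
A sketch using the `percent_encoding` crate; the link format is an illustrative assumption:

```rust
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

// Percent-encode the organization name before embedding it in an email link
// so spaces and other characters don't break the URL in stricter clients.
fn accept_org_link(base: &str, org_id: &str, org_name: &str) -> String {
    let encoded = utf8_percent_encode(org_name, NON_ALPHANUMERIC);
    format!("{base}/#/accept-organization/?organizationId={org_id}&organizationName={encoded}")
}
```
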
BlackDex
41e6c1a383 Optimize config loading messages
As kinda discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

But to make the whole process a bit nicer, this PR adjusts the way
it reports issues and makes some small changes to the messages to make
it all a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added a **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 19:11:27 +01:00
BlackDex
0042c3e4a7 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect and should have been the device_uuid
from the client device executing the request. The clients will ignore
the websocket request if the uuid matches. This also fixes some issues
with the Desktop client which is able to modify attachments within the
same screen and causes an issue when saving the attachment afterwards.

Also changed the way to handle removed attachments, since that causes an
error saving the vault cipher afterwards, complaining about a missing
attachment. Bitwarden ignores this, and continues with the remaining
attachments (if any). This also fixes #2591 .

Further some more websocket notifications have been added to some other
functions which enhance the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extra notifications to the send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 19:11:27 +01:00
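
A sketch of carrying the acting device's uuid as the `ContextId` (the payload shape is illustrative; the real protocol differs):

```rust
use serde_json::{json, Value};

// Put the acting device's uuid in the notification's `ContextId` (instead of
// the old `app_id`), so the triggering client recognizes and ignores its own
// update; pass `None` for actions every device must act on (e.g. logout).
fn cipher_update_message(cipher_uuid: &str, acting_device_uuid: Option<&str>) -> Value {
    json!({
        "ContextId": acting_device_uuid,
        "Type": "SyncCipherUpdate", // naming aligned with Bitwarden's UpdateType
        "Payload": { "Id": cipher_uuid },
    })
}
```
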
pjsier
724190f262 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:11:26 +01:00
BlackDex
6867d23ca2 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 19:11:26 +01:00
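
A sketch of that startup check:

```rust
// Reject an unusable `YUBICO_SERVER` at config-validation time instead of
// letting the yubikey flow fail silently at runtime.
fn validate_yubico_server(value: &str) -> Result<(), String> {
    if !value.starts_with("https://") {
        return Err("`YUBICO_SERVER` must start with 'https://'".into());
    }
    Ok(())
}
```
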
BlackDex
de26af0c2d Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Run eslint over javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used at the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used at one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 19:11:26 +01:00
Alex Martel
3f223a7514 Remove patched multer-rs 2023-01-09 19:11:25 +01:00
Daniel García
23f5a62d61 Merge branch 'jjlin-json-response' 2023-01-09 19:11:00 +01:00
Andrés Maldonado
81e2054f59 Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-09 19:10:57 +01:00
BlackDex
f9337effa5 Optimize config loading messages
As kinda discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

But to make the whole process a bit nicer, this PR adjusts the way
it reports issues and makes some small changes to the messages to make
it all a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added a **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 19:10:56 +01:00
BlackDex
2972904eb8 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect and should have been the device_uuid
from the client device executing the request. The clients will ignore
the websocket request if the uuid matches. This also fixes some issues
with the Desktop client which is able to modify attachments within the
same screen and causes an issue when saving the attachment afterwards.

Also changed the way to handle removed attachments, since that causes an
error saving the vault cipher afterwards, complaining about a missing
attachment. Bitwarden ignores this, and continues with the remaining
attachments (if any). This also fixes #2591 .

Further some more websocket notifications have been added to some other
functions which enhance the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extra notifications to the send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 19:10:56 +01:00
pjsier
bdd918b4d4 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:10:56 +01:00
BlackDex
88085fe17b Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 19:10:56 +01:00
BlackDex
2020a302d0 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Run eslint over javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used at the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used at one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 19:10:55 +01:00
Alex Martel
ab2dd0f300 Remove patched multer-rs 2023-01-09 19:10:55 +01:00
BlackDex
8e6fd4b4a1 Update dependencies and MSRV
- Updated dependencies.
  This includes a yanked openssl crate version we currently use.
- Updated MSRV to v1.61.0 because hashbrown/cached has this version restriction.
2023-01-09 19:10:11 +01:00
Daniel García
988d24927e Merge branch 'am97-main' 2023-01-09 18:25:41 +01:00
BlackDex
e945d16fcf Optimize config loading messages
As kinda discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

But to make the whole process a bit nicer, this PR adjusts the way
it reports issues and makes some small changes to the messages to make
it all a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added a **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 18:25:36 +01:00
BlackDex
f1c0aa4f83 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect and should have been the device_uuid
from the client device executing the request. The clients will ignore
the websocket request if the uuid matches. This also fixes some issues
with the Desktop client which is able to modify attachments within the
same screen and causes an issue when saving the attachment afterwards.

Also changed the way to handle removed attachments, since that causes an
error saving the vault cipher afterwards, complaining about a missing
attachment. Bitwarden ignores this, and continues with the remaining
attachments (if any). This also fixes #2591 .

Further some more websocket notifications have been added to some other
functions which enhance the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extra notifications to the send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 18:25:36 +01:00
pjsier
68362d06b3 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:25:36 +01:00
BlackDex
f65c0e2ac8 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:25:36 +01:00
BlackDex
0f588ced03 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Run eslint over javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used at the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used at one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:25:35 +01:00
Alex Martel
b0f03bb49c Remove patched multer-rs 2023-01-09 18:25:35 +01:00
Daniel García
5063661028 Merge branch 'BlackDex-optimize-config-loading-messages' 2023-01-09 18:25:25 +01:00
BlackDex
7e66ab78ff Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect and should have been the device_uuid
from the client device executing the request. The clients will ignore
the websocket request if the uuid matches. This also fixes some issues
with the Desktop client which is able to modify attachments within the
same screen and causes an issue when saving the attachment afterwards.

Also changed the way to handle removed attachments, since that causes an
error saving the vault cipher afterwards, complaining about a missing
attachment. Bitwarden ignores this, and continues with the remaining
attachments (if any). This also fixes #2591 .

Further some more websocket notifications have been added to some other
functions which enhance the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extra notifications to the send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 18:25:18 +01:00
pjsier
665e275dc5 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:25:18 +01:00
BlackDex
a6da728cca Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:25:17 +01:00
BlackDex
04e02d7f9f Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Run eslint over javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used at the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used at one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:25:17 +01:00
Alex Martel
7c739dd58e Remove patched multer-rs 2023-01-09 18:25:17 +01:00
Daniel García
05a552910c Merge branch 'BlackDex-update-notifications' 2023-01-09 18:24:00 +01:00
pjsier
c990837066 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:23:56 +01:00
BlackDex
57aec37507 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:23:56 +01:00
BlackDex
0c5b4476ad Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Run eslint over javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used at the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used at one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:23:56 +01:00
Alex Martel
17141147a8 Remove patched multer-rs 2023-01-09 18:23:55 +01:00
Daniel García
193c2fa860 Merge branch 'pjsier-fix/log-file-permissions-3055' 2023-01-09 18:23:28 +01:00
BlackDex
6d01aaa80f Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is set to an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:23:24 +01:00
BlackDex
ad60eaa0f3 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from the CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed the `AdminTemplateData` struct to be smaller.
  The `config` was always added, but it is only used on one page.
  The same goes for `can_backup` and `version`.
- Also inlined the CSS.
  We can't remove `unsafe-inline` for CSS, because that currently seems to
  break the web-vault. That might need some further checks.
  But for now the 404 page and all the admin pages are free of inline scripts and styles.
2023-01-09 18:23:23 +01:00
Alex Martel
d878face07 Remove patched multer-rs 2023-01-09 18:23:23 +01:00
Daniel García
8bf8388cd6 Merge branch 'BlackDex-issue-3003' 2023-01-09 18:23:11 +01:00
BlackDex
b4db853bcb Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from the CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed the `AdminTemplateData` struct to be smaller.
  The `config` was always added, but it is only used on one page.
  The same goes for `can_backup` and `version`.
- Also inlined the CSS.
  We can't remove `unsafe-inline` for CSS, because that currently seems to
  break the web-vault. That might need some further checks.
  But for now the 404 page and all the admin pages are free of inline scripts and styles.
2023-01-09 18:23:07 +01:00
Alex Martel
5ee94c0ba9 Remove patched multer-rs 2023-01-09 18:23:07 +01:00
Daniel García
f108349547 Merge branch 'BlackDex-remove-inline-js' 2023-01-09 18:22:55 +01:00
Alex Martel
d25e1ab94b Remove patched multer-rs 2023-01-09 18:22:50 +01:00
Daniel García
79fee269ee Merge branch 'manofthepeace-removePatchedMulter' 2023-01-09 18:22:39 +01:00
Rychart Redwerkz
ffe362f856 Merge pull request #2 from stapelkai/update-viewport-meta-tag
Remove `shrink-to-fit=no`
2023-01-09 02:18:29 +01:00
Rychart Redwerkz
04bb15a802 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is no longer part of the recommended viewport meta tag.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-08 23:18:55 +01:00
Jeremy Lin
4d9d649db9 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-07 10:41:28 -08:00
Andrés Maldonado
2897c24e83 Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-03 12:51:44 +01:00
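A sketch of the fix using the `percent-encoding` crate; the link format below is made up for illustration:

```rust
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

fn main() {
    // A space would break the generated link in stricter email clients.
    let org_name = "My Cool Org";
    let encoded = utf8_percent_encode(org_name, NON_ALPHANUMERIC).to_string();
    let link = format!("https://vault.example.com/#/accept?organizationName={encoded}");
    println!("{link}"); // ...organizationName=My%20Cool%20Org
}
```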
BlackDex
5964dc95f0 Optimize config loading messages
As kinda discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

To make the whole process a bit nicer, this PR adjusts the way
issues are reported and makes some small changes to the messages so it
is all a bit clearer.

- Do not report a missing `.env` file; only send a message when one is used.
- Exit instead of panic. A panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 so it is distinct from the other
  exit codes we use.
- Exit on more issues, since continuing could cause
  configuration problems if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added an **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-02 18:18:28 +01:00
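A rough sketch of the reporting behaviour described above; the names and messages are illustrative, not the actual Vaultwarden code:

```rust
use std::io::ErrorKind;
use std::process::exit;

fn load_env_file(path: &str) {
    match std::fs::read_to_string(path) {
        // Only announce the env file when one is actually present and used.
        Ok(_) => println!("INFO: Using environment file {path}"),
        // A missing file is not an error and is not reported.
        Err(e) if e.kind() == ErrorKind::NotFound => {}
        // Anything else is fatal: exit cleanly with a distinctive exit
        // code instead of panicking with a stacktrace.
        Err(e) => {
            eprintln!("Error loading the environment file {path}: {e}");
            exit(255);
        }
    }
}

fn main() {
    load_env_file(".env");
}
```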
BlackDex
613b2519ed Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from the CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed the `AdminTemplateData` struct to be smaller.
  The `config` was always added, but it is only used on one page.
  The same goes for `can_backup` and `version`.
- Also inlined the CSS.
  We can't remove `unsafe-inline` for CSS, because that currently seems to
  break the web-vault. That might need some further checks.
  But for now the 404 page and all the admin pages are free of inline scripts and styles.
2022-12-31 22:17:16 +01:00
BlackDex
996b60e43d Update WebSocket Notifications
Previously the websocket notifications used `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
of the client device executing the request. The clients ignore a
websocket notification if the uuid matches their own. This also fixes some
issues with the Desktop client, which can modify attachments within the
same screen and ran into an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused an
error when saving the vault cipher afterwards, complaining about a missing
attachment. Bitwarden ignores this and continues with the remaining
attachments (if any). This also fixes #2591.

Further, some more websocket notifications have been added to some other
functions, which enhances the user experience.

- Log out users when deauthed, their password is changed, or their keys are rotated
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match the Bitwarden naming.
2022-12-31 20:39:53 +01:00
Alex Martel
a6d09407b9 Remove patched multer-rs 2022-12-29 12:09:53 -05:00
pjsier
f2e9ddef4e Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-12-29 07:21:04 -06:00
BlackDex
ca417d3257 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2022-12-29 12:45:18 +01:00
Daniel García
10dadfca06 Update web vault to 2022.12.0 2022-12-18 20:37:01 +01:00
Daniel García
bf73a8235f Merge branch 'BlackDex-fix-yubikey-panic' 2022-12-18 20:32:10 +01:00
BlackDex
67a584c1d4 Disable groups by default and some optimizations
- Put groups support behind a feature flag, disabled by default.
  The reason is that it has some known issues, but we want to keep
  optimizing this feature. Putting it behind a feature flag could help
  some users, and lets the developers keep optimizing it without too
  much trouble.

Further:

- Updated Rust to v1.66.0
- Updated GHA workflows
- Updated Alpine to 3.17
- Updated jquery to v3.6.2
- Moved jdenticon.js to load at the bottom, which fixes an issue on Chromium
- Added an autocomplete attribute to the admin login password field
- Added some extra CSP options (tested on Safari, Firefox, Chrome, Bitwarden Desktop)
- Moved the uppercase conversion from runtime to compile-time using `paste`
  for building the environment variables, which lowers heap allocations.
2022-12-18 20:32:06 +01:00
BlackDex
8e5f03972e Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device-type
when possible and use that for the logging.

Fixes #2985
2022-12-18 20:32:06 +01:00
Daniel García
d8abf8f98f Merge branch 'BlackDex-some-optimizations' 2022-12-18 20:31:22 +01:00
BlackDex
cb348d2e05 Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device-type
when possible and use that for the logging.

Fixes #2985
2022-12-18 20:31:17 +01:00
Daniel García
aceb111024 Merge branch 'BlackDex-issue-2985' 2022-12-18 20:25:46 +01:00
BlackDex
b60a4a68c7 Fix a panic during Yubikey register/login
The yubico crate uses blocking reqwest, and we called its `verify` from
an async context. To prevent issues we need to wrap it within a
`spawn_blocking`.
2022-12-18 17:57:35 +01:00
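The pattern, roughly, with a stand-in `verify_otp` for the yubico crate's blocking `verify` call (the signature is hypothetical):

```rust
// `verify_otp` stands in for the yubico crate's blocking, reqwest-based
// verification; its signature here is made up for the sketch.
fn verify_otp(otp: &str) -> Result<bool, String> {
    Ok(!otp.is_empty())
}

// Wrapping the blocking call in `spawn_blocking` moves it to a dedicated
// thread pool so it cannot stall the async runtime and cause a panic.
async fn verify_yubikey(otp: String) -> Result<bool, String> {
    tokio::task::spawn_blocking(move || verify_otp(&otp))
        .await
        .map_err(|e| format!("blocking task failed: {e}"))?
}

#[tokio::main]
async fn main() {
    println!("{:?}", verify_yubikey("ccccccexample".into()).await);
}
```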
BlackDex
8b6dfe48b7 Disable groups by default and some optimizations
- Put groups support behind a feature flag, disabled by default.
  The reason is that it has some known issues, but we want to keep
  optimizing this feature. Putting it behind a feature flag could help
  some users, and lets the developers keep optimizing it without too
  much trouble.

Further:

- Updated Rust to v1.66.0
- Updated GHA workflows
- Updated Alpine to 3.17
- Updated jquery to v3.6.2
- Moved jdenticon.js to load at the bottom, which fixes an issue on Chromium
- Added an autocomplete attribute to the admin login password field
- Added some extra CSP options (tested on Safari, Firefox, Chrome, Bitwarden Desktop)
- Moved the uppercase conversion from runtime to compile-time using `paste`
  for building the environment variables, which lowers heap allocations.
2022-12-16 14:52:42 +01:00
BlackDex
6154e03c05 Fix recover-2fa not working.
When audit logging was introduced, a small bug slipped in that prevented
recover-2fa from working.

This PR fixes that by adding a new headers check to extract the device-type
when possible and use that for the logging.

Fixes #2985
2022-12-15 15:57:30 +01:00
Daniel García
d0b53a6a3d Update web vault to v2022.11.2 2022-12-12 23:11:46 +01:00
Daniel García
317aa679cf Merge branch 'BlackDex-issue-2975' 2022-12-12 22:56:32 +01:00
BlackDex
8d1bc2e539 Fix org export (again)
It looks like Bitwarden, in the end, didn't change the export feature
in v2022.11.0, and has now put it in v2023.1.0.

This patch changes ours to match that version.
Before those new clients are released, we should check whether they
change it again, and adjust where needed.
2022-12-12 22:56:14 +01:00
BlackDex
50c46f6e9a Remove ctrlc crate and some updates
- Removed the ctrlc crate and use the tokio-provided ctrl_c function.
- Updated some crates.
2022-12-12 22:56:10 +01:00
Helmut K. C. Tessarek
4f1928778a use 32x32 favicon for consistency 2022-12-12 22:56:09 +01:00
Helmut K. C. Tessarek
5fcba3d7f5 use black favicon for /admin 2022-12-12 22:56:09 +01:00
Helmut K. C. Tessarek
4db42b07c4 Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:56:09 +01:00
BlackDex
cd3e2d7a5a Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:56:09 +01:00
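A toy version of the idea: keep structural separators visible so the shape of a value (say, a database URL) stays recognizable, and mask everything else. Vaultwarden's actual masking differs in detail.

```rust
/// Keep separators such as ':', '/' and ',' visible; mask the rest.
/// A toy sketch of the described behaviour, not the real implementation.
fn mask(value: &str) -> String {
    value
        .chars()
        .map(|c| if matches!(c, ':' | '/' | ',') { c } else { '*' })
        .collect()
}

fn main() {
    assert_eq!(mask("mysql://user:pw@host"), "*****://****:*******");
}
```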
Daniel García
d139e22042 Merge branch 'BlackDex-fix-org-export' 2022-12-12 22:55:56 +01:00
BlackDex
892296e6d5 Remove ctrlc crate and some updates
- Removed the ctrlc crate and use the tokio-provided ctrl_c function.
- Updated some crates.
2022-12-12 22:55:17 +01:00
Helmut K. C. Tessarek
992ef399ed use 32x32 favicon for consistency 2022-12-12 22:55:17 +01:00
Helmut K. C. Tessarek
5afba46743 use black favicon for /admin 2022-12-12 22:55:16 +01:00
Helmut K. C. Tessarek
df0aa7949e Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:55:16 +01:00
BlackDex
353d2e6e01 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:55:16 +01:00
Daniel García
f9375bb215 Merge branch 'BlackDex-replace-ctrlc-crate' 2022-12-12 22:55:06 +01:00
Helmut K. C. Tessarek
8d04ff66e7 use 32x32 favicon for consistency 2022-12-12 22:55:02 +01:00
Helmut K. C. Tessarek
e649b11511 use black favicon for /admin 2022-12-12 22:55:02 +01:00
Helmut K. C. Tessarek
bda19bdddf Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:55:01 +01:00
BlackDex
99fd92df21 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:55:01 +01:00
Daniel García
1210310063 Merge branch 'tessus-fix/admin-icon' 2022-12-12 22:54:49 +01:00
Helmut K. C. Tessarek
b093384385 Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-12 22:54:45 +01:00
BlackDex
cec45ae9bd Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:54:45 +01:00
Daniel García
e6dd584dd6 Merge branch 'tessus-fix/env-template' 2022-12-12 22:54:34 +01:00
BlackDex
7cc74dabaf Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-12 22:54:30 +01:00
Daniel García
2336f102f9 Merge branch 'BlackDex-issue-2929' 2022-12-12 22:53:48 +01:00
BlackDex
cebe0f6442 Remove ctrlc crate and some updates
- Removed the ctrlc crate and use the tokio-provided ctrl_c function.
- Updated some crates.
2022-12-12 12:58:48 +01:00
BlackDex
d9c0c23819 Revert collection queries back to left_join
Using the `inner_join` seems to cause issues, even though I had tested
it beforehand. Reverting back to `left_join` seems to solve the issue
for me.

Fixes #2975
2022-12-12 12:21:48 +01:00
BlackDex
aa355a96f9 Fix org export (again)
It looks like Bitwarden, in the end, didn't change the export feature
in v2022.11.0, and has now put it in v2023.1.0.

This patch changes ours to match that version.
Before those new clients are released, we should check whether they
change it again, and adjust where needed.
2022-12-12 11:17:34 +01:00
BlackDex
4a85dd2480 Increase privacy of masked config
This changes the masking function to hide a bit more information from
the generated support string. It will still show separators like `://`
and `,`, but other characters will be hidden.

Also made some small changes to some keys which all showed up as
`Internal` on the Settings page.

Fixes #2929
2022-12-10 17:55:59 +01:00
Helmut K. C. Tessarek
213909baa5 use 32x32 favicon for consistency 2022-12-09 19:09:35 -05:00
Helmut K. C. Tessarek
6915a60332 use black favicon for /admin 2022-12-09 17:32:59 -05:00
Helmut K. C. Tessarek
52a50e9ade Improve comments
- The first one was not a proper sentence.
- The second one mixed passive and active form in the second part of the sentence.
2022-12-09 16:31:40 -05:00
Daniel García
b7c9a346c1 Merge branch 'stefan0xC-use-custom-404-page' 2022-12-08 20:43:38 +01:00
BlackDex
2d90c6ac24 Fix managers and groups link
This PR should fix the managers and group link.
Although I think there might be a cleaner solution, there are a lot of
other items to fix here, which we should do in time.

But for now, with the group support already merged, this fix should at
least help solve issue #2932.

Fixes #2932
2022-12-08 20:43:34 +01:00
Daniel García
7f7b5447fd Merge branch 'BlackDex-issue-2932-take2' 2022-12-08 20:43:15 +01:00
BlackDex
142f7bb50d Fix managers and groups link
This PR should fix the managers and group link.
Although I think there might be a cleaner solution, there are a lot of
other items to fix here, which we should do in time.

But for now, with the group support already merged, this fix should at
least help solve issue #2932.

Fixes #2932
2022-12-08 12:37:21 +01:00
Stefan Melmuk
d209df9e10 use a custom 404 page
to customize the 404 page you can copy the handlebar template
`src/static/templates/404.hbs` to the TEMPLATES_FOLDER (defaults to
`data/templates/`)
2022-12-05 00:08:46 +01:00
Daniel García
1b56f4266b Merge branch 'BlackDex-sql-debugging' 2022-12-04 23:17:52 +01:00
BlackDex
d6dc6070f3 Fix admin repost warning.
Currently, when you log in to the admin interface and then directly hit
the save button, you get a re-post/re-submit warning.
This has to do with the `window.location.reload()` function, which
triggers the admin login POST again.

By changing the way the page is reloaded, we prevent this re-post.
2022-12-04 23:17:49 +01:00
BlackDex
d66323b742 Limit Cipher Note encrypted string size
As discussed in #2937, this limits encrypted notes to 10,000
characters, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will
prevent those ciphers from being updated.

Fixes #2937
2022-12-04 23:17:48 +01:00
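A sketch of such a bound; names and the error message are illustrative:

```rust
/// Reject encrypted notes longer than 10,000 characters, matching the
/// upstream Bitwarden limit. Existing oversized ciphers still load;
/// they just can't be saved again. (Sketch only, not the real code.)
fn validate_cipher_notes(notes: Option<&str>) -> Result<(), String> {
    if let Some(n) = notes {
        if n.len() > 10_000 {
            return Err("The field Notes exceeds the maximum encrypted value length of 10000 characters.".into());
        }
    }
    Ok(())
}

fn main() {
    let too_long = "a".repeat(10_001);
    assert!(validate_cipher_notes(Some(&too_long)).is_err());
    assert!(validate_cipher_notes(Some("short note")).is_ok());
}
```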
BlackDex
7b09d74b1f Update dependencies for Rust and Admin interface.
- Updated Rust deps and made one small change regarding chrono
- Updated the Bootstrap 5 CSS
- Updated DataTables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:17:48 +01:00
BlackDex
c0e3c2c5e1 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:17:48 +01:00
Daniel García
06189a58fe Merge branch 'BlackDex-fix-admin-repost' 2022-12-04 23:16:54 +01:00
BlackDex
f402dd81bb Limit Cipher Note encrypted string size
As discussed in #2937, this limits encrypted notes to 10,000
characters, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will
prevent those ciphers from being updated.

Fixes #2937
2022-12-04 23:16:50 +01:00
BlackDex
c885bbc947 Update dependencies for Rust and Admin interface.
- Updated Rust deps and made one small change regarding chrono
- Updated the Bootstrap 5 CSS
- Updated DataTables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:16:50 +01:00
BlackDex
63fb0e5a57 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:16:49 +01:00
Daniel García
37d0792a7d Merge branch 'BlackDex-issue-2937' 2022-12-04 23:15:08 +01:00
BlackDex
c8040d2f63 Update dependencies for Rust and Admin interface.
- Updated Rust deps and made one small change regarding chrono
- Updated the Bootstrap 5 CSS
- Updated DataTables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 23:15:03 +01:00
BlackDex
dbcad65b68 Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:15:03 +01:00
Daniel García
226da67bc0 Merge branch 'BlackDex-update-deps' 2022-12-04 23:14:41 +01:00
BlackDex
fee2b5c3fb Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-04 23:14:37 +01:00
Daniel García
6bbb3d53ae Merge branch 'BlackDex-emergency-access-cleanup' 2022-12-04 23:14:02 +01:00
BlackDex
610b183cef Update dependencies for Rust and Admin interface.
- Updated Rust deps and made one small change regarding chrono
- Updated the Bootstrap 5 CSS
- Updated DataTables
- Replaced identicon.js with jdenticon.
  identicon.js is unmaintained ( https://github.com/stewartlord/identicon.js/issues/52 )
  The icons are very different, but nice. It also doesn't need custom
  code to find and update the icons ourselves.
2022-12-04 18:38:46 +01:00
BlackDex
1b64b9e164 Add dev-only query logging support
This PR adds query logging support as an optional feature.
It is only allowed during development/debug builds, and will abort when
used during a `--release` build.

For this feature to be fully activated you also need to set the
environment variable `QUERY_LOGGER=1` to activate the debug log-level
for this crate, else there will be no output.

The reason for this PR is that it is sometimes useful to be able to see
the generated queries, like when debugging an issue or trying to
optimize a query. Until now I always added this code when needed, but
having it as part of the code base could benefit other developers who
may need it too.
2022-12-03 18:36:46 +01:00
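Roughly the gating described above, as a sketch; the names are illustrative, not the actual implementation:

```rust
/// Enable query logging only when QUERY_LOGGER=1 is set, and refuse to
/// run at all if that is attempted in a release build. Sketch only.
fn maybe_enable_query_logging() {
    if std::env::var("QUERY_LOGGER").as_deref() != Ok("1") {
        return; // compiled in, but not requested: stay silent
    }
    if !cfg!(debug_assertions) {
        // Abort instead of allowing verbose query logs in production.
        eprintln!("Query logging is only supported in debug builds");
        std::process::exit(255);
    }
    println!("DEBUG: query logging enabled");
}

fn main() {
    maybe_enable_query_logging();
}
```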
BlackDex
b022be9ba8 Fix admin repost warning.
Currently, when you log in to the admin interface and then directly hit
the save button, you get a re-post/re-submit warning.
This has to do with the `window.location.reload()` function, which
triggers the admin login POST again.

By changing the way the page is reloaded, we prevent this re-post.
2022-12-03 17:28:25 +01:00
BlackDex
7f11363725 Limit Cipher Note encrypted string size
As discussed in #2937, this limits encrypted notes to 10,000
characters, the same as Bitwarden.
This will not break current ciphers which exceed this limit, but it will
prevent those ciphers from being updated.

Fixes #2937
2022-12-02 16:25:11 +01:00
BlackDex
4aa6dd22bb Cleanups and Fixes for Emergency Access
- Several cleanups and code optimizations for Emergency Access
- Fixed a race-condition regarding jobs for Emergency Access
- Some other small changes like `allow(clippy::)` removals

Fixes #2925
2022-12-02 09:44:23 +01:00
Daniel García
8feed2916f Update web vault to v2022.11.1 2022-12-01 22:53:47 +01:00
Daniel García
59eaa0aa0d Merge branch 'stefan0xC-forward-to-admin-login' 2022-12-01 22:41:00 +01:00
Stefan Melmuk
d5e54cb576 only check sqlite parent if there could be one 2022-12-01 22:38:59 +01:00
Stefan Melmuk
8837660ba7 check if sqlite folder exists
instead of creating the parent folders for an sqlite database,
vaultwarden should just exit if they do not exist.

this should fix issues like #2835 where a wrongly configured
`DATABASE_URL` falls back to using sqlite
2022-12-01 22:38:59 +01:00
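These two commits together amount to something like the sketch below; the empty-parent guard corresponds to the "only check sqlite parent if there could be one" follow-up. Paths and messages are illustrative:

```rust
use std::path::Path;
use std::process::exit;

/// If the sqlite file is supposed to live inside a folder, require that
/// folder to already exist instead of silently creating it.
fn check_sqlite_parent(db_url: &str) {
    if let Some(parent) = Path::new(db_url).parent() {
        // "db.sqlite3" in the current directory has an empty parent: skip.
        if !parent.as_os_str().is_empty() && !parent.is_dir() {
            eprintln!(
                "SQLite database folder `{}` does not exist or is not a directory",
                parent.display()
            );
            exit(1);
        }
    }
}

fn main() {
    check_sqlite_parent("data/db.sqlite3");
}
```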
BlackDex
464a489b44 Update Vaultwarden Logos
Updated the logos so the `V` is more visible.
The cog itself is also better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 22:38:59 +01:00
BlackDex
7035700c8d Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it logs a lot and adds
extra database transactions.

All events are covered except a few for features we do not support
(yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-12-01 22:38:59 +01:00
Daniel García
23c2921690 Merge branch 'stefan0xC-check-sqlite-folder' 2022-12-01 22:36:01 +01:00
BlackDex
7d506f3633 Update Vaultwarden Logos
Updated the logos so the `V` is more visible.
The cog itself is also better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 22:35:57 +01:00
BlackDex
b186813049 Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it logs a lot and adds
extra database transactions.

All events are covered except a few for features we do not support
(yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-12-01 22:35:57 +01:00
Daniel García
bfa82225da Merge pull request #2940 from BlackDex/update-logos
Update Vaultwarden Logos
2022-12-01 22:10:37 +01:00
Daniel García
ffa2044563 Merge pull request #2868 from BlackDex/impl-events
Add Organizational event logging feature
2022-12-01 22:09:27 +01:00
BlackDex
d57b69952d Update Vaultwarden Logos
Updated the logos so the `V` is more visible.
The cog itself is also better now; the previous version wasn't fully round.
These versions are also used with the PR that updates the web-vault to use these logos.

Also updated the images in the static folder.
2022-12-01 12:32:24 +01:00
Stefan Melmuk
5a13efefd3 only check sqlite parent if there could be one 2022-11-28 22:54:03 +01:00
Stefan Melmuk
2f9d7060bd check if sqlite folder exists
instead of creating the parent folders for an sqlite database,
vaultwarden should just exit if they do not exist.

this should fix issues like #2835 where a wrongly configured
`DATABASE_URL` falls back to using sqlite
2022-11-28 22:54:02 +01:00
Stefan Melmuk
0aa33a2cb4 don't use param for passing the redirect info
revert some changes and also rename catcher to `admin_login` to make its
function clearer

Co-authored-by: BlackDex <black.dex@gmail.com>
2022-11-28 18:21:30 +01:00
Stefan Melmuk
fa7dbedd5d redirect to admin login page when forward fails
currently, if the admin guard fails, the user will get a 404 page.
when the session times out after 20 minutes, POST methods will
give the reason "undefined" as a response, while generating the support
string will fail without any user feedback.

this commit changes the error handling on admin pages

* by removing the reliance on Rocket's forwarding and making the login
  page an explicit route that can be redirected to from all admin pages

* by removing the obsolete and mostly unused Referer struct, we can
  redirect the user back to the requested admin page directly

* by providing an error message for JSON requests, the
  `get_diagnostics_config` and all POST methods can return a more
  comprehensible message and the user can be alerted

* the `admin_url()` function can be simplified because RFC 2616 has been
  obsoleted by RFC 7231 in 2014 (and also by the recently released
  RFC 9110), which allows relative URLs in the Location header.

  c.f. https://www.rfc-editor.org/rfc/rfc7231#section-7.1.2 and
  https://www.rfc-editor.org/rfc/rfc9110#section-10.2.2
2022-11-28 16:46:06 +01:00
BlackDex
2ea9b66943 Add Organizational event logging feature
This PR adds event/audit logging support for organizations.
By default this feature is disabled, since it logs a lot and adds
extra database transactions.

All events are covered except a few for features we do not support
(yet), like SSO for example.

This feature is tested with multiple clients and all database types.

Fixes #229
2022-11-27 23:36:34 +01:00
Daniel García
f3beaea9e9 Merge pull request #2933 from stefan0xC/fix-manager-issue
allow managers to set groups of a collection
2022-11-27 22:02:10 +01:00
Daniel García
39ae2f1f76 Merge pull request #2928 from karbobc/settings-description
Update settings description
2022-11-27 22:01:54 +01:00
Daniel García
366b1050ec Merge pull request #2921 from BlackDex/issue-2909
Prevent DNS leak when icon regex is configured
2022-11-27 22:00:54 +01:00
Daniel García
b3aab7a6ad Merge pull request #2920 from BlackDex/issue-2889
Added missing `register` endpoint to `identity`
2022-11-27 22:00:23 +01:00
Daniel García
aa8d050d6b Merge pull request #2919 from BlackDex/issue-2828
Fully remove DuckDuckGo email service.
2022-11-27 21:59:51 +01:00
Daniel García
5200f0e98d Merge pull request #2918 from BlackDex/issue-2761
Set "Bypass admin page security" as read-only
2022-11-27 21:59:39 +01:00
Daniel García
5f4abb1b7f Merge pull request #2911 from skid9000/patch-1
Update config comment to reflect rfc8314.
2022-11-27 21:59:18 +01:00
Daniel García
dfe1e30d1b Merge pull request #2910 from samueltardieu/const-crypto
Use constant size generic parameter for random bytes generation
2022-11-27 21:58:27 +01:00
Stefan Melmuk
e27a5be47a allow managers to set groups of a collection
fixes #2932
2022-11-23 15:47:45 +01:00
Karbob
56786a18f1 Update settings description
Update description to `admin login requests`.
2022-11-22 22:12:06 +08:00
BlackDex
0d2399d485 Prevent DNS leak when icon regex is configured
When an icon blacklist regex was configured to skip a domain, a DNS
lookup was still done first. This could cause DNS leakage for these
regex-blocked domains.

This PR resolves the issue by first checking the regex, and only
afterwards doing the other checks.

Fixes #2909
2022-11-14 17:25:44 +01:00
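The reordering, sketched with the `regex` crate; the helper names are stand-ins, not Vaultwarden's actual icon code:

```rust
use regex::Regex;
use std::net::ToSocketAddrs;

/// Check the blacklist regex *before* any DNS resolution, so a blocked
/// domain never reaches the resolver.
fn should_fetch_icon(domain: &str, blacklist: &Regex) -> bool {
    if blacklist.is_match(domain) {
        return false; // regex matched: bail out without a DNS query
    }
    // Only now is a DNS lookup performed.
    (domain, 443)
        .to_socket_addrs()
        .map(|mut addrs| addrs.next().is_some())
        .unwrap_or(false)
}

fn main() {
    let blacklist = Regex::new(r"(^|\.)internal\.example$").unwrap();
    // Never triggers a lookup, hence no DNS leak.
    assert!(!should_fetch_icon("host.internal.example", &blacklist));
}
```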
BlackDex
5bfc7cfde3 Added missing register endpoint to identity
In the upcoming web-vault and other clients they changed the register
endpoint from `/api/accounts/register` to `/identity/register`.

This PR adds the new endpoint to already be compatible with the new
clients.

Fixes #2889
2022-11-14 17:22:37 +01:00
BlackDex
723f0cbc1e Fully remove DuckDuckGo email service.
The DuckDuckGo email service is not supported for self-hosted servers.
This option is already hidden via the latest web-vault.

This PR also removes some server side headers.

Fixes #2828
2022-11-14 17:19:30 +01:00
BlackDex
b141f789f6 Set "Bypass admin page security" as read-only
It was possible to disable the admin security via the admin interface.
This is kinda insecure, as mentioned in #2761.

This PR sets this value as read-only, and admins need to set the correct ENV variable.
Currently saved settings which do override this are still valid though.
If an admin wants this removed, they either need to reset the config
or change the value in the `config.json` file.

Fixes #2761
2022-11-14 17:18:25 +01:00
Samuel Tardieu
7445ee40f8 Remove get_random_64()
Its uses are replaced by get_random_bytes() or encode_random_bytes().
2022-11-13 10:03:06 +01:00
Skid
4a9a0f7e64 Update .env.template
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-11-12 22:39:41 +01:00
Skid
63aad2e5d2 Update config comment to reflect rfc8314. 2022-11-12 22:07:03 +01:00
Samuel Tardieu
d0baa23f9a Use constant size generic parameter for random bytes generation
All uses of `get_random()` were in the form of:

  `&get_random(vec![0u8; SIZE])`

with `SIZE` being a constant.

Building a `Vec` is unnecessary for two reasons. First, it uses a
very short-lived dynamic memory allocation. Second, a `Vec` is a
resizable object, which is useless in those contexts where the random
data have a fixed size and will only be read.

`get_random_bytes()` takes a constant as a generic parameter and
returns an array with the requested number of random bytes.

Stack safety analysis: the random bytes will be allocated on the
caller stack for a very short time (until the encoding function has
been called on the data). In some cases, the random bytes take
less room than the `Vec` did (a `Vec` is 24 bytes on a 64-bit
computer). The maximum used size is 180 bytes, which amounts to
0.008% of the default stack size for a Rust thread (2MiB),
so this is a non-issue.

Also, most of the uses of those random bytes are to encode them
using an `Encoding`. The function `crypto::encode_random_bytes()`
generates random bytes and encode them with the provided
`Encoding`, leading to code deduplication.

`generate_id()` has also been converted to use a constant generic
parameter, since the length of the requested String is always a
constant.
2022-11-11 11:59:27 +01:00
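A sketch of the shape of these helpers, using the `rand` crate and hex output to stay self-contained; the real code pairs the bytes with a `data_encoding::Encoding` and may use a different RNG:

```rust
use rand::RngCore;

/// The caller picks the size at compile time and receives a
/// stack-allocated array; no `Vec`, no heap allocation.
fn get_random_bytes<const N: usize>() -> [u8; N] {
    let mut bytes = [0u8; N];
    rand::thread_rng().fill_bytes(&mut bytes);
    bytes
}

/// Generate N random bytes and encode them in one step, deduplicating
/// the generate-then-encode pattern. Hex is used here for brevity.
fn encode_random_bytes<const N: usize>() -> String {
    get_random_bytes::<N>().iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let token = encode_random_bytes::<20>();
    assert_eq!(token.len(), 40); // 20 bytes -> 40 hex characters
}
```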
Daniel García
7a7673103f Merge branch 'BlackDex-org-export-fixes' 2022-11-09 22:40:05 +01:00
GeekCorner
05d4788d1d fix: removed a double space 2022-11-09 22:40:01 +01:00
BlackDex
6f0dea1b56 Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling, and which crashes with the
response we currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-09 22:40:00 +01:00
BlackDex
439ef44973 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:40:00 +01:00
Daniel García
2a525b42cb Merge branch 'GeekCornerGH-fix-double-space-email' 2022-11-09 22:38:38 +01:00
BlackDex
aee91acfdc Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling, and which crashes with the
response we currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-09 22:38:34 +01:00
BlackDex
17388ec43e Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:38:34 +01:00
Daniel García
bdc1cd13a7 Merge branch 'BlackDex-add-knowndevice-endpoint' 2022-11-09 22:37:53 +01:00
BlackDex
42db4b5c77 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-09 22:37:48 +01:00
Daniel García
53da073274 Merge branch 'BlackDex-update-rust-and-deps' 2022-11-09 22:37:19 +01:00
BlackDex
b010dde661 Update Rust version, deps and workflow
- Update Rust to v1.65.0
- Update dependencies
- Updated workflow files
- Added some extra clippy checks
- Fixed some clippy checks
2022-11-08 14:03:31 +01:00
BlackDex
c9ec389b24 Support Org Export for v2022.11 clients
Since v2022.9.x the org export uses a different endpoint.
But, since v2022.11.x this endpoint will return a different format.
See: https://github.com/bitwarden/clients/pull/3641 and https://github.com/bitwarden/server/pull/2316

To support both versions, in case users have an older client
(either web-vault or CLI), this PR checks the version and responds using
the correct format. If no version can be determined it will use the new
format as a default.
2022-11-07 17:13:34 +01:00
GeekCorner
baa2841b04 fix: removed a double space 2022-11-07 08:39:56 +01:00
BlackDex
6af5c86081 Add /devices/knowndevice endpoint
Added a new endpoint which the current beta client, at least for
Android v2022.10.1, seems to be calling, and which crashes with the
response we currently provide.

Fixes #2890
Fixes #2891
Fixes #2892
2022-11-06 18:07:09 +01:00
Daniel García
f60a6929a9 Update dependencies 2022-10-26 21:42:38 +02:00
Daniel García
2aa97fa121 Update web vault to v2022.10.2 2022-10-26 21:42:37 +02:00
Jeremy Lin
b59809af46 Sync global_domains.json to bitwarden/server@7c783c9 (Atlassian) 2022-10-26 21:42:37 +02:00
Stefan Melmuk
ed24d51d3e validate cron expressions on startup 2022-10-26 21:42:36 +02:00
Stefan Melmuk
870f0d0932 validate billing_email on save 2022-10-26 21:42:36 +02:00
GeekCorner
31b77bf178 feat: Bump web-vault to v2022.10.1 2022-10-23 18:34:12 +02:00
Daniel García
b525f9aa4c Merge pull request #2724 from dani-garcia/diesel2
Update diesel to 2.0.2
2022-10-23 00:49:57 +02:00
Daniel García
8409b31d6b Update to diesel2 2022-10-23 00:49:23 +02:00
Daniel García
b878495d64 Update sponsors 2022-10-23 00:49:06 +02:00
Daniel García
945b85da2f Merge pull request #2852 from BlackDex/update-github-workflows
Update github workflows
2022-10-21 18:42:47 +02:00
BlackDex
d4577d161e Update github workflows
Updated all the actions where possible.
For most actions this should mean they are using the latest GitHub
Actions core, which should partially resolve issue #2847.

The only actions left are the ones from `actions-rs`.
2022-10-21 13:21:57 +02:00
Daniel García
3c8e1c3ca9 Add liberapay funding option 2022-10-20 21:07:26 +02:00
Daniel García
88dba8c4dd Merge pull request #2846 from MFijak/group_support_diff
Group support | applied .diff
2022-10-20 21:04:46 +02:00
MFijak
21bc3bfd53 group support 2022-10-20 15:31:53 +02:00
Daniel García
4cb5122e90 Merge pull request #2844 from jjlin/healthcheck
Take `ROCKET_ADDRESS` into account in the Docker healthcheck
2022-10-20 12:26:09 +02:00
Jeremy Lin
0a2a8be0ff Take ROCKET_ADDRESS into account in the Docker healthcheck 2022-10-20 01:04:09 -07:00
Daniel García
720a046610 Merge pull request #2804 from stefan0xC/verify-on-invite
verify email on registration by invite
2022-10-19 23:09:47 +02:00
Stefan Melmuk
64ae5d4f81 verify email on registration via invite link
if `SIGNUPS_VERIFY` is enabled new users that have been invited have
their onboarding flow interrupted because they have to first verify
their mail address before they can join an organization.

we can skip the extra verification of the email address when signing up
because a valid invitation token already means that the email address is
working, and we don't allow invited users to sign up with a different
address.

unfortunately, this is not possible with emergency access invitations
at the moment as they are handled differently.
2022-10-19 22:44:17 +02:00
Daniel García
ff7e22c08a Merge pull request #2840 from jjlin/global-domains
Sync global_domains.json
2022-10-19 21:56:22 +02:00
Jeremy Lin
0c267d073f Sync global_domains.json to bitwarden/server@ea300b2 (Amazon) 2022-10-19 12:33:04 -07:00
Daniel García
bbc6470f65 Merge branch 'BlackDex-fix-password-hint' 2022-10-19 20:40:24 +02:00
Stefan Melmuk
23f1f8a576 allow registration without invite link
if signups are allowed invited users should be able to complete their
registration even when they don't have the invite link at hand.
2022-10-19 20:39:14 +02:00
Stefan Melmuk
0e6f6e612a use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-19 20:39:13 +02:00
Stefan Melmuk
4d1b860dad attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs`, you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-19 20:39:13 +02:00
Stefan Melmuk
6576914e55 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:39:07 +02:00
Daniel García
12075639f3 Merge branch 'stefan0xC-allow-registration-without-invite-link' 2022-10-19 20:23:30 +02:00
Stefan Melmuk
3b9bfe55d0 use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-19 20:23:24 +02:00
Stefan Melmuk
a0c6a7c0de attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs`, you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-19 20:23:24 +02:00
Stefan Melmuk
a2d716aec3 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:23:24 +02:00
Daniel García
c1c60e3b68 Merge branch 'stefan0xC-email-attach-images' 2022-10-19 20:22:03 +02:00
Stefan Melmuk
ed6e852904 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-19 20:21:58 +02:00
Daniel García
a54065420c Merge branch 'stefan0xC-fix-invitation-of-new-users' 2022-10-19 20:20:14 +02:00
Stefan Melmuk
aa5a05960e allow registration without invite link
if signups are allowed invited users should be able to complete their
registration even when they don't have the invite link at hand.
2022-10-18 12:49:07 +02:00
BlackDex
f41ba2a60f Fix master password hint update not working.
- The Master Password Hint input has changed its location to the
password update form. This PR updates the code to process this.

- Also changed the `ProfileData` struct to exclude `Culture` and
`MasterPasswordHint`, since both are not used at all, and when not
defined they will also not be allocated.

Fixes #2833
2022-10-17 17:23:21 +02:00
Stefan Melmuk
2215cfefb9 fix invitations of new users when mail is disabled
If you add a new user that has already been Invited to another
organization they will be Accepted automatically. This should not be
possible because they cannot be Confirmed until they have completed
their registration. It is also not necessary because their invitation
will be accepted automatically once they register.
2022-10-15 16:19:26 +02:00
Stefan Melmuk
4289663a16 use static_files() for email attachments
Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2022-10-15 04:59:33 +02:00
Stefan Melmuk
ea19c2250e attach images to email
Set SMTP_EMBED_IMAGES option to false if you don't want to attach images
to the mail.

NOTE: If you have customized the template files `email_header.hbs` and
`email_footer.hbs`, you can replace `{url}/vw_static/` with `{img_url}`
to support both URL schemes
2022-10-15 04:59:31 +02:00
Daniel García
638766b346 Update web-vault to 2022.10.0 and dependencies 2022-10-14 18:21:01 +02:00
Daniel García
d1ff136552 Merge branch 'stefan0xC-check-data-folder-permissions' 2022-10-14 17:56:48 +02:00
Jeremy Lin
46ec11de12 Update CSP for DuckDuckGo email forwarding
Upstream PR: https://github.com/bitwarden/clients/pull/3630
2022-10-14 17:56:42 +02:00
Jeremy Lin
4283a49e0b Reformat CSP header for readability 2022-10-14 17:56:42 +02:00
Jeremy Lin
1e32db8c41 Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-14 17:56:42 +02:00
Stefan Melmuk
0f944ec7e2 fix link of license badge
master branch has been renamed to main.
2022-10-14 17:56:41 +02:00
Daniel García
736dbc9553 Merge branch 'jjlin-csp' 2022-10-14 17:56:03 +02:00
Jeremy Lin
b4a38f1f63 Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-14 17:56:00 +02:00
Stefan Melmuk
646186fe38 fix link of license badge
master branch has been renamed to main.
2022-10-14 17:55:59 +02:00
Daniel García
c2725916f4 Merge branch 'jjlin-creation-date' 2022-10-14 17:55:31 +02:00
Stefan Melmuk
fd334e2b7d fix link of license badge
master branch has been renamed to main.
2022-10-14 17:55:27 +02:00
Daniel García
f9feca1ce4 Merge branch 'stefan0xC-fix-link-in-license-badge' 2022-10-14 17:54:57 +02:00
Stefan Melmuk
677fd2ff32 fix link of license badge
master branch has been renamed to main.
2022-10-12 20:18:18 +02:00
Jeremy Lin
f49eb8eb4d Add CreationDate to cipher response JSON
Upstream PR: https://github.com/bitwarden/server/pull/2142
2022-10-12 00:17:09 -07:00
Jeremy Lin
b0e0d68632 Update CSP for DuckDuckGo email forwarding
Upstream PR: https://github.com/bitwarden/clients/pull/3630
2022-10-11 21:39:12 -07:00
Jeremy Lin
f3c8c16d79 Reformat CSP header for readability 2022-10-11 21:39:02 -07:00
Stefan Melmuk
2dd5086916 more verbose permission denied error
be a bit more verbose about why a file could not be created when it is
caused by a permission denied error.
2022-10-12 01:31:10 +02:00
Stefan Melmuk
7532072d50 add check if data folder is a directory 2022-10-12 01:26:28 +02:00
Daniel García
382e6107fe Update dependencies 2022-10-09 17:40:45 +02:00
Daniel García
e6c6609e19 8bit Solutions LLC. -> Bitwarden, Inc. 2022-10-09 17:13:46 +02:00
Daniel García
4cb5918950 Update web vault to v2022.9.2 2022-10-09 17:13:32 +02:00
Daniel García
55030f3687 Merge branch 'stefan0xC-return-token-expired-message' 2022-10-09 16:22:33 +02:00
Stefan Melmuk
ef4072e4ff improve spelling of minimum expiration hours check
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-10-09 16:21:13 +02:00
Stefan Melmuk
c78d383ed1 make invitation expiration time configurable
configure the number of hours after which organization invites,
emergency access invites, email verification emails and account deletion
requests expire (defaults to 5 days or 120 hours and must be at least 1)
2022-10-09 16:21:13 +02:00
Stefan Melmuk
5b96270874 return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-09 16:21:12 +02:00
Stefan Melmuk
2c0742387b return CaptchaBypassToken and register object 2022-10-09 16:21:12 +02:00
Stefan Melmuk
1704d14f29 v2022.9.2 expects a json response when registering 2022-10-09 16:21:12 +02:00
Stefan Melmuk
2d7ffbf378 allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:21:11 +02:00
Daniel García
dfd63f85c0 Merge branch 'stefan0xC-configure-expirations' 2022-10-09 16:20:07 +02:00
Stefan Melmuk
cd0c49eaf6 return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-09 16:19:33 +02:00
Stefan Melmuk
080e38d227 return CaptchaBypassToken and register object 2022-10-09 16:19:32 +02:00
Stefan Melmuk
1a664fba6a v2022.9.2 expects a json response when registering 2022-10-09 16:19:32 +02:00
Stefan Melmuk
c915ef815d allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:19:32 +02:00
Daniel García
adea4ec54d Merge branch 'stefan0xC-update-to-v2022.9.2' 2022-10-09 16:17:16 +02:00
Stefan Melmuk
387b5eb2dd allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-10-09 16:17:11 +02:00
Daniel García
6337af59ed Merge branch 'stefan0xC-allow-removal-of-invited-owners' 2022-10-09 16:13:57 +02:00
Stefan Melmuk
475c7b8f16 return more descriptive JWT validation messages 2022-10-09 13:55:22 +02:00
Stefan Melmuk
ac120be1c6 improve spelling of minimum expiration hours check
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-10-09 05:50:43 +02:00
Stefan Melmuk
b70316e6d3 make invitation expiration time configurable
configure the number of hours after which organization invites,
emergency access invites, email verification emails and account deletion
requests expire (defaults to 5 days or 120 hours and must be at least 1)
2022-10-08 18:37:16 +02:00
Stefan Melmuk
0a0f620d0b return "Object" for consistency
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
2022-10-08 10:27:33 +02:00
Stefan Melmuk
9132cc4a30 return CaptchaBypassToken and register object 2022-10-07 08:06:55 +02:00
Stefan Melmuk
e50edcadfb v2022.9.2 expects a json response when registering 2022-10-07 03:00:52 +02:00
Stefan Melmuk
2685099720 allow the removal of non-confirmed owners
ensure user_to_edit and user_to_delete are actually confirmed users,
before checking if they are the last owner of an organization.
2022-09-27 10:21:23 +02:00
Daniel García
6fa6eb18e8 Remove unused value in config endpoint 2022-09-25 19:22:05 +02:00
Daniel García
bb79396f0e Merge branch 'stefan0xC-catch-404-errors' 2022-09-25 19:05:12 +02:00
BlackDex
da9fd6b7d0 Fix organization vault export
Since v2022.9.x it seems they changed the export endpoint and way of working.
This PR fixes this by adding the export endpoint.

Also, it looks like the clients can't handle uppercase-first JSON keys.
Because of this there is now a function which converts all the keys to lowercase-first.

I have reported an issue at Bitwarden asking whether this is expected behavior: https://github.com/bitwarden/clients/issues/3606

Fixes #2760
Fixes #2764
2022-09-25 19:04:56 +02:00
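A sketch of such a key-lowercasing pass over a `serde_json::Value`; only the first character of each key is lowered, and the recursion details are illustrative rather than Vaultwarden's actual helper:

```rust
use serde_json::{json, Value};

/// Recursively lowercase the first character of every object key.
fn lowercase_first_keys(value: Value) -> Value {
    match value {
        Value::Object(map) => Value::Object(
            map.into_iter()
                .map(|(k, v)| {
                    let mut chars = k.chars();
                    let key = match chars.next() {
                        Some(first) => first.to_lowercase().chain(chars).collect(),
                        None => String::new(),
                    };
                    (key, lowercase_first_keys(v))
                })
                .collect(),
        ),
        Value::Array(items) => {
            Value::Array(items.into_iter().map(lowercase_first_keys).collect())
        }
        other => other,
    }
}

fn main() {
    let export = json!({"Data": [{"Name": "item", "Notes": null}]});
    println!("{}", lowercase_first_keys(export)); // {"data":[{"name":"item","notes":null}]}
}
```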
BlackDex
5b8067ef77 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 19:04:53 +02:00
BlackDex
9eabcd5cae Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 19:04:48 +02:00
Aaron
d6e0d4cbbd fix: update warning and success case verbiage 2022-09-25 19:04:48 +02:00
Aaron
e5e6db2688 fix: tooltip typo 2022-09-25 19:04:48 +02:00
BlackDex
186fe24484 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-25 19:04:47 +02:00
Daniel García
5da96d36e6 Merge branch 'BlackDex-fix-org-export' 2022-09-25 19:03:55 +02:00
BlackDex
f4b1071e23 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 19:03:44 +02:00
BlackDex
18291b6533 Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 19:03:33 +02:00
Aaron
8095cb68bb fix: update warning and success case verbiage 2022-09-25 19:03:33 +02:00
Aaron
04cd751556 fix: tooltip typo 2022-09-25 19:03:33 +02:00
BlackDex
7ce2372f51 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-25 19:03:32 +02:00
Daniel García
aebda93afe Merge branch 'BlackDex-update-libraries-and-rust' 2022-09-25 19:01:30 +02:00
BlackDex
2b7b1141eb Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-25 18:59:26 +02:00
Aaron
1ff4ff72bf fix: update warning and success case verbiage 2022-09-25 18:59:25 +02:00
Aaron
d27e91a9b0 fix: tooltip typo 2022-09-25 18:59:25 +02:00
BlackDex
7cf063b196 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-25 18:59:25 +02:00
Daniel García
642f04d493 Merge branch 'BlackDex-add-send-api-v2' 2022-09-25 18:58:48 +02:00
Aaron
fc6e65e4b0 fix: update warning and success case verbiage 2022-09-25 18:58:41 +02:00
Aaron
db5c98ec3b fix: tooltip typo 2022-09-25 18:58:40 +02:00
BlackDex
73c64af27e Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-25 18:58:40 +02:00
Daniel García
b3f7db813f Merge branch 'djbrownbear-fix-diagnostic-typo' 2022-09-25 18:55:10 +02:00
BlackDex
59660ff087 Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-25 18:55:04 +02:00
Daniel García
69a69e8e04 Merge branch 'BlackDex-update-workflow' 2022-09-25 18:54:54 +02:00
BlackDex
1094f359c3 Update libraries and Rust version
- Updated to Rust v1.64.0
- Updated all libraries
- Updated multer-rs to be based upon the latest version
- Updated Dockerfiles to match the Rust version
2022-09-25 16:44:34 +02:00
Stefan Melmuk
102ee3f871 add api_not_found catcher for 404 errors in /api 2022-09-25 10:59:01 +02:00
Stefan Melmuk
acb5ab08a8 add not_found catcher for 404 errors 2022-09-25 04:02:16 +02:00
BlackDex
ae59472d9a Fix organization vault export
Since v2022.9.x it seems they changed the export endpoint and way of working.
This PR fixes this by adding the export endpoint.

Also, it looks like the clients can't handle uppercase-first JSON keys.
Because of this there is now a function which converts all the keys to lowercase-first.

I have reported an issue at Bitwarden asking whether this is expected behavior: https://github.com/bitwarden/clients/issues/3606

Fixes #2760
Fixes #2764
2022-09-24 18:27:13 +02:00
BlackDex
5a07b193dc Add support for send v2 API endpoints
This PR adds support for the Send v2 API.
It should prevent 404 errors which could cause some issues with some
configurations on some reverse proxies.

In the long run we can probably remove the old file upload API, but for
now let's leave it there, since Bitwarden also still has this endpoint in
the code.

Might fix #2753
2022-09-22 19:40:04 +02:00
Aaron
fd2edb9adc fix: update warning and success case verbiage 2022-09-16 10:32:36 -07:00
Aaron
1d074f7b3f fix: tooltip typo 2022-09-15 15:36:21 -07:00
BlackDex
81984c4bce Update build workflow
Currently the branch protection is set on specific workflows which need
to be run every time a PR is created (or a push happens).

Because it isn't possible to tell the branch protection to only do its
job when specific files are touched, we just need to make sure
these jobs are always started.

Also, because we now check the builds against an MSRV, the title would
change all the time, and that would cause the branch protection to be
updated every time the MSRV changes. This is now also addressed by
naming that job 'msrv' instead of after the version number.
2022-09-15 16:51:52 +02:00
Daniel García
9c891baad1 Merge pull request #2739 from BlackDex/fix-restore-revoke
Rename/Fix revoke/restore endpoints
2022-09-12 17:12:23 +02:00
Daniel García
b050c60807 Merge pull request #2738 from BlackDex/issue-2737
Fix issue 2737, unable to create org
2022-09-12 17:11:43 +02:00
BlackDex
e47a2fd0f3 Rename/Fix revoke/restore endpoints
In web-vault v2022.9.x it seems the endpoints changed.
 - activate > restore
 - deactivate > revoke

This PR adds those endpoints and renames the functions.
It also keeps the previous endpoints for now, to stay compatible with
previous vault versions, just in case.
2022-09-12 16:08:36 +02:00
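A minimal Rocket 0.5 sketch of that aliasing idea; the handler names and bodies below are made up, the real handlers perform the actual membership update:

    use rocket::put;

    #[put("/organizations/<org_id>/users/<member_id>/revoke")]
    async fn revoke_member(org_id: String, member_id: String) -> String {
        format!("revoked member {member_id} of organization {org_id}")
    }

    // Deprecated path kept as an alias so pre-v2022.9 web-vaults keep working.
    #[put("/organizations/<org_id>/users/<member_id>/deactivate")]
    async fn deactivate_member(org_id: String, member_id: String) -> String {
        revoke_member(org_id, member_id).await
    }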
BlackDex
42b9cc73ac Fix issue 2737, unable to create org
There was a small oversight when upgrading to the v2022.9.0 web-vault version.
It seems the call to the /plans/ endpoint no longer provides authentication.

Removed this check and it seems to work again.

Fixes #2737
2022-09-12 14:10:54 +02:00
Daniel García
edca4248aa Use optional env as this variable isn't defined during CI 2022-09-08 18:01:27 +02:00
Daniel García
b1b6bc9be0 Update web vault to 2022.9.0 2022-09-08 17:46:02 +02:00
Daniel García
818b254cef Implement config endpoint 2022-09-08 17:38:00 +02:00
Daniel García
ddfac5e34b Merge branch 'BlackDex-web-vault-v2022.9-support' into main 2022-09-08 16:30:46 +02:00
Daniel García
8b5c945bad Merge branch 'web-vault-v2022.9-support' of https://github.com/BlackDex/vaultwarden into BlackDex-web-vault-v2022.9-support 2022-09-08 16:30:41 +02:00
Daniel García
50c5eb9c50 Merge branch 'BlackDex-vw-admin-updates' into main 2022-09-08 16:30:31 +02:00
BlackDex
94be67eac1 Added support for web-vault v2022.9
- The new web-vault version supports fastmail.com anon email; added the
  correct api host to support it.
- Removed Firefox Relay; this seems to be supported only on SaaS.
- Added a function to the two-factor api to prevent 404 errors.
2022-09-07 20:48:48 +02:00
BlackDex
5a05139efe Change the handling of login errors.
Previously FlashMessage was used to provide an error message during login.
This PR changes that flow to not use a redirect, but to render the HTML and respond using the correct status code where needed. This should solve some issues which were reported in the past.

Thanks to @RealOrangeOne, for initiating this with a PR.

Fixes #2448
Fixes #2712
Closes #2715

Co-authored-by: Jake Howard <git@theorangeone.net>
2022-09-06 17:27:20 +02:00
Daniel García
a62dc102fb Update web vault to 2022.8.1 and cargo dependencies 2022-09-04 23:18:27 +02:00
Daniel García
518d74ce21 Merge branch 'BlackDex-org-user-revoke-access' into main 2022-09-04 23:04:22 +02:00
Daniel García
7598997deb Merge branch 'org-user-revoke-access' of https://github.com/BlackDex/vaultwarden into BlackDex-org-user-revoke-access 2022-09-04 23:04:15 +02:00
Daniel García
3c876dc202 Merge branch 'Fvbor-patch-1' into main 2022-09-04 22:49:19 +02:00
BlackDex
1722742ab3 Add Org user revoke feature
This PR adds the new v2022.8.x revoke feature which allows an
organization owner or admin to revoke access for one or more users.

This PR also fixes several permissions and policy checks which were faulty.

- Modified some functions to use DB count features instead of iterating and counting afterwards.
- Rearranged some if statements (faster matching, or just one if instead of nested ifs)
- Added and fixed several policy checks where needed
- Some small updates on some response models
- Made some functions require an enum instead of an i32
2022-08-20 16:42:36 +02:00
Hagen Tasche
d9c0eb3cfc Update two external links to prevent tabnabbing
Added noopener to prevent tabnabbing
2022-08-17 08:14:19 +02:00
Hagen Tasche
0d990e1dc0 Open external link in new tab
The link to the backup documentation was opened in the active tab.
With this change it will open in a new tab, preventing tabnabbing
2022-08-17 08:00:46 +02:00
Daniel García
60ed5ff99d Merge pull request #2675 from BlackDex/patch-multer
Fix uploads from mobile clients (and dep updates)
2022-08-04 23:47:48 +02:00
BlackDex
5b98bd66ee Fix uploads from mobile clients (and dep updates)
This patch fixes file uploads sent by the mobile clients.
It resolves #2644 by always providing a `Content-Type` even though one
isn't set in this specific case.

I do hope it will be fixed upstream, either by Bitwarden fixing the
client, or by Rocket allowing this to be overridden somehow.

Until then, we can use this patched version of multer-rs.

Issue @ Rocket: https://github.com/SergioBenitez/Rocket/issues/2299
Issue @ Bitwarden: https://github.com/bitwarden/mobile/issues/2018

Also updated some dependencies.
2022-08-04 23:28:45 +02:00
Daniel García
abd20777fe Merge pull request #2665 from BlackDex/update-deps-and-alpine-base 2022-08-01 23:48:14 +02:00
BlackDex
7f0d0cf8a4 Update MSRV to 1.60.0
The latest version of chrono-tz needs 1.60.0 because of phf.
Since chrono-tz has updated timezone information, I do think it is
useful in some cases around the world.
2022-08-01 16:21:06 +02:00
BlackDex
6e23a573fb Update deps and Alpine image
- Updated deps
- Updated Alpine images to 3.16
- Removed dumb-init, not needed anymore
- Some small shellcheck tweaks on the start/healthcheck scripts
2022-07-31 15:45:31 +02:00
Daniel García
ce9d93003c Merge pull request #2650 from BlackDex/mitigate-mobile-client-uploads
Mitigate attachment/send upload issues
2022-07-27 17:39:07 +02:00
BlackDex
abfa868423 Mitigate attachment/send upload issues
This PR attempts to mitigate (not fix) #2644.
There seems to be an issue when uploading files either as attachment or
via send via the mobile (Android) client.

The binary data gets transferred correctly to Vaultwarden (checked via
Wireshark), but the data is not parsed correctly for some reason.

Since the parsing is not done by Vaultwarden itself, I think we should
at least try to prevent saving the data and letting users think all is
fine.

Further investigation is needed to actually fix this issue.
This is just a quick patch.
2022-07-27 17:12:04 +02:00
Daniel García
331f6c08fe Merge branch 'BlackDex-update-github-actions' into main 2022-07-22 16:00:45 +02:00
Daniel García
c0efd3d419 Merge branch 'update-github-actions' of https://github.com/BlackDex/vaultwarden into BlackDex-update-github-actions 2022-07-22 16:00:40 +02:00
Daniel García
1385d75972 Merge branch 'BlackDex-fix-2622-persistent-volume-check' into main 2022-07-22 16:00:28 +02:00
BlackDex
9a787dd105 Fix persistent folder check within containers
The previous persistent folder check worked by checking if a file
exists. If you used a bind-mount, that file is not there. But when
using a docker/podman volume those files are copied, which caused the
container to not start.

This change checks `/proc/self/mountinfo` for a specific pattern to
see if the data folder is persistent or not.

Fixes #2622
2022-07-20 13:29:39 +02:00
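A minimal sketch of the mountinfo approach, written in Rust here for illustration even though the actual check runs when the container starts; field 5 of each `/proc/self/mountinfo` line is the mount point, and the exact pattern the real check matches is not reproduced:

    use std::fs;

    // Returns true if the data folder sits below a mount point other than "/",
    // i.e. a volume or bind mount is backing it.
    fn is_persistent(data_folder: &str) -> bool {
        fs::read_to_string("/proc/self/mountinfo")
            .map(|mounts| {
                mounts
                    .lines()
                    .filter_map(|line| line.split_whitespace().nth(4))
                    .any(|mount_point| mount_point != "/" && data_folder.starts_with(mount_point))
            })
            .unwrap_or(false)
    }

    fn main() {
        // Same escape hatch the container scripts offer.
        if !is_persistent("/data") && std::env::var("I_REALLY_WANT_VOLATILE_STORAGE").is_err() {
            eprintln!("No persistent volume detected for /data, refusing to start");
            std::process::exit(1);
        }
    }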
BlackDex
0dcc435bb4 Update build workflow for CI
Because we want to support an MSRV, we also need to run CI checks for it.
This PR adds checks for the MSRV and rust-toolchain defined versions.

It will also run all cargo test, clippy and fmt checks no matter the outcome of the previous job.
This will help when there are multiple issues, like clippy errors and formatting.
Previously it would show only the first failed check and stopped.

It will also output a nice step summary with some details on which checks have failed.
Or it will output a success message.
2022-07-19 23:17:49 +02:00
Daniel García
f1a67663d1 Merge pull request #2624 from BlackDex/fix-2623-csp-icon-redirect
Fix issue with CSP and icon redirects
2022-07-18 00:40:59 +02:00
BlackDex
0f95bdc9bb Fix issue with CSP and icon redirects
When using anything other than the `internal` icon service it would
trigger a CSP block, because the redirects were not allowed.

This PR fixes #2623 by dynamically adding the needed CSP strings.
This should also work with custom services.

For Google I needed to add an extra check, because that service does a
redirect itself to their gstatic.com domain.
2022-07-17 16:21:03 +02:00
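A sketch of the dynamic CSP idea; the function names and the directive layout are assumptions for illustration:

    // Take the scheme + host of an icon service URL template, e.g.
    // "https://icons.example.com/{}.ico" -> "https://icons.example.com".
    fn icon_service_origin(template: &str) -> String {
        template.split('/').take(3).collect::<Vec<_>>().join("/")
    }

    // Append that origin to img-src so the icon redirect is not blocked.
    fn img_src_directive(template: &str) -> String {
        format!("img-src 'self' data: {};", icon_service_origin(template))
    }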
Daniel García
a0eab35768 Update web vault to 2022.6.2 2022-07-15 19:15:22 +02:00
Daniel García
027c87dd07 Merge branch 'BlackDex-update-dep-fix-issue-2516' into main 2022-07-15 19:14:21 +02:00
Daniel García
f2b31352fe Merge branch 'update-dep-fix-issue-2516' of https://github.com/BlackDex/vaultwarden into BlackDex-update-dep-fix-issue-2516 2022-07-15 19:14:14 +02:00
Daniel García
c9376e3126 Remove read_file and read_file_string and replace them with the std alternatives 2022-07-15 19:13:26 +02:00
Daniel García
7cbcad0e38 Merge branch 'BlackDex-more-clippy-checks' into main 2022-07-15 19:06:09 +02:00
Daniel García
e167798449 Merge branch 'more-clippy-checks' of https://github.com/BlackDex/vaultwarden into BlackDex-more-clippy-checks 2022-07-15 19:05:54 +02:00
Daniel García
fc5928772b Move around comments 2022-07-15 19:05:38 +02:00
Daniel García
8263bdd21d Merge branch 'ruifung-main' into main 2022-07-15 19:03:49 +02:00
BlackDex
3c1d4254e7 Update deps and fix file-uploads
- Update deps. One of them is multer-rs which fixes #2516
- Changed MSRV to `1.59.0`, since that is the correct MSRV currently.
  It could be lower, but that would mean removing the `strip` option.
2022-07-15 16:03:57 +02:00
BlackDex
55d7c48b1d Add more clippy checks for better code/readability
A bit inspired by @paolobarbolini, from this commit at lettre https://github.com/lettre/lettre/pull/784 .
I added a few more clippy lints here, and fixed the resulting issues.

Overall I think this could help prevent future issues, and maybe
even performance problems. It also makes some code a bit more clear.

We could always add more if we want to; I left a few out which I think
aren't that big of an issue. Some, like `unused_async`, are nice,
and resulted in a few `async` removals.

Some others are maybe a bit more aesthetic, like `string_to_string`, but I
think it looks better to use `clone` in those cases instead of `to_string` when they already are a String.
2022-07-10 16:39:38 +02:00
Yip Rui Fung
bf623eed7f Use if let instead of a match with empty block. 2022-07-09 11:43:00 +08:00
Yip Rui Fung
84bcac0112 Apply rustfmt.
Because apparently CLion's default formatting is not the same as rustfmt for some reason.
2022-07-09 10:49:51 +08:00
Yip Rui Fung
31595888ea Use match to avoid ownership issues on the TempFile / file_path variables in closures. 2022-07-09 10:33:27 +08:00
Yip Rui Fung
5c38b2c4eb Remove option and use unwrap_or_else to fall back to copy behavior. 2022-07-09 08:53:00 +08:00
Yip Rui Fung
ebe9162af9 Add option to make file uploads use move_copy_to instead of persist_to
This is to support scenarios where the attachments and sends folders are stored on a separate device from the tmp_folder (e.g. fuse-mounted S3 storage), where having the tmp_dir on the same device is undesirable.

An example is fuse-mounted S3 storage: because S3 basically requires a copy+delete operation to rename files, it is inefficient (if even allowed) to rename files on the device.
2022-07-09 01:19:00 +08:00
Daniel García
b64cf27038 Upgrade dependencies and swap lettre to async transport 2022-07-06 23:57:37 +02:00
Daniel García
0c4e79cff6 Update web vault to v2022.6.0 2022-07-06 23:35:02 +02:00
Daniel García
5b9129a086 Merge remote-tracking branch 'origin/dependabot/cargo/openssl-src-111.22.01.1.1q' into main 2022-07-06 23:30:49 +02:00
Daniel García
93d4a12834 Update the rest of the files leftover from #2595 by running make 2022-07-06 23:27:48 +02:00
Daniel García
bf3e2dc652 Merge branch 'nneul-patch-1' into main 2022-07-06 23:26:54 +02:00
dependabot[bot]
0d0e98d783 Bump openssl-src from 111.21.0+1.1.1p to 111.22.0+1.1.1q
Bumps [openssl-src](https://github.com/alexcrichton/openssl-src-rs) from 111.21.0+1.1.1p to 111.22.0+1.1.1q.
- [Release notes](https://github.com/alexcrichton/openssl-src-rs/releases)
- [Commits](https://github.com/alexcrichton/openssl-src-rs/commits)

---
updated-dependencies:
- dependency-name: openssl-src
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-06 20:16:56 +00:00
Nathan Neulinger
5a55cfbb9b Update Dockerfile.j2 2022-07-06 08:56:17 -05:00
Nathan Neulinger
ac93b8a6b9 Update Dockerfile.buildx.alpine 2022-07-06 08:54:36 -05:00
Nathan Neulinger
93786d9ebd Update Dockerfile.buildx 2022-07-06 08:54:19 -05:00
Nathan Neulinger
a6dbb580c9 Update Dockerfile.alpine 2022-07-06 08:53:58 -05:00
Nathan Neulinger
e62678abdb Update Dockerfile 2022-07-06 08:53:18 -05:00
Daniel García
af50eae604 Merge pull request #2586 from jjlin/password-hint-config
Add `password_hints_allowed` config option
2022-07-01 16:31:56 +02:00
Jeremy Lin
cb4f6aa7f6 Pin a specific version of Rust
The latest version (1.62.0) that was just released includes Clippy changes
(https://github.com/rust-lang/rust-clippy/issues/9014) that break the build.
2022-06-30 23:56:33 -07:00
Jeremy Lin
5e13b1a7cb Add password_hints_allowed config option
Disabling password hints is mainly useful for admins who are concerned that
their users might provide password hints that are too revealing.
2022-06-30 20:46:17 -07:00
Daniel García
60b339f450 Update included web vault to v2022.5.2 2022-06-26 22:04:45 +02:00
Daniel García
f71c779860 Merge branch 'BlackDex-log-level-adjustment' into main 2022-06-26 21:54:54 +02:00
Daniel García
221a11de9b Merge branch 'log-level-adjustment' of https://github.com/BlackDex/vaultwarden into BlackDex-log-level-adjustment 2022-06-26 21:54:48 +02:00
Daniel García
794483c10d Merge branch 'BlackDex-fix-issue-2570' into main 2022-06-26 21:54:27 +02:00
Daniel García
c9934ccdb7 Merge branch 'fix-issue-2570' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-issue-2570 2022-06-26 21:54:22 +02:00
Daniel García
54729f3c1e Merge branch 'BlackDex-optimize-icon-html-parsing' into main 2022-06-26 21:54:10 +02:00
Daniel García
f1a86acb98 Merge branch 'optimize-icon-html-parsing' of https://github.com/BlackDex/vaultwarden into BlackDex-optimize-icon-html-parsing 2022-06-26 21:54:03 +02:00
Daniel García
6b6ea3c8bf Merge branch 'BlackDex-fix-issue-2566' into main 2022-06-26 21:53:06 +02:00
Daniel García
bf403fee7d Merge branch 'fix-issue-2566' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-issue-2566 2022-06-26 21:52:59 +02:00
Daniel García
5cd920cf6f Merge branch 'BlackDex-allow-firefox-relay' into main 2022-06-26 21:51:50 +02:00
BlackDex
45d3b479bc Small change in log-level for better debugging
Regarding some recent issues with sending attachments, but previously
also some changes to the API: when something caused a `400` error, it
just returned that there is something wrong, without much detail on
what exactly.

To help get a bit more detailed information, we should set the
log-level for `_` to at least `Warn`.
2022-06-26 14:49:26 +02:00
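A minimal sketch of that adjustment with the fern logging crate; the `vaultwarden` target and its Info level are assumptions for the example:

    // Default every unmatched target ("_") to Warn so framework-level
    // errors (e.g. a 400 during request parsing) carry more detail,
    // while keeping our own crate at its usual level.
    fn init_logging() -> Result<(), fern::InitError> {
        fern::Dispatch::new()
            .level(log::LevelFilter::Warn)
            .level_for("vaultwarden", log::LevelFilter::Info)
            .chain(std::io::stdout())
            .apply()?;
        Ok(())
    }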
BlackDex
c7a752b01d Update dep's and small improvements on favicons
- Updated dependencies (html5gum for favicon downloading)
  * Also openssl, time, jsonwebtoken and r2d2
- Small optimizations on downloading favicons.
  It now only emits tokens/tags which need to be parsed; all others are
  skipped. This prevents unneeded items from being parsed within the
  for-loop.
2022-06-25 11:29:08 +02:00
BlackDex
099d359628 Fix identicons not always working
Fixes #2570
Reverted the `defer` option for these scripts, as it seems to cause
issues in some situations.
2022-06-22 16:38:16 +02:00
BlackDex
006a2aacbb Allow Firefox Relay in CSP.
This PR is needed for https://github.com/dani-garcia/bw_web_builds/pull/71
Without this the web-vault will refuse to make calls to the Firefox Relay API.

Also fixed a small issue with the pre-commit config.
2022-06-22 16:30:31 +02:00
BlackDex
b71d9dd53e Fix for issue #2566
This PR fixes #2566
If organizational syncs returned a FolderId it would cause the web-vault
to hide the cipher, because a FolderId is set. Upstream seems to
not return FolderId and Favorite; when set to null/false the behavior
is the same.

In this PR I have added a new CipherSyncType enum to select which type
of sync to execute, and return an empty list for both Folders and Favorites if this is for Orgs.
This also reduces the database load a bit since it will not execute those queries.
2022-06-21 17:36:07 +02:00
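A sketch of the enum idea with simplified signatures; the real code threads this through the cipher sync queries:

    pub enum CipherSyncType {
        User,
        Organization,
    }

    // For org syncs, skip the folder/favorite queries entirely and hand the
    // client empty lists, so ciphers are not hidden behind a stale FolderId.
    fn folders_for_sync(sync_type: &CipherSyncType, user_folders: Vec<String>) -> Vec<String> {
        match sync_type {
            CipherSyncType::User => user_folders,
            CipherSyncType::Organization => Vec::new(),
        }
    }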
Daniel García
887e320e7f Merge pull request #2555 from jjlin/global-domains
Sync global_domains.json
2022-06-15 20:44:35 +02:00
Daniel García
d7c18fd86e Merge pull request #2556 from binlab/patch-1
A little deprecation change
2022-06-15 20:44:14 +02:00
Daniel García
7566f3db3e Merge pull request #2543 from BlackDex/update-and-fixes
Updated deps and misc fixes and updates
2022-06-15 20:43:26 +02:00
BlackDex
5d05ec58be Updated deps and misc fixes and updates
- Updated some Rust dependencies
- Fixed an issue with the CSP header; this was not configured correctly
- Prevent sending CSP and Frame headers for the MFA connector.html files.
  Otherwise some clients will fail to handle these protocols.
- Add `unsafe-inline` for `script-src` only to the CSP for the Admin Interface
- Updated JavaScript and CSS files for the Admin interface
- Changed the layout for showing overridden settings; it is better visible now.
- Made the version check cacheable to prevent hitting the Github API rate limits
- Hide the `database_url` as if it were a password in the Admin Interface.
  Otherwise, for MariaDB/MySQL or PostgreSQL, it was shown in plain text.
- Fixed an issue where pressing enter on the SMTP Test would save the config.
  resolves #2542
- Prevent user names longer than 50 characters
  resolves #2419
2022-06-14 14:51:51 +02:00
Mark
d9a452f558 A little deprecation change 2022-06-13 13:56:41 +03:00
Jeremy Lin
dec03b3dc0 Sync global_domains.json to bitwarden/server@194b76c (HealthCare.gov) 2022-06-12 20:15:20 -07:00
Jeremy Lin
85950bdc0b Sync global_domains.json to bitwarden/server@496c9a5 (Proton) 2022-06-12 20:14:30 -07:00
Daniel García
f95bd3bb04 Update pico-args 2022-06-04 19:16:36 +02:00
BlackDex
e33b8fab34 Re-Base, Update crates and small change. 2022-06-04 19:14:14 +02:00
Daniel García
b00fbf153e Fix clippy lint and remove unused log 2022-06-04 19:13:58 +02:00
Daniel García
0de5919a16 Fix incorrect pings sent, and respond to pings from the client 2022-06-04 19:13:58 +02:00
Daniel García
699777be9e use dashmap in icons blacklist regex 2022-06-04 19:13:58 +02:00
Daniel García
16ff49d712 Move to job_scheduler_ng 2022-06-04 19:13:57 +02:00
Daniel García
54c78cf06d Migrate old ws crate to tungstenite, which is async and also removes over 20 old dependencies 2022-06-04 19:13:39 +02:00
Daniel García
303eaabeea Merge branch 'paolobarbolini-lettre-improvements' into main 2022-06-04 19:13:12 +02:00
Daniel García
6b6f5b8d04 Merge branch 'lettre-improvements' of https://github.com/paolobarbolini/vaultwarden into paolobarbolini-lettre-improvements 2022-06-04 19:10:51 +02:00
Daniel García
0c18a7e306 Merge branch 'paolobarbolini-lettre-rc7' into main 2022-06-04 19:09:11 +02:00
Daniel García
a23a38080b Merge branch 'lettre-rc7' of https://github.com/paolobarbolini/vaultwarden into paolobarbolini-lettre-rc7 2022-06-04 19:09:03 +02:00
Daniel García
316ca66a4b Merge branch 'Lowaiz-add_disabled_member_to_json_user' into main 2022-06-04 19:08:23 +02:00
Daniel García
2f71a01877 Merge branch 'add_disabled_member_to_json_user' of https://github.com/Lowaiz/vaultwarden into Lowaiz-add_disabled_member_to_json_user 2022-06-04 19:08:15 +02:00
Daniel García
d5cfbfc71d Update web vault to v2022.05.0 2022-06-04 19:07:15 +02:00
Paolo Barbolini
12612da75e Remove manual IDN handling 2022-06-04 19:02:51 +02:00
Paolo Barbolini
68ec5f2a18 Use MultiPart::alternative_plain_html instead of manual impl 2022-06-04 14:53:27 +02:00
Paolo Barbolini
00670450df Bump lettre to 0.10.0-rc.7 2022-06-04 14:47:26 +02:00
Lyonel Martinez
dbd95e08e9 Adding "UserEnabled" and "CreatedAt" member to the json output of a User in the admin/users and admin/users/<ID> web routes. 2022-06-02 15:13:58 +02:00
Daniel García
3713f2d134 Merge pull request #2507 from BlackDex/fix-persisten-volume-check
Fix persistent volume check
2022-05-28 14:56:47 +02:00
BlackDex
a85a250dfd Fix persistent volume check
It seemed there were some issues building the cross-platform images.
This PR fixes #2501 so building the containers will work again.
2022-05-28 09:31:09 +02:00
Daniel García
5845ed2c92 Merge pull request #2501 from BlackDex/add-persistent-volume-check-docker
Add a persistent volume check.
2022-05-27 19:41:42 +02:00
BlackDex
40ed505581 Add a persistent volume check.
This will add a persistent volume check to make sure that, when running
containers, a volume is used for persistent storage.

This check can be bypassed if someone configures
`I_REALLY_WANT_VOLATILE_STORAGE=true` as an environment variable.

This should prevent issues like #2493.
2022-05-26 09:39:56 +02:00
Daniel García
bf0b8d9968 Merge pull request #2491 from BlackDex/issue-2490
Fix armv6 issue with bullseye images
2022-05-24 15:46:34 +02:00
Daniel García
d0a7437dbd Merge pull request #2489 from fox34/update-env-template
Add TMP_FOLDER to .env.template
2022-05-24 15:33:22 +02:00
BlackDex
21b433c5d7 Fix armv6 issue with bullseye images
It looks like the armv6 bullseye images are missing a symlink to the
dynamic linker. The previous buster images had this symlink, bullseye
does not.

This PR adds that symlink again, only for the Debian armv6 build.

Resolves #2490
2022-05-24 15:25:51 +02:00
fox34
7c89bc619a Add TMP_FOLDER to .env.template 2022-05-24 09:38:16 +02:00
Daniel García
0d3daa9fc6 Remove recommendation to set ROCKET_CLI_COLORS to off
The value is now a boolean so setting it to off will cause an error
2022-05-23 20:19:29 +02:00
Daniel García
0c1f0bad17 Merge branch 'BlackDex-update-rust-version-dockerfile' into main 2022-05-21 19:17:29 +02:00
Daniel García
72cf59fa54 Merge branch 'update-rust-version-dockerfile' of https://github.com/BlackDex/vaultwarden into BlackDex-update-rust-version-dockerfile 2022-05-21 19:17:24 +02:00
Daniel García
527bc1f625 Merge branch 'BlackDex-fix-upload-limits-and-logging' into main 2022-05-21 19:17:15 +02:00
BlackDex
2168d09421 Update Rust version in Dockerfile
Updated Rust from v1.60 to v1.61 for building the images.
Also pinned the Rust version for the Alpine build images, to prevent
those images from being built with a newer version when one is released.
2022-05-21 17:46:14 +02:00
BlackDex
1c266031d7 Fix upload limits and disable color logs
The limits for uploading files were too small in regard to the allowed
maximum file size of the Bitwarden clients, including the web-vault.
Changed both `data-form` (used for Send) and `file` (used for
attachments) to be 525MB, the same size we already check ourselves.

Also changed the `json` limit to be 20MB; this should allow very large
imports with 4000-5000+ items, depending on whether there are large notes.

Also disabled Rocket's color output, since these colors were also
sent to the log files and syslog. I think this changed somewhere in
Rocket 0.5rc; need to look a bit further into that.
2022-05-21 17:28:29 +02:00
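With Rocket 0.5's Limits API the sizes from this commit could be expressed as below; how Vaultwarden actually wires these into its Rocket config is not shown here:

    use rocket::data::{Limits, ToByteUnit};

    fn upload_limits() -> Limits {
        Limits::default()
            .limit("data-form", 525.megabytes()) // Send uploads
            .limit("file", 525.megabytes())      // cipher attachments
            .limit("json", 20.megabytes())       // large imports
    }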
Daniel García
b636d20c64 Update web vault to v2.28.1 2022-05-11 22:19:22 +02:00
Daniel García
2a9ca88c2a Dependency updates 2022-05-11 22:03:07 +02:00
Daniel García
b9c434addb Merge branch 'jjlin-db-conn-init' into main 2022-05-11 21:36:11 +02:00
Daniel García
451ad47327 Merge branch 'db-conn-init' of https://github.com/jjlin/vaultwarden into jjlin-db-conn-init 2022-05-11 21:36:00 +02:00
Daniel García
7f61dd5fe3 Merge branch 'BlackDex-sql-optimizations' into main 2022-05-11 21:33:43 +02:00
BlackDex
3ca85028ea Improve sync speed and updated dep. versions
Improved sync speed by resolving the N+1 query issues.
Solves #1402 and #1453

With this change there is just one query done to retrieve all the
important data, and matching is done in-code/in-memory.

With a very large database the sync time went down by about a factor of three.

Also updated misc crates and Github Actions versions.
2022-05-06 17:01:02 +02:00
Jeremy Lin
542a73cc6e Switch to a single config option for database connection init
The main pro is less config options, while the main con is less clarity in
what the defaults are for the various database types.
2022-04-29 00:26:49 -07:00
Jeremy Lin
78d07e2fda Add default connection-scoped pragmas for SQLite
`PRAGMA busy_timeout = 5000` tells SQLite to keep trying for up to 5000 ms
when there is lock contention, rather than aborting immediately. This should
hopefully prevent the vast majority of "database is locked" panics observed
since the async transition.

`PRAGMA synchronous = NORMAL` trades better performance for a small potential
loss in durability (the default is `FULL`). The SQLite docs recommend `NORMAL`
as "a good choice for most applications running in WAL mode".
2022-04-26 17:55:19 -07:00
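A sketch of running such connection-scoped statements with diesel (assuming diesel 2.x; the real code applies `DATABASE_CONN_INIT` through its connection pool):

    use diesel::connection::SimpleConnection;
    use diesel::prelude::*;
    use diesel::sqlite::SqliteConnection;

    fn establish_with_init(url: &str) -> QueryResult<SqliteConnection> {
        let mut conn = SqliteConnection::establish(url).expect("failed to connect");
        // Retry for up to 5000 ms on lock contention instead of failing
        // immediately, and relax fsync behavior as recommended for WAL mode.
        conn.batch_execute("PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;")?;
        Ok(conn)
    }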
Jeremy Lin
b617ffd2af Add support for database connection init statements
This is probably mainly useful for running connection-scoped pragma statements.
2022-04-26 17:50:20 -07:00
Daniel García
3abf173d89 Merge pull request #2433 from jjlin/meta-apis
Add `/api/{alive,now,version}` endpoints
2022-04-24 18:36:08 +02:00
Jeremy Lin
df8aeb10e8 Add /api/{alive,now,version} endpoints
The added endpoints work the same as in their upstream implementations.

Upstream also implements `/api/ip`. This seems to include the server's public
IP address (the one that should be hidden behind Cloudflare), which doesn't
seem like a great idea.
2022-04-23 23:47:49 -07:00
Daniel García
26ad06df7c Update web vault to 2.28.0 and dependencies 2022-04-23 18:18:15 +02:00
Daniel García
37fff3ef4a Merge pull request #2400 from jjlin/global-domains
Sync global_domains.json
2022-03-30 22:02:44 +02:00
Jeremy Lin
28c5e63bf5 Sync global_domains.json to bitwarden/server@3521ccb (Just Eat Takeaway.com) 2022-03-29 11:41:43 -07:00
Daniel García
a07c213b3e Merge pull request #2398 from BlackDex/remove-u2f
Remove u2f implementation
2022-03-27 18:43:09 +02:00
Daniel García
ed72741f48 Merge pull request #2397 from BlackDex/fix-mimalloc-build
Fix building mimalloc on armv6
2022-03-27 18:42:59 +02:00
BlackDex
fb0c23b71f Remove u2f implementation
For a while now WebAuthn has replaced u2f.
And since web-vault v2.27.0 the connector files for u2f have been removed.
Also, on the official bitwarden server the endpoint to `/two-factor/get-u2f` results in a 404.

- Removed all u2f code except the migration code from u2f to WebAuthn
2022-03-27 17:25:04 +02:00
BlackDex
d98f95f536 Fix building mimalloc on armv6
The armv6 builds need a specific location for the libatomic.a file.
This commit fixes that by adding a RUSTFLAGS argument for it.

Also removed the `link-arg=-s`, since stripping is now already done via the release profile.
And removed the CFLAGS for armv7; this is already fixed by default in the blackdex/rust-musl images.
2022-03-27 14:45:50 +02:00
Daniel García
6643e83b61 Disable mimalloc in arm for now 2022-03-26 20:11:46 +01:00
Daniel García
7b742009a1 Update web vault to 2.27.0 and dependencies 2022-03-26 16:35:54 +01:00
Daniel García
649e2b48f3 Merge branch 'Wonderfall-x-xss-protection' into main 2022-03-26 16:18:48 +01:00
Daniel García
81f0c2b0e8 Merge branch 'x-xss-protection' of https://github.com/Wonderfall/vaultwarden into Wonderfall-x-xss-protection 2022-03-26 16:18:34 +01:00
Daniel García
80d8aa7239 Merge branch 'BlackDex-misc-updates-202203' into main 2022-03-26 16:18:24 +01:00
Wonderfall
27d4b713f6 disable legacy X-XSS-Protection feature
Obsolete in every modern browser, unsafe, and replaced by CSP
2022-03-21 15:29:01 +01:00
BlackDex
b0faaf2527 Several updates and fixes
- Removed all `thread::sleep` and use `tokio::time::sleep` now.
  This solves an issue with updating to Bullseye ( Resolves #1998 )
- Updated all Debian images to Bullseye
- Added MiMalloc feature and enabled it by default for Alpine based images
  This increases performance for the Alpine images because the default
  memory allocator for MUSL based binaries isn't that fast
- Updated `dotenv` to `dotenvy` a maintained and updated fork
- Fixed an issue with a newer jslib (not fully released yet)
  That version uses a different endpoint for `prelogin` ( Resolves #2378 )
2022-03-20 18:51:24 +01:00
Daniel García
8d06d9c111 Merge pull request #2354 from BlackDex/multi-account-login
Update login API code and update crates to fix CVE
2022-03-13 15:46:49 +01:00
BlackDex
c4d565b15b Update login API code
- Updated jsonwebtoken to latest version
- Trim `username` received from the login form ( Fixes #2348 )
- Make uuid and user_uuid a combined primary key for the devices table ( Fixes #2295 )
- Updated crates including regex which contains a CVE ( https://blog.rust-lang.org/2022/03/08/cve-2022-24713.html )
2022-03-12 18:45:45 +01:00
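In diesel's `table!` macro a combined primary key is just the column list in parentheses; a sketch with the column set abbreviated:

    diesel::table! {
        devices (uuid, user_uuid) {
            uuid -> Text,
            user_uuid -> Text,
            name -> Text,
        }
    }

With `(uuid, user_uuid)` as the key, the same device uuid can exist once per user account, which is what logging into multiple accounts from one device needs.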
Daniel García
06f8e69c70 Update web vault to 2.26.1 2022-02-27 22:21:36 +01:00
Daniel García
7db52374cd Merge branch 'BlackDex-async-updates' into async 2022-02-27 21:51:19 +01:00
Daniel García
843f205f6f Merge branch 'async-updates' of https://github.com/BlackDex/vaultwarden into BlackDex-async-updates 2022-02-27 21:50:33 +01:00
Daniel García
2ff51ae77e formatting 2022-02-27 21:37:24 +01:00
Daniel García
2b75d81a8b Ignore unused field 2022-02-27 21:37:24 +01:00
Daniel García
cad0dcbed1 await the mutex in db_run and use block_in_place for its contents 2022-02-27 21:37:24 +01:00
BlackDex
19b8388950 Upd Dockerfiles, crates. Fixed rust 2018 idioms
- Updated crates
- Fixed Dockerfiles to build using the rust stable version
- Enabled warnings for rust 2018 idioms and fixed them.
2022-02-27 21:37:23 +01:00
BlackDex
87e08b9e50 Async/Awaited all db methods
This is a rather large PR which updates the async branch to have all the
database methods as an async fn.

Some iter/map logic needed to be changed to a stream::iter().then(), but
besides that most changes were just adding async/await where needed.
2022-02-27 21:37:23 +01:00
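The iter/map-to-stream change mentioned above looks roughly like this; `load_one` is a made-up async loader for the example:

    use futures::stream::{self, StreamExt};

    async fn load_one(id: i64) -> String {
        format!("record {id}")
    }

    // A plain `ids.iter().map(load_one)` would now yield futures rather
    // than values; a stream lets each item be awaited in turn.
    async fn load_all(ids: Vec<i64>) -> Vec<String> {
        stream::iter(ids)
            .then(load_one)
            .collect::<Vec<String>>()
            .await
    }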
Daniel García
0b7d6bf6df Update to rocket 0.5 and made code async; still missing: updating all db calls, which are currently blocking 2022-02-27 21:36:31 +01:00
Daniel García
89fe05b6cc Merge branch 'taylorwmj-main' into main 2022-02-27 21:22:24 +01:00
Daniel García
d73d74e78f Merge branch 'main' of https://github.com/taylorwmj/vaultwarden into taylorwmj-main 2022-02-27 21:22:15 +01:00
Daniel García
9a682b7a45 Merge branch 'TinfoilSubmarine-custom-env-path' into main 2022-02-27 21:21:46 +01:00
Daniel García
94201ca133 Merge branch 'custom-env-path' of https://github.com/TinfoilSubmarine/vaultwarden into TinfoilSubmarine-custom-env-path 2022-02-27 21:21:38 +01:00
Daniel García
99f9e7252a Merge branch 'jaen-add-ip-to-send-unauthorized-message' into main 2022-02-27 21:19:21 +01:00
BlackDex
42136a7097 Favicon, SMTP and misc updates
Favicon:
- Replaced the HTML tokenizer, much faster now.
- Caching the domain blacklist function.
- Almost all functions are async now.
- Fixed a bug in minimizing the data to parse
- Changed the maximum icon download size to 5MB to match Bitwarden
- Added `apple-touch-icon.png` as a second fallback besides `favicon.ico`

SMTP:
- Deprecated SMTP_SSL and SMTP_EXPLICIT_TLS, replaced with SMTP_SECURITY

Misc:
- Fixed an issue when `resolv.conf` contains errors and trust-dns panics (Fixes #2283)
- Updated JavaScript and CSS files for the admin interface
- Fixed an issue with the /admin interface which did not clear the login cookie correctly
- Prevent websocket notifications during org import; this caused a lot of traffic and slowed down the import.
  This is also the same as Bitwarden, which does not trigger this refresh via websockets.

Rust:
- Updated to use v1.59
- Use the new `strip` option and enabled to strip `debuginfo`
- Enabled `lto` with `thin`
- Removed the strip RUN from the alpine armv7, this is now done automatically
2022-02-26 13:56:42 +01:00
taylorwmj
9bb4c38bf9 Added autofocus to pw field on admin login page 2022-02-22 20:44:29 -06:00
BlackDex
5f01db69ff Update async to prepare for main merge
- Changed nightly to stable in Dockerfile and Workflow
- Updated Dockerfile to use stable and updated ENV's
- Removed 0.0.0.0 as the default addr; it now uses ROCKET_ADDRESS or the default
- Updated Github Workflow actions to the latest versions
- Updated Hadolint version
- Re-ordered the Cargo.toml file a bit, grouping libs which are related
- Updated some libs
- Updated .dockerignore file
2022-02-22 20:00:33 +01:00
Joel Beckmeyer
c59a7f4a8c document ENV_FILE variable usage 2022-02-16 14:42:12 -05:00
Joel Beckmeyer
8295688bed Add support for custom .env file path 2022-02-16 09:25:37 -05:00
Tomek Mańko
9713a3a555 Add IP address to missing/invalid password message for Sends 2022-02-13 13:13:42 +01:00
Daniel García
d781981bbd formatting 2022-01-30 22:26:19 +01:00
Daniel García
5125fdb882 Ignore unused field 2022-01-30 22:26:19 +01:00
Daniel García
fd9693b961 await the mutex in db_run and use block_in_place for its contents 2022-01-30 22:26:19 +01:00
BlackDex
f38926d666 Upd Dockerfiles, crates. Fixed rust 2018 idioms
- Updated crates
- Fixed Dockerfiles to build using the rust stable version
- Enabled warnings for rust 2018 idioms and fixed them.
2022-01-30 22:26:18 +01:00
BlackDex
775d07e9a0 Async/Awaited all db methods
This is a rather large PR which updates the async branch to have all the
database methods as an async fn.

Some iter/map logic needed to be changed to a stream::iter().then(), but
besides that most changes were just adding async/await where needed.
2022-01-30 22:26:18 +01:00
Daniel García
2d5f172e77 Update to rocket 0.5 and made code async; still missing: updating all db calls, which are currently blocking 2022-01-30 22:25:54 +01:00
Daniel García
08f0de7b46 Dependency updates 2022-01-30 22:24:42 +01:00
164 changed files with 22658 additions and 12588 deletions

View File

@@ -3,13 +3,18 @@ target
# Data folder
data
# Misc
.env
.env.template
.gitattributes
.gitignore
rustfmt.toml
# IDE files
.vscode
.idea
.editorconfig
*.iml
# Documentation
@@ -19,9 +24,17 @@ data
*.yml
*.yaml
# Docker folders
# Docker
hooks
tools
Dockerfile
.dockerignore
docker/**
!docker/healthcheck.sh
!docker/start.sh
# Web vault
web-vault
web-vault
# Vaultwarden Resources
resources

View File

@@ -1,8 +1,14 @@
# shellcheck disable=SC2034,SC2148
## Vaultwarden Configuration File
## Uncomment any of the following lines to change the defaults
##
## Be aware that most of these settings will be overridden if they were changed
## in the admin interface. Those overrides are stored within DATA_FOLDER/config.json .
##
## By default, Vaultwarden expects for this file to be named ".env" and located
## in the current working directory. If this is not the case, the environment
## variable ENV_FILE can be set to the location of this file prior to starting
## Vaultwarden.
## Main data folder
# DATA_FOLDER=data
@@ -24,11 +30,21 @@
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Database connection initialization
## Allows SQL statements to be run whenever a new database connection is created.
## This is mainly useful for connection-scoped pragmas.
## If empty, a database-specific default is used:
## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
## - MySQL: ""
## - PostgreSQL: ""
# DATABASE_CONN_INIT=""
## Individual folders, these override %DATA_FOLDER%
# RSA_KEY_FILENAME=data/rsa_key
# ICON_CACHE_FOLDER=data/icon_cache
# ATTACHMENTS_FOLDER=data/attachments
# SENDS_FOLDER=data/sends
# TMP_FOLDER=data/tmp
## Templates data folder, by default uses embedded templates
## Check source code to see the format
@@ -65,11 +81,34 @@
## This setting applies globally to all users.
# EMERGENCY_ACCESS_ALLOWED=true
## Controls whether event logging is enabled for organizations
## This setting applies to organizations.
## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
# ORG_EVENTS_ENABLED=false
## Number of days to retain events stored in the database.
## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
# EVENTS_DAYS_RETAIN=
## BETA FEATURE: Groups
## Controls whether group support is enabled for organizations
## This setting applies to organizations.
## Disabled by default because this is a beta feature, it contains known issues!
## KNOW WHAT YOU ARE DOING!
# ORG_GROUPS_ENABLED=false
## Job scheduler settings
##
## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
## and are always in terms of UTC time (regardless of your local time zone settings).
##
## The schedule format is a bit different from crontab as crontab does not contain seconds.
## You can test the format here: https://crontab.guru, but remove the first digit!
## SEC MIN HOUR DAY OF MONTH MONTH DAY OF WEEK
## "0 30 9,12,15 1,15 May-Aug Mon,Wed,Fri"
## "0 30 * * * * "
## "0 30 1 * * * "
##
## How often (in ms) the job scheduler thread checks for jobs that need running.
## Set to 0 to globally disable scheduled jobs.
# JOB_POLL_INTERVAL_MS=30000
@@ -87,12 +126,16 @@
# INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
##
## Cron schedule of the job that sends expiration reminders to emergency access grantors.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
## Defaults to hourly (3 minutes after the hour). Set blank to disable this job.
# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 3 * * * *"
##
## Cron schedule of the job that grants emergency access requests that have met the required wait time.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"
## Defaults to hourly (7 minutes after the hour). Set blank to disable this job.
# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 7 * * * *"
##
## Cron schedule of the job that cleans old events from the event table.
## Defaults to daily. Set blank to disable this job. Also without EVENTS_DAYS_RETAIN set, this job will not start.
# EVENT_CLEANUP_SCHEDULE="0 10 0 * * *"
## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true
@@ -102,12 +145,10 @@
# LOG_TIMESTAMP_FORMAT="%Y-%m-%d %H:%M:%S.%3f"
## Logging to file
## It's recommended to also set 'ROCKET_CLI_COLORS=off'
# LOG_FILE=/path/to/log
## Logging to Syslog
## This requires extended logging
## It's recommended to also set 'ROCKET_CLI_COLORS=off'
# USE_SYSLOG=false
## Log level
@@ -120,7 +161,7 @@
## Enable WAL for the DB
## Set to false to avoid enabling WAL during startup.
## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
## this setting only prevents vaultwarden from automatically enabling it on start.
## this setting only prevents Vaultwarden from automatically enabling it on start.
## Please read project wiki page about this setting first before changing the value as it can
## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true
@@ -218,9 +259,13 @@
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Token for the admin interface, preferably use a long random string
## One option is to use 'openssl rand -base64 48'
## Token for the admin interface, preferably an Argon2 PCH string
## Vaultwarden has a built-in generator by calling `vaultwarden hash`
## For details see: https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page#secure-the-admin_token
## If not set, the admin panel is disabled
## New Argon2 PHC string
# ADMIN_TOKEN='$argon2id$v=19$m=65540,t=3,p=4$MmeKRnGK5RW5mJS7h3TOL89GrpLPXJPAtTK8FTqj9HM$DqsstvoSAETl9YhnsXbf43WeaUwJC6JhViIvuPoig78'
## Old plain text string (Will generate warnings in favor of Argon2)
# ADMIN_TOKEN=Vy2VyYTTsKPv8W5aEOWUbB/Bt3DEKePbHmI4m9VcemUMS2rEviDowNAFqYi1xjmp
## Enable this to bypass the admin panel security. This option is only
@@ -232,6 +277,10 @@
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
## The number of hours after which an organization invite token, emergency access invite token,
## email verification token and deletion request token will expire (must be at least 1)
# INVITATION_EXPIRATION_HOURS=120
## Per-organization attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per organization.
## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
@@ -253,9 +302,12 @@
## This setting applies globally to all users.
# INCOMPLETE_2FA_TIME_LIMIT=3
## Controls the PBBKDF password iterations to apply on the server
## The change only applies when the password is changed
# PASSWORD_ITERATIONS=100000
## Number of server-side passwords hashing iterations for the password hash.
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=350000
## Controls whether users can set password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
## Controls whether a password hint should be shown directly in the web page if
## SMTP service is not configured. Not recommended for publicly-accessible instances
@@ -267,7 +319,7 @@
## It's recommended to configure this value, otherwise certain functionality might not work,
## like attachment downloads, email links and U2F.
## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
# DOMAIN=https://bw.domain.tld:8443
# DOMAIN=https://vw.domain.tld:8443
## Allowed iframe ancestors (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors
@@ -282,11 +334,14 @@
## Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2.
# LOGIN_RATELIMIT_MAX_BURST=10
## Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in.
## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
# ADMIN_RATELIMIT_SECONDS=300
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
## Set the lifetime of admin sessions to this value (in minutes).
# ADMIN_SESSION_LIFETIME=20
## Yubico (Yubikey) Settings
## Set your Client ID and Secret Key for Yubikey OTP
## You can generate it here: https://upgrade.yubico.com/getapikey/
@@ -325,19 +380,23 @@
# ROCKET_WORKERS=10
# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
## Mail specific settings, set SMTP_HOST and SMTP_FROM to enable the mail service.
## Mail specific settings, set SMTP_FROM and either SMTP_HOST or USE_SENDMAIL to enable the mail service.
## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory
# SMTP_HOST=smtp.domain.tld
# SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Vaultwarden
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default.
# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here.
# SMTP_SECURITY=starttls # ("starttls", "force_tls", "off") Enable a secure connection. Default is "starttls" (Explicit - ports 587 or 25), "force_tls" (Implicit - port 465) or "off", no encryption (port 25)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15
# Whether to send mail via the `sendmail` command
# USE_SENDMAIL=false
# Which sendmail command to use. The one found in the $PATH is used if not specified.
# SENDMAIL_COMMAND="/path/to/sendmail"
## Defaults for SSL is "Plain" and "Login" and nothing for Non-SSL connections.
## Possible values: ["Plain", "Login", "Xoauth2"].
## Multiple options need to be separated by a comma ','.
@@ -348,6 +407,9 @@
## but might need to be changed in case it trips some anti-spam filters
# HELO_NAME=
## Embed images as email attachments
# SMTP_EMBED_IMAGES=false
## SMTP debugging
## When set to true this will output very detailed SMTP messages.
## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!

.github/FUNDING.yml vendored
View File

@@ -1,2 +1,3 @@
github: dani-garcia
liberapay: dani-garcia
custom: ["https://paypal.me/DaniGG"]

View File

@@ -8,8 +8,9 @@ on:
- "migrations/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain"
- "rustfmt.toml"
- "diesel.toml"
pull_request:
paths:
- ".github/workflows/build.yml"
@@ -17,131 +18,185 @@ on:
- "migrations/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain"
- "rustfmt.toml"
- "diesel.toml"
jobs:
build:
runs-on: ubuntu-20.04
timeout-minutes: 120
# Make warnings errors, this is to prevent warnings slipping through.
# This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes.
env:
RUSTFLAGS: "-D warnings"
CARGO_REGISTRIES_CRATES_IO_PROTOCOL: git # Use the old git protocol until it is stable probably in 1.68 or 1.69. MSRV needs to be at this before removed.
strategy:
fail-fast: false
matrix:
channel:
- nightly
target-triple:
- x86_64-unknown-linux-gnu
include:
- target-triple: x86_64-unknown-linux-gnu
host-triple: x86_64-unknown-linux-gnu
features: [sqlite,mysql,postgresql] # Remember to update the `cargo test` to match the amount of features
channel: nightly
os: ubuntu-20.04
ext: ""
- "rust-toolchain" # The version defined in rust-toolchain
- "msrv" # The supported MSRV
name: Build and Test ${{ matrix.channel }}
name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
runs-on: ${{ matrix.os }}
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
- name: "Checkout"
uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v3.5.0
# End Checkout the repo
# Install musl-tools when needed
- name: Install musl tools
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends musl-dev musl-tools cmake
if: matrix.target-triple == 'x86_64-unknown-linux-musl'
# End Install musl-tools when needed
# Install dependencies
- name: Install dependencies Ubuntu
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
if: startsWith( matrix.os, 'ubuntu' )
- name: "Install dependencies Ubuntu"
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkg-config
# End Install dependencies
# Enable Rust Caching
- uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
# End Enable Rust Caching
# Determine rust-toolchain version
- name: Init Variables
id: toolchain
shell: bash
run: |
if [[ "${{ matrix.channel }}" == 'rust-toolchain' ]]; then
RUST_TOOLCHAIN="$(cat rust-toolchain)"
elif [[ "${{ matrix.channel }}" == 'msrv' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'rust-version.*"(\K.*?)(?=")' Cargo.toml)"
else
RUST_TOOLCHAIN="${{ matrix.channel }}"
fi
echo "RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" | tee -a "${GITHUB_OUTPUT}"
# End Determine rust-toolchain version
# Uses the rust-toolchain file to determine version
- name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@fc3253060d0c959bea12a59f10f8391454a0b02d # master @ 2023-03-21 - 06:36 GMT+1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
profile: minimal
target: ${{ matrix.target-triple }}
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
components: clippy, rustfmt
# End Uses the rust-toolchain file to determine version
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@fc3253060d0c959bea12a59f10f8391454a0b02d # master @ 2023-03-21 - 06:36 GMT+1
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
# End Install the MSRV channel to be used
# Enable Rust Caching
- uses: Swatinem/rust-cache@6fd3edff6979b79f87531400ad694fb7f2c84b1f # v2.2.1
# End Enable Rust Caching
# Show environment
- name: "Show environment"
run: |
rustc -vV
cargo -vV
# End Show environment
# Run cargo tests (In release mode to speed up future builds)
# First test all features together, afterwards test them separately.
- name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: test
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
# Test single features
# 0: sqlite
- name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: test
args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[0] != '' }}
# 1: mysql
- name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: test
args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[1] != '' }}
# 2: postgresql
- name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: test
args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[2] != '' }}
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc"
id: test_sqlite_mysql_postgresql_mimalloc
if: $${{ always() }}
run: |
cargo test --release --features sqlite,mysql,postgresql,enable_mimalloc
- name: "test features: sqlite,mysql,postgresql"
id: test_sqlite_mysql_postgresql
if: $${{ always() }}
run: |
cargo test --release --features sqlite,mysql,postgresql
- name: "test features: sqlite"
id: test_sqlite
if: $${{ always() }}
run: |
cargo test --release --features sqlite
- name: "test features: mysql"
id: test_mysql
if: $${{ always() }}
run: |
cargo test --release --features mysql
- name: "test features: postgresql"
id: test_postgresql
if: $${{ always() }}
run: |
cargo test --release --features postgresql
# End Run cargo tests
# Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
- name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: clippy
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
- name: "clippy features: sqlite,mysql,postgresql,enable_mimalloc"
id: clippy
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
run: |
cargo clippy --release --features sqlite,mysql,postgresql,enable_mimalloc -- -D warnings
# End Run cargo clippy
# Run cargo fmt
- name: '`cargo fmt`'
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: fmt
args: --all -- --check
# Run cargo fmt (Only run on rust-toolchain defined version)
- name: "check formatting"
id: formatting
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
run: |
cargo fmt --all -- --check
# End Run cargo fmt
# Build the binary
- name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
with:
command: build
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
# Check for any previous failures, if there are stop, else continue.
# This is useful so all test/clippy/fmt actions are done, and they can all be addressed
- name: "Some checks failed"
if: ${{ failure() }}
run: |
echo "### :x: Checks Failed!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "|Job|Status|" >> $GITHUB_STEP_SUMMARY
echo "|---|------|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql)|${{ steps.test_sqlite_mysql_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite)|${{ steps.test_sqlite.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (mysql)|${{ steps.test_mysql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (postgresql)|${{ steps.test_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|clippy (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.clippy.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|fmt|${{ steps.formatting.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "Please check the failed jobs and fix where needed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
exit 1
# Check for any previous failures, if there are stop, else continue.
# This is useful so all test/clippy/fmt actions are done, and they can all be addressed
- name: "All checks passed"
if: ${{ success() }}
run: |
echo "### :tada: Checks Passed!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Build the binary to upload to the artifacts
- name: "build features: sqlite,mysql,postgresql"
if: ${{ matrix.channel == 'rust-toolchain' }}
run: |
cargo build --release --features sqlite,mysql,postgresql
# End Build the binary
# Upload artifact to Github Actions
- name: Upload artifact
uses: actions/upload-artifact@27121b0bdffd731efa15d66772be8dc71245d074 # v2.2.4
- name: "Upload artifact"
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
name: vaultwarden
path: target/release/vaultwarden
# End Upload artifact to Github Actions

View File

@@ -1,33 +1,30 @@
name: Hadolint
on:
push:
paths:
- "docker/**"
pull_request:
paths:
- "docker/**"
on: [
push,
pull_request
]
jobs:
hadolint:
name: Validate Dockerfile syntax
runs-on: ubuntu-20.04
timeout-minutes: 30
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v3.5.0
# End Checkout the repo
# Download hadolint
# Download hadolint - https://github.com/hadolint/hadolint/releases
- name: Download hadolint
shell: bash
run: |
sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
sudo chmod +x /usr/local/bin/hadolint
env:
HADOLINT_VERSION: 2.7.0
HADOLINT_VERSION: 2.12.0
# End Download hadolint
# Test Dockerfiles

View File

@@ -24,21 +24,22 @@ jobs:
# Some checks to determine if we need to continue with building a new docker.
# We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
skip_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- name: Skip Duplicates Actions
id: skip_check
uses: fkirc/skip-duplicate-actions@f75dd6564bb646f95277dc8c3b80612e46a4a1ea # v3.4.1
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
with:
cancel_others: 'true'
# Only run this when not creating a tag
if: ${{ startsWith(github.ref, 'refs/heads/') }}
docker-build:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 120
needs: skip_check
# Start a local docker registry to be used to generate multi-arch images.
services:
@@ -47,11 +48,23 @@ jobs:
ports:
- 5000:5000
env:
DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speedup building!
# DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
# Use BuildKit (https://docs.docker.com/build/buildkit/) for better
# build performance and the ability to copy extended file attributes
# (e.g., for executable capabilities) across build phases.
DOCKER_BUILDKIT: 1
SOURCE_COMMIT: ${{ github.sha }}
SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
# The *_REPO variables need to be configured as repository variables
# Append `/settings/variables/actions` to your repository URL
# DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
# Check for Docker hub credentials in secrets
HAVE_DOCKERHUB_LOGIN: ${{ vars.DOCKERHUB_REPO != '' && secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
# GHCR_REPO needs to be 'ghcr.io/<user>/<repo>'
# Check for Github credentials in secrets
HAVE_GHCR_LOGIN: ${{ vars.GHCR_REPO != '' && github.repository_owner != '' && secrets.GITHUB_TOKEN != '' }}
# QUAY_REPO needs to be 'quay.io/<user>/<repo>'
# Check for Quay.io credentials in secrets
HAVE_QUAY_LOGIN: ${{ vars.QUAY_REPO != '' && secrets.QUAY_USERNAME != '' && secrets.QUAY_TOKEN != '' }}
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
strategy:
matrix:
@@ -60,17 +73,10 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v3.5.0
with:
fetch-depth: 0
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # v1.10.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Determine Docker Tag
- name: Init Variables
id: vars
@@ -78,42 +84,152 @@ jobs:
run: |
# Check which main tag we are going to build determined by github.ref
if [[ "${{ github.ref }}" == refs/tags/* ]]; then
echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
echo "DOCKER_TAG=${GITHUB_REF#refs/*/}" | tee -a "${GITHUB_OUTPUT}"
elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
echo "set-output name=DOCKER_TAG::testing"
echo "::set-output name=DOCKER_TAG::testing"
echo "DOCKER_TAG=testing" | tee -a "${GITHUB_OUTPUT}"
fi
# End Determine Docker Tag
- name: Build Debian based images
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_TOKEN }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
# Debian
# Docker Hub
- name: Build Debian based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' }}
if: ${{ matrix.base_image == 'debian' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Push Debian based images
- name: Push Debian based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' }}
if: ${{ matrix.base_image == 'debian' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Build Alpine based images
# GitHub Container Registry
- name: Build Debian based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' && env.HAVE_GHCR_LOGIN == 'true' }}
- name: Push Debian based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' && env.HAVE_GHCR_LOGIN == 'true' }}
# Quay.io
- name: Build Debian based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' && env.HAVE_QUAY_LOGIN == 'true' }}
- name: Push Debian based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' && env.HAVE_QUAY_LOGIN == 'true' }}
# Alpine
# Docker Hub
- name: Build Alpine based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' }}
if: ${{ matrix.base_image == 'alpine' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Push Alpine based images
- name: Push Alpine based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' }}
if: ${{ matrix.base_image == 'alpine' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
# GitHub Container Registry
- name: Build Alpine based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' && env.HAVE_GHCR_LOGIN == 'true' }}
- name: Push Alpine based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' && env.HAVE_GHCR_LOGIN == 'true' }}
# Quay.io
- name: Build Alpine based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' && env.HAVE_QUAY_LOGIN == 'true' }}
- name: Push Alpine based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' && env.HAVE_QUAY_LOGIN == 'true' }}

View File

@@ -3,5 +3,9 @@ ignored:
- DL3008
# disable explicit version for apk install
- DL3018
# disable check for consecutive `RUN` instructions
- DL3059
trustedRegistries:
- docker.io
- ghcr.io
- quay.io

View File

@@ -1,16 +1,20 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.0.1
rev: v4.4.0
hooks:
- id: check-yaml
- id: check-json
- id: check-toml
- id: mixed-line-ending
args: ["--fix=no"]
- id: end-of-file-fixer
exclude: "(.*js$|.*css$)"
- id: check-case-conflict
- id: check-merge-conflict
- id: detect-private-key
- id: check-symlinks
- id: forbid-submodules
- repo: local
hooks:
- id: fmt
@@ -25,14 +29,16 @@ repos:
description: Test the package for errors.
entry: cargo test
language: system
args: ["--features", "sqlite,mysql,postgresql", "--"]
types: [rust]
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
pass_filenames: false
- id: cargo-clippy
name: cargo clippy
description: Lint Rust sources
entry: cargo clippy
language: system
args: ["--features", "sqlite,mysql,postgresql", "--", "-D", "warnings"]
types: [rust]
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|rust-toolchain|clippy.toml|.*\.rs$)
pass_filenames: false

Cargo.lock (generated): 3512 lines changed

File diff suppressed because it is too large

View File

@@ -3,16 +3,17 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.60"
rust-version = "1.66.1"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
readme = "README.md"
license = "GPL-3.0-only"
license = "AGPL-3.0-only"
publish = false
build = "build.rs"
[features]
# default = ["sqlite"]
# Empty to keep compatibility, prefer to set USE_SYSLOG=true
enable_syslog = []
mysql = ["diesel/mysql", "diesel_migrations/mysql"]
@@ -20,135 +21,158 @@ postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"]
# Enable to use a vendored and statically linked openssl
vendored_openssl = ["openssl/vendored"]
# Enable MiMalloc memory allocator to replace the default malloc
# This can improve performance for Alpine builds
enable_mimalloc = ["mimalloc"]
# This is a development dependency, and should only be used during development!
# It enables the usage of the diesel_logger crate, which is able to output the generated queries.
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run.
query_logger = ["diesel_logger"]
# Enable unstable features, requires nightly
# Currently only used to enable Rust's official IP support
unstable = []
[target."cfg(not(windows))".dependencies]
syslog = "4.0.1"
# Logging
syslog = "6.0.1" # Needs to be v4 until fern is updated
[dependencies]
# Web framework for nightly with a focus on ease-of-use, expressibility, and speed.
rocket = { version = "=0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "=0.5.0-dev"
# Logging
log = "0.4.17"
fern = { version = "0.6.2", features = ["syslog-6"] }
tracing = { version = "0.1.37", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
# HTTP client
reqwest = { version = "0.11.8", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.7", default-features = false }
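For context, a minimal sketch of what the new `dotenvy` dependency does; the `DOMAIN` variable is just an example of a setting read from the environment:

```rust
fn main() {
    // Load variables from a `.env` file into the process environment,
    // silently doing nothing if the file is absent.
    dotenvy::dotenv().ok();
    if let Ok(domain) = std::env::var("DOMAIN") {
        println!("DOMAIN={domain}");
    }
}
```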
# Used for custom short lived cookie jar
cookie = "0.15.1"
cookie_store = "0.15.1"
bytes = "1.1.0"
url = "2.2.2"
# Lazy initialization
once_cell = "1.17.1"
# multipart/form-data support
multipart = { version = "0.18.0", features = ["server"], default-features = false }
# Numerical libraries
num-traits = "0.2.15"
num-derive = "0.3.3"
# WebSockets library
ws = { version = "0.11.1", package = "parity-ws" }
# Web framework
rocket = { version = "0.5.0-rc.3", features = ["tls", "json"], default-features = false }
# MessagePack library
rmpv = "1.0.0"
# WebSockets libraries
tokio-tungstenite = "0.18.0"
rmpv = "1.0.0" # MessagePack library
# Concurrent hashmap implementation
chashmap = "2.2.2"
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.4.0"
# Async futures
futures = "0.3.28"
tokio = { version = "1.27.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.132", features = ["derive"] }
serde_json = "1.0.73"
# Logging
log = "0.4.14"
fern = { version = "0.6.0", features = ["syslog-4"] }
serde = { version = "1.0.159", features = ["derive"] }
serde_json = "1.0.95"
# A safe, extensible ORM and Query builder
diesel = { version = "1.4.8", features = [ "chrono", "r2d2"] }
diesel_migrations = "1.4.0"
diesel = { version = "2.0.3", features = ["chrono", "r2d2"] }
diesel_migrations = "2.0.0"
diesel_logger = { version = "0.2.0", optional = true }
# Bundled SQLite
libsqlite3-sys = { version = "0.22.2", features = ["bundled"], optional = true }
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.25.2", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.8.4"
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.16.20"
# UUID generation
uuid = { version = "0.8.2", features = ["v4"] }
uuid = { version = "1.3.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.19", features = ["serde"] }
chrono-tz = "0.6.1"
time = "0.2.27"
chrono = { version = "0.4.24", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.1"
time = "0.3.20"
# Job scheduler
job_scheduler = "1.2.1"
job_scheduler_ng = "2.0.4"
# TOTP library
totp-lite = "1.0.3"
# Data encoding library
data-encoding = "2.3.2"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.3.3"
# JWT library
jsonwebtoken = "7.2.0"
jsonwebtoken = "8.3.0"
# U2F library
u2f = "0.2.0"
webauthn-rs = "0.3.1"
# TOTP library
totp-lite = "2.0.0"
# Yubico Library
yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
# A `dotenv` implementation for Rust
dotenv = { version = "0.15.0", default-features = false }
# WebAuthn libraries
webauthn-rs = "0.3.2"
# Lazy initialization
once_cell = "1.9.0"
# Numerical libraries
num-traits = "0.2.14"
num-derive = "0.3.3"
# Handling of URL's for WebAuthn and favicons
url = "2.3.1"
# Email libraries
tracing = { version = "0.1.29", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
lettre = { version = "0.10.0-rc.4", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
lettre = { version = "0.10.3", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.2.0" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
# Template library
handlebars = { version = "4.1.6", features = ["dir_source"] }
# HTML Template library
handlebars = { version = "4.3.6", features = ["dir_source"] }
# For favicon extraction from main website
html5ever = "0.25.1"
markup5ever_rcdom = "0.1.0"
regex = { version = "1.5.4", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.1.1"
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.16", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
# Used by U2F, JWT and Postgres
openssl = "0.10.38"
# Favicon extraction libraries
html5gum = "0.5.2"
regex = { version = "1.7.3", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.2.0"
bytes = "1.4.0"
# URL encoding library
percent-encoding = "2.1.0"
# Punycode conversion
idna = "0.2.3"
# Cache function results (Used for version check and favicon fetching)
cached = "0.42.0"
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.2"
cookie_store = "0.19.0"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.48"
# CLI argument parsing
pico-args = "0.4.2"
# Logging panics to logfile instead stderr only
backtrace = "0.3.63"
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.6"
governor = "0.3.2"
paste = "1.0.12"
governor = "0.5.1"
[patch.crates-io]
# Use newest ring
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
# Check client versions for specific features.
semver = "1.0.17"
# The maintainer of the `job_scheduler` crate doesn't seem to have responded
# to any issues or PRs for almost a year (as of April 2021). This hopefully
# temporary fork updates Cargo.toml to use more up-to-date dependencies.
# In particular, `cron` has since implemented parsing of some common syntax
# that wasn't previously supported (https://github.com/zslayton/cron/pull/64).
job_scheduler = { git = 'https://github.com/jjlin/job_scheduler', rev = 'ee023418dbba2bfe1e30a5fd7d937f9e33739806' }
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "=0.1.34", features = ["secure"], default-features = false, optional = true }
libmimalloc-sys = "=0.1.30"
which = "4.4.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.0"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.2.0"
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations
[profile.release]
strip = "debuginfo"
lto = "thin"
# Always build argon2 using opt-level 3
# This is a huge speed improvement during testing
[profile.dev.package.argon2]
opt-level = 3
# A little bit of a speedup
[profile.dev]
split-debuginfo = "unpacked"

View File

@@ -1,5 +1,5 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,17 +7,15 @@
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

View File

@@ -3,16 +3,18 @@
📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
---
[![Build](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml/badge.svg)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml)
[![ghcr.io](https://img.shields.io/badge/ghcr.io-download-blue)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/Quay.io-download-blue)](https://quay.io/repository/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/master/LICENSE.txt)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
The image is based on the [Rust implementation of the Bitwarden API](https://github.com/dani-garcia/vaultwarden).
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.**
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor Bitwarden, Inc.**
#### ⚠️**IMPORTANT**⚠️: When using this server, please report any bugs or suggestions to us directly (see the bottom of this page for ways to get in touch), regardless of which client you are using (mobile, desktop, browser...). DO NOT use the official support channels.
@@ -23,12 +25,13 @@ Image is based on [Rust implementation of Bitwarden API](https://github.com/dani
A nearly complete implementation of the Bitwarden API is provided, including:
* Organizations support
* Attachments
* Attachments and Send
* Vault API support
* Serving the static files for Vault interface
* Website icons API
* Authenticator and U2F support
* YubiKey and Duo support
* Emergency Access
## Installation
Pull the docker image and mount a volume from the host for persistent storage:
@@ -39,7 +42,7 @@ docker run -d --name vaultwarden -v /vw-data/:/data/ -p 80:80 vaultwarden/server
```
This will preserve any persistent data under /vw-data/; you can adapt the path to whatever suits you.
**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS.
**IMPORTANT**: Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In that case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
This can be configured in [vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
@@ -49,41 +52,43 @@ If you have an available domain name, you can get HTTPS certificates with [Let's
See the [vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki) for more information on how to configure and run the vaultwarden server.
## Get in touch
To ask a question, offer suggestions or new features or to get help configuring or installing the software, please [use the forum](https://vaultwarden.discourse.group/).
To ask a question, suggest new features, or get help configuring or installing the software, please use [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions) or [the forum](https://vaultwarden.discourse.group/).
If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure there aren't any similar issues open, though!
If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure you are on the latest version and there aren't any similar issues open, though!
If you prefer to chat, we're usually hanging around at [#vaultwarden:matrix.org](https://matrix.to/#/#vaultwarden:matrix.org) room on Matrix. Feel free to join us!
### Sponsors
Thanks for your contribution to the project!
<!--
<table>
<tr>
<td align="center">
<a href="https://github.com/netdadaltd">
<img src="https://avatars.githubusercontent.com/u/77323954?s=75&v=4" width="75px;" alt="netdadaltd"/>
<a href="https://github.com/username">
<img src="https://avatars.githubusercontent.com/u/725423?s=75&v=4" width="75px;" alt="username"/>
<br />
<sub><b>netDada Ltd.</b></sub>
<sub><b>username</b></sub>
</a>
</td>
</tr>
</table>
<br/>
-->
<table>
<tr>
<td align="center">
<a href="https://github.com/Gyarbij" style="width: 75px">
<sub><b>Chono N</b></sub>
<a href="https://github.com/themightychris" style="width: 75px">
<sub><b>Chris Alfano</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/themightychris">
<sub><b>Chris Alfano</b></sub>
<a href="https://github.com/numberly" style="width: 75px">
<sub><b>Numberly</b></sub>
</a>
</td>
</tr>

View File

@@ -1,2 +0,0 @@
[global.limits]
json = 10485760 # 10 MiB

View File

@@ -9,20 +9,25 @@ fn main() {
println!("cargo:rustc-cfg=mysql");
#[cfg(feature = "postgresql")]
println!("cargo:rustc-cfg=postgresql");
#[cfg(feature = "query_logger")]
println!("cargo:rustc-cfg=query_logger");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!(
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
#[cfg(all(not(debug_assertions), feature = "query_logger"))]
compile_error!("Query Logging is only allowed during development, it is not intented for production usage!");
// Support $BWRS_VERSION for legacy compatibility, but default to $VW_VERSION.
// If neither exist, read from git.
let maybe_vaultwarden_version =
env::var("VW_VERSION").or_else(|_| env::var("BWRS_VERSION")).or_else(|_| version_from_git_info());
if let Ok(version) = maybe_vaultwarden_version {
println!("cargo:rustc-env=VW_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
println!("cargo:rustc-env=VW_VERSION={version}");
println!("cargo:rustc-env=CARGO_PKG_VERSION={version}");
}
}
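Most of the churn in this hunk is the move to captured identifiers in format strings (stable since Rust 1.58); both forms below print the same line:

```rust
fn main() {
    let version = "1.28.0"; // illustrative value
    println!("cargo:rustc-env=VW_VERSION={}", version);   // positional argument
    println!("cargo:rustc-env=VW_VERSION={version}");     // captured identifier
}
```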
@@ -47,29 +52,29 @@ fn version_from_git_info() -> Result<String, std::io::Error> {
// the current commit doesn't have an associated tag
let exact_tag = run(&["git", "describe", "--abbrev=0", "--tags", "--exact-match"]).ok();
if let Some(ref exact) = exact_tag {
println!("cargo:rustc-env=GIT_EXACT_TAG={}", exact);
println!("cargo:rustc-env=GIT_EXACT_TAG={exact}");
}
// The last available tag, equal to exact_tag when
// the current commit is tagged
let last_tag = run(&["git", "describe", "--abbrev=0", "--tags"])?;
println!("cargo:rustc-env=GIT_LAST_TAG={}", last_tag);
println!("cargo:rustc-env=GIT_LAST_TAG={last_tag}");
// The current branch name
let branch = run(&["git", "rev-parse", "--abbrev-ref", "HEAD"])?;
println!("cargo:rustc-env=GIT_BRANCH={}", branch);
println!("cargo:rustc-env=GIT_BRANCH={branch}");
// The current git commit hash
let rev = run(&["git", "rev-parse", "HEAD"])?;
let rev_short = rev.get(..8).unwrap_or_default();
println!("cargo:rustc-env=GIT_REV={}", rev_short);
println!("cargo:rustc-env=GIT_REV={rev_short}");
// Combined version
if let Some(exact) = exact_tag {
Ok(exact)
} else if &branch != "main" && &branch != "master" {
Ok(format!("{}-{} ({})", last_tag, rev_short, branch))
Ok(format!("{last_tag}-{rev_short} ({branch})"))
} else {
Ok(format!("{}-{}", last_tag, rev_short))
Ok(format!("{last_tag}-{rev_short}"))
}
}

View File

@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1
# The cross-built images have the build arch (`amd64`) embedded in the image
# manifest, rather than the target arch. For example:
#

View File

@@ -2,40 +2,42 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.58-buster" %}
{% set rust_version = "1.68.2" %}
{% set debian_version = "bullseye" %}
{% set alpine_version = "3.17" %}
{% set build_stage_base_image = "rust:%s-%s" % (rust_version, debian_version) %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-nightly-2022-01-23" %}
{% set runtime_stage_base_image = "alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-stable-%s" % rust_version %}
{% set runtime_stage_base_image = "alpine:%s" % alpine_version %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-nightly-2022-01-23" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-stable-%s" % rust_version %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:%s" % alpine_version %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% elif "armv6" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-nightly-2022-01-23" %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-stable-%s" % rust_version %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:%s" % alpine_version %}
{% set package_arch_target = "arm-unknown-linux-musleabi" %}
{% elif "arm64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-nightly-2022-01-23" %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:3.15" %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-stable-%s" % rust_version %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:%s" % alpine_version %}
{% set package_arch_target = "aarch64-unknown-linux-musl" %}
{% endif %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %}
{% set runtime_stage_base_image = "debian:%s-slim" % debian_version %}
{% elif "arm64" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:%s" % debian_version %}
{% set package_arch_name = "arm64" %}
{% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:%s" % debian_version %}
{% set package_arch_name = "armel" %}
{% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:%s" % debian_version %}
{% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
{% set package_cross_compiler = "arm-linux-gnueabihf" %}
@@ -50,7 +52,7 @@
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
{% if "buildx" in target_file %}
{% if "buildkit" in target_file %}
{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
{% else %}
{% set mount_rust_cache = "" %}
@@ -59,8 +61,8 @@
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_version = "2.25.1b" %}
{% set vault_image_digest = "sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba" %}
{% set vault_version = "v2023.3.0b" %}
{% set vault_image_digest = "sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
@@ -70,21 +72,19 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" vaultwarden/web-vault:v{{ vault_version }}
# $ docker pull vaultwarden/web-vault:{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" vaultwarden/web-vault:{{ vault_version }}
# [vaultwarden/web-vault@{{ vault_image_digest }}]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" vaultwarden/web-vault@{{ vault_image_digest }}
# [vaultwarden/web-vault:v{{ vault_version }}]
# [vaultwarden/web-vault:{{ vault_version }}]
#
FROM vaultwarden/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
@@ -93,39 +93,29 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
{# {% if "alpine" not in target_file and "buildx" in target_file %}
# Debian based Buildx builds can use some special apt caching to speed up building.
# By default Debian based images have some rules to keep docker builds clean, we need to remove this.
# See: https://hub.docker.com/r/docker/dockerfile
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
{% endif %} #}
# Create CARGO_HOME folder and don't download rust docs
RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
{% if "alpine" in target_file %}
ENV RUSTFLAGS='-C link-arg=-s'
{% if "armv7" in target_file %}
{#- https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html -#}
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
{% if "armv6" in target_file %}
# To be able to build the armv6 image with mimalloc we need to explicitly specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/{{ package_arch_target }}/lib/libatomic.a'
{% endif %}
{% elif "arm" in target_file %}
#
# Install required build libs for {{ package_arch_name }} architecture.
# hadolint ignore=DL3059
# Install build dependencies for the {{ package_arch_name }} architecture
RUN dpkg --add-architecture {{ package_arch_name }} \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev{{ package_arch_prefix }} \
gcc-{{ package_cross_compiler }} \
libc6-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
libmariadb3{{ package_arch_prefix }} \
libmariadb-dev{{ package_arch_prefix }} \
libmariadb-dev-compat{{ package_arch_prefix }} \
gcc-{{ package_cross_compiler }} \
libmariadb3{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libssl-dev{{ package_arch_prefix }} \
#
# Make sure cargo has the right target config
&& echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
@@ -137,16 +127,13 @@ ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}" \
OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% elif "amd64" in target_file %}
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libmariadb-dev \
libpq-dev
{% endif %}
# Creates a dummy project used to grab dependencies
@@ -163,7 +150,12 @@ RUN {{ mount_rust_cache -}} rustup target add {{ package_arch_target }}
{% endif %}
# Configure the DB ARG as late as possible to not invalidate the cached layers above
{% if "alpine" in target_file %}
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% else %}
ARG DB=sqlite,mysql,postgresql
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -180,30 +172,22 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
{% if "armv7" in target_file %}
# hadolint ignore=DL3059
RUN musl-strip target/{{ package_arch_target }}/release/vaultwarden
{% endif %}
{% endif %}
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM {{ runtime_stage_base_image }}
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
{%- if "alpine" in runtime_stage_base_image %} \
SSL_CERT_DIR=/etc/ssl/certs
{% endif %}
{% if "amd64" not in target_file %}
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
{% endif %}
@@ -211,26 +195,30 @@ RUN [ "cross-build-start" ]
RUN mkdir /data \
{% if "alpine" in runtime_stage_base_image %}
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
{% else %}
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
{% if "armv6" in target_file and "alpine" not in target_file %}
# In the Balena Bullseye images for armv6/rpi-debian the /lib/ld-linux.so.3 symlink is missing.
# It was present in the buster images, and the armv6 binary still expects its dynamic loader at that path.
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
{% endif -%}
{% if "amd64" not in target_file %}
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
{% endif %}
@@ -241,7 +229,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
{% if package_arch_target is defined %}
COPY --from=build /app/target/{{ package_arch_target }}/release/vaultwarden .
@@ -254,6 +241,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -8,8 +8,8 @@ all: $(OBJECTS)
%/Dockerfile.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx: Dockerfile.j2 render_template
%/Dockerfile.buildkit: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
%/Dockerfile.buildkit.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"

View File

@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,19 +36,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libpq-dev
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -81,29 +75,27 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
FROM debian:bullseye-slim
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
@@ -115,7 +107,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .
@@ -124,6 +115,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-nightly-2022-01-23 as build
FROM blackdex/rust-musl:x86_64-musl-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,17 +71,16 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.15
FROM alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
@@ -93,11 +88,10 @@ ENV ROCKET_ENV="staging" \
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
VOLUME /data
@@ -107,7 +101,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
@@ -116,6 +109,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,19 +36,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libpq-dev
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -81,29 +75,27 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
FROM debian:bullseye-slim
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
@@ -115,7 +107,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .
@@ -124,6 +115,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
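The Debian variants in this revision add BuildKit cache mounts around every cargo invocation. A minimal sketch of the technique, assuming BuildKit is available:

# syntax=docker/dockerfile:1
FROM rust:1.68.2-bullseye as build
WORKDIR /app
COPY . .
# The registry and git caches persist on the build host across builds,
# but never end up inside an image layer.
RUN --mount=type=cache,target=/root/.cargo/registry \
    --mount=type=cache,target=/root/.cargo/git \
    cargo build --release
# $ DOCKER_BUILDKIT=1 docker build -t vaultwarden .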


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-nightly-2022-01-23 as build
FROM blackdex/rust-musl:x86_64-musl-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add x86_64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,17 +71,16 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.15
FROM alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
@@ -93,11 +88,10 @@ ENV ROCKET_ENV="staging" \
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
VOLUME /data
@@ -107,7 +101,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
@@ -116,6 +109,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
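The "dummy project" comments refer to a standard cargo layer-caching trick; a condensed sketch (paths as in the diff, feature flags omitted):

FROM blackdex/rust-musl:x86_64-musl-stable-1.68.2 as build
# A throwaway crate lets the whole dependency tree compile and cache first
RUN USER=root cargo new --bin /app
WORKDIR /app
COPY ./Cargo.toml ./Cargo.lock ./
RUN rustup target add x86_64-unknown-linux-musl \
    && cargo build --release --target=x86_64-unknown-linux-musl
# Only now copy the real sources, so editing them invalidates just these layers
COPY ./src ./src
RUN touch src/main.rs \
    && cargo build --release --target=x86_64-unknown-linux-musl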


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# hadolint ignore=DL3059
# Install build dependencies for the arm64 architecture
RUN dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
gcc-aarch64-linux-gnu \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
libmariadb3:arm64 \
libpq-dev:arm64 \
libpq5:arm64 \
libssl-dev:arm64 \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster
FROM balenalib/aarch64-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +128,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
@@ -148,6 +136,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
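The hunk above shows only the first line echoed into ${CARGO_HOME}/config; the follow-up lines are elided by the diff. A typical completion (the linker value is an assumption, based on the gcc-aarch64-linux-gnu package installed above):

# (assumed completion; the actual lines are elided in the hunk above)
RUN echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config"
# The CC_aarch64_unknown_linux_gnu variable set further down serves the same
# purpose for the C dependencies built via cc-rs.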


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-nightly-2022-01-23 as build
FROM blackdex/rust-musl:aarch64-musl-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN rustup target add aarch64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,33 +71,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
FROM balenalib/aarch64-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -111,7 +103,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
@@ -120,6 +111,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
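DB is an ordinary build argument whose value is passed straight to cargo as a feature list, so a slimmer image can be produced without editing the file; declaring the ARG this late keeps every cached layer above it valid when the value changes:

ARG DB=sqlite,mysql,postgresql,enable_mimalloc
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
# $ docker build --build-arg DB=sqlite -t vaultwarden:sqlite-only .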


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# hadolint ignore=DL3059
# Install build dependencies for the arm64 architecture
RUN dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
gcc-aarch64-linux-gnu \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
libmariadb3:arm64 \
libpq-dev:arm64 \
libpq5:arm64 \
libssl-dev:arm64 \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster
FROM balenalib/aarch64-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +128,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
@@ -148,6 +136,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
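The Debian arm builds rely on dpkg's multiarch support: adding the foreign architecture makes the ":arm64"-suffixed dev packages installable alongside the host toolchain. Reduced sketch:

RUN dpkg --add-architecture arm64 \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
        gcc-aarch64-linux-gnu \
        libmariadb-dev:arm64 \
        libpq-dev:arm64 \
        libssl-dev:arm64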


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-nightly-2022-01-23 as build
FROM blackdex/rust-musl:aarch64-musl-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,33 +71,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
FROM balenalib/aarch64-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -111,7 +103,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
@@ -120,6 +111,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
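/healthcheck.sh itself is not part of this compare. A hypothetical minimal body, assuming the usual /alive route and the curl binary installed above:

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Hypothetical /healthcheck.sh:
#   #!/bin/sh
#   curl --fail --silent "http://localhost:${ROCKET_PORT:-80}/alive" || exit 1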


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armel architecture
RUN dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
gcc-arm-linux-gnueabi \
libc6-dev:armel \
libpq5:armel \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
libmariadb3:armel \
libpq-dev:armel \
libpq5:armel \
libssl-dev:armel \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,35 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster
FROM balenalib/rpi-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
# The Balena Bullseye images for armv6/rpi-debian are missing a symlink that
# was present in the Buster images; the binary still needs it to find its loader.
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +132,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
@@ -148,6 +140,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
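The symlink fix works because the armel binary hard-codes /lib/ld-linux.so.3 as its ELF interpreter, while the Bullseye rpi image only ships the armhf-named loader. Illustrative check (output abridged):

# $ readelf -l vaultwarden | grep interpreter
#     [Requesting program interpreter: /lib/ld-linux.so.3]
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3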


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-nightly-2022-01-23 as build
FROM blackdex/rust-musl:arm-musleabi-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,12 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# To build the armv6 image with mimalloc we need to explicitly specify the location of the libatomic.a file
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +55,8 @@ COPY ./build.rs ./build.rs
RUN rustup target add arm-unknown-linux-musleabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,33 +73,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
FROM balenalib/rpi-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -111,7 +105,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
@@ -120,6 +113,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
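armv6 lacks native 64-bit atomics, so linking mimalloc drags in libatomic; the file handles that by replacing the global RUSTFLAGS. A target-scoped alternative (an assumption that nothing else in the build relies on the global variable):

# assumption: no other step depends on the global RUSTFLAGS value
ENV CARGO_TARGET_ARM_UNKNOWN_LINUX_MUSLEABI_RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'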


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armel architecture
RUN dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
gcc-arm-linux-gnueabi \
libc6-dev:armel \
libpq5:armel \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
libmariadb3:armel \
libpq-dev:armel \
libpq5:armel \
libssl-dev:armel \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,35 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster
FROM balenalib/rpi-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
# The Balena Bullseye images for armv6/rpi-debian are missing a symlink that
# was present in the Buster images; the binary still needs it to find its loader.
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +132,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
@@ -148,6 +140,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
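All of these images keep the same two-line startup; dumb-init exists purely for process hygiene:

ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
# dumb-init runs as PID 1, forwards signals to the start.sh process tree and
# reaps orphaned children, so `docker stop` shuts the server down cleanly.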


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-nightly-2022-01-23 as build
FROM blackdex/rust-musl:arm-musleabi-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,12 +36,12 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# To build the armv6 image with mimalloc we need to explicitly specify the location of the libatomic.a file
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -58,7 +55,8 @@ COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-musleabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -75,33 +73,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
FROM balenalib/rpi-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -111,7 +105,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
@@ -120,6 +113,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
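The cross-build-start / cross-build-end pair comes from the balenalib base images: between the two markers, RUN steps execute under QEMU user emulation, so ARM userland commands work on an amd64 builder. Sketch:

RUN [ "cross-build-start" ]
# Anything here runs emulated as ARM (apk, ln, etc.)
RUN mkdir /data \
    && apk add --no-cache ca-certificates openssl tzdata
RUN [ "cross-build-end" ]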


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armhf architecture
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
gcc-arm-linux-gnueabihf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
libmariadb3:armhf \
libpq-dev:armhf \
libpq5:armhf \
libssl-dev:armhf \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster
FROM balenalib/armv7hf-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +128,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
@@ -148,6 +136,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
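ROCKET_ENV is the old Rocket 0.4 name; newer Rocket versions read ROCKET_PROFILE instead, which is why every file swaps the variable. Since these are plain environment variables, the baked-in defaults can be overridden per container:

ENV ROCKET_PROFILE="release" \
    ROCKET_ADDRESS=0.0.0.0 \
    ROCKET_PORT=80
# $ docker run -d -e ROCKET_PORT=8080 -p 8080:8080 vaultwarden/server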


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-nightly-2022-01-23 as build
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,13 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -59,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN rustup target add armv7-unknown-linux-musleabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -76,35 +71,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# hadolint ignore=DL3059
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.15
FROM balenalib/armv7hf-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -114,7 +103,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
@@ -123,6 +111,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
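The armv7 musl variant is the one file that strips the finished binary instead of stripping at link time; the two approaches side by side:

# Link-time stripping, as the older files did via RUSTFLAGS:
ENV RUSTFLAGS='-C link-arg=-s'
# Post-build stripping of the musl binary, as done above:
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden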


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM rust:1.58-buster as build
FROM rust:1.68.2-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,26 +36,23 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armhf architecture
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
gcc-arm-linux-gnueabihf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
libmariadb3:armhf \
libpq-dev:armhf \
libpq5:armhf \
libssl-dev:armhf \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
@@ -71,7 +65,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,35 +94,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster
FROM balenalib/armv7hf-debian:bullseye
ENV ROCKET_ENV="staging" \
ROCKET_PORT=80 \
ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -139,7 +128,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
@@ -148,6 +136,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
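VOLUME /data marks where all persistent state lives, so a typical run pins it to a named volume (image name as published on Docker Hub):

VOLUME /data
# $ docker run -d --name vaultwarden -v vw-data:/data -p 80:80 vaultwarden/server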


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.25.1b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.1b
# [vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba]
# $ docker pull vaultwarden/web-vault:v2023.3.0b
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2023.3.0b
# [vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba
# [vaultwarden/web-vault:v2.25.1b]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee
# [vaultwarden/web-vault:v2023.3.0b]
#
FROM vaultwarden/web-vault@sha256:9b82318d553d72f091e8755f5aff80eed495f90bbe5b0703522953480f5c2fba as vault
FROM vaultwarden/web-vault@sha256:aa6ba791911a815ea570ec2ddc59992481c6ba8fbb65eed4f7074b463430d3ee as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-nightly-2022-01-23 as build
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.68.2 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -39,13 +36,10 @@ ENV DEBIAN_FRONTEND=noninteractive \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -59,7 +53,8 @@ COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-musleabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -76,35 +71,29 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# hadolint ignore=DL3059
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.15
FROM balenalib/armv7hf-alpine:3.17
ENV ROCKET_ENV="staging" \
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
ROCKET_WORKERS=10 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
dumb-init \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
@@ -114,7 +103,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
@@ -123,6 +111,4 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -2,8 +2,8 @@
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
: ${DATA_FOLDER:="data"}
: ${ROCKET_PORT:="80"}
: "${DATA_FOLDER:="data"}"
: "${ROCKET_PORT:="80"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
@@ -45,9 +45,13 @@ if [ -r "${CONFIG_FILE}" ]; then
fi
fi
addr="${ROCKET_ADDRESS}"
if [ -z "${addr}" ] || [ "${addr}" = '0.0.0.0' ] || [ "${addr}" = '::' ]; then
addr='localhost'
fi
base_path="$(get_base_path "${DOMAIN}")"
if [ -n "${ROCKET_TLS}" ]; then
s='s'
fi
curl --insecure --fail --silent --show-error \
"http${s}://localhost:${ROCKET_PORT}${base_path}/alive" || exit 1
"http${s}://${addr}:${ROCKET_PORT}${base_path}/alive" || exit 1

View File

@@ -9,15 +9,15 @@ fi
if [ -d /etc/vaultwarden.d ]; then
for f in /etc/vaultwarden.d/*.sh; do
if [ -r $f ]; then
. $f
if [ -r "${f}" ]; then
. "${f}"
fi
done
elif [ -d /etc/bitwarden_rs.d ]; then
echo "### You are using the old /etc/bitwarden_rs.d script directory, please migrate to /etc/vaultwarden.d ###"
for f in /etc/bitwarden_rs.d/*.sh; do
if [ -r $f ]; then
. $f
if [ -r "${f}" ]; then
. "${f}"
fi
done
fi

View File

@@ -1,3 +1,5 @@
#!/usr/bin/env bash
# The default Debian-based images support these arches for all database backends.
arches=(
amd64
@@ -5,7 +7,9 @@ arches=(
armv7
arm64
)
export arches
if [[ "${DOCKER_TAG}" == *alpine ]]; then
distro_suffix=.alpine
fi
export distro_suffix

View File

@@ -1,7 +1,8 @@
#!/bin/bash
#!/usr/bin/env bash
echo ">>> Building images..."
# shellcheck source=arches.sh
source ./hooks/arches.sh
if [[ -z "${SOURCE_COMMIT}" ]]; then
@@ -23,10 +24,10 @@ LABELS=(
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
org.opencontainers.image.created="$(date --utc --iso-8601=seconds)"
org.opencontainers.image.documentation="https://github.com/dani-garcia/vaultwarden/wiki"
org.opencontainers.image.licenses="GPL-3.0-only"
org.opencontainers.image.licenses="AGPL-3.0-only"
org.opencontainers.image.revision="${SOURCE_COMMIT}"
org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}"
org.opencontainers.image.url="https://hub.docker.com/r/${DOCKER_REPO#*/}"
org.opencontainers.image.url="https://github.com/dani-garcia/vaultwarden"
org.opencontainers.image.version="${SOURCE_VERSION}"
)
LABEL_ARGS=()
@@ -34,9 +35,9 @@ for label in "${LABELS[@]}"; do
LABEL_ARGS+=(--label "${label}")
done
# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template
# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildkit as template
if [[ -n "${DOCKER_BUILDKIT}" ]]; then
buildx_suffix=.buildx
buildkit_suffix=.buildkit
fi
set -ex
@@ -45,6 +46,6 @@ for arch in "${arches[@]}"; do
docker build \
"${LABEL_ARGS[@]}" \
-t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
-f "docker/${arch}/Dockerfile${buildkit_suffix}${distro_suffix}" \
.
done

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -ex

View File

@@ -1,5 +1,6 @@
#!/bin/bash
#!/usr/bin/env bash
# shellcheck source=arches.sh
source ./hooks/arches.sh
export DOCKER_CLI_EXPERIMENTAL=enabled
@@ -41,7 +42,7 @@ LOCAL_REPO="${LOCAL_REGISTRY}/${REPO}"
echo ">>> Pushing images to local registry..."
for arch in ${arches[@]}; do
for arch in "${arches[@]}"; do
docker_image="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
local_image="${LOCAL_REPO}:${DOCKER_TAG}-${arch}"
docker tag "${docker_image}" "${local_image}"
@@ -71,9 +72,9 @@ tags=("${DOCKER_REPO}:${DOCKER_TAG}")
# to make it easier for users to track the latest release.
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
if [[ "${DOCKER_TAG}" == *alpine ]]; then
tags+=(${DOCKER_REPO}:alpine)
tags+=("${DOCKER_REPO}:alpine")
else
tags+=(${DOCKER_REPO}:latest)
tags+=("${DOCKER_REPO}:latest")
fi
fi
@@ -91,10 +92,10 @@ declare -A arch_to_platform=(
[arm64]="linux/arm64"
)
platforms=()
for arch in ${arches[@]}; do
for arch in "${arches[@]}"; do
platforms+=("${arch_to_platform[$arch]}")
done
platforms="$(join "," "${platforms[@]}")"
platform="$(join "," "${platforms[@]}")"
# Run the build, pushing the resulting images and multi-arch manifest list to
# Docker Hub. The Dockerfile is read from stdin to avoid sending any build
@@ -104,46 +105,7 @@ docker buildx build \
--network host \
--build-arg LOCAL_REPO="${LOCAL_REPO}" \
--build-arg DOCKER_TAG="${DOCKER_TAG}" \
--platform "${platforms}" \
--platform "${platform}" \
"${tag_args[@]}" \
--push \
- < ./docker/Dockerfile.buildx
# Add an extra arch-specific tag for `arm32v6`; Docker can't seem to properly
# auto-select that image on ARMv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
#
# Note that we use `arm32v6` instead of `armv6` to be consistent with the
# existing vaultwarden tags, which adhere to the naming conventions of the
# Docker per-architecture repos (e.g., https://hub.docker.com/u/arm32v6).
# Unfortunately, these per-arch repo names aren't always consistent with the
# corresponding platform (OS/arch/variant) IDs, particularly in the case of
# 32-bit ARM arches (e.g., `linux/arm/v6` is used, not `linux/arm32/v6`).
#
# TODO: It looks like this issue should be fixed starting in Docker 20.10.0,
# so this step can be removed once fixed versions are in wider distribution.
#
# Tags:
#
# testing => testing-arm32v6
# testing-alpine => <ignored>
# x.y.z => x.y.z-arm32v6, latest-arm32v6
# x.y.z-alpine => <ignored>
#
if [[ "${DOCKER_TAG}" != *alpine ]]; then
image="${DOCKER_REPO}":"${DOCKER_TAG}"
# Fetch the multi-arch manifest list and find the digest of the armv6 image.
filter='.manifests|.[]|select(.platform.architecture=="arm" and .platform.variant=="v6")|.digest'
digest="$(docker manifest inspect "${image}" | jq -r "${filter}")"
# Pull the armv6 image by digest, retag it, and repush it.
docker pull "${DOCKER_REPO}"@"${digest}"
docker tag "${DOCKER_REPO}"@"${digest}" "${image}"-arm32v6
docker push "${image}"-arm32v6
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
docker tag "${image}"-arm32v6 "${DOCKER_REPO}:latest"-arm32v6
docker push "${DOCKER_REPO}:latest"-arm32v6
fi
fi

View File

@@ -0,0 +1,4 @@
-- First remove the previous primary key
ALTER TABLE devices DROP PRIMARY KEY;
-- Add a new combined one
ALTER TABLE devices ADD PRIMARY KEY (uuid, user_uuid);

View File

@@ -0,0 +1,3 @@
DROP TABLE `groups`;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE `groups` (
uuid CHAR(36) NOT NULL PRIMARY KEY,
organizations_uuid VARCHAR(40) NOT NULL REFERENCES organizations (uuid),
name VARCHAR(100) NOT NULL,
access_all BOOLEAN NOT NULL,
external_id VARCHAR(300) NULL,
creation_date DATETIME NOT NULL,
revision_date DATETIME NOT NULL
);
CREATE TABLE groups_users (
groups_uuid CHAR(36) NOT NULL REFERENCES `groups` (uuid),
users_organizations_uuid VARCHAR(36) NOT NULL REFERENCES users_organizations (uuid),
UNIQUE (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid VARCHAR(40) NOT NULL REFERENCES collections (uuid),
groups_uuid CHAR(36) NOT NULL REFERENCES `groups` (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
UNIQUE (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid CHAR(36) NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid CHAR(36),
org_uuid CHAR(36),
cipher_uuid CHAR(36),
collection_uuid CHAR(36),
group_uuid CHAR(36),
org_user_uuid CHAR(36),
act_user_uuid CHAR(36),
device_type INTEGER,
ip_address TEXT,
event_date DATETIME NOT NULL,
policy_uuid CHAR(36),
provider_uuid CHAR(36),
provider_user_uuid CHAR(36),
provider_org_uuid CHAR(36),
UNIQUE (uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color VARCHAR(7);

View File

@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;

View File

@@ -0,0 +1,4 @@
-- First remove the previous primary key
ALTER TABLE devices DROP CONSTRAINT devices_pkey;
-- Add a new combined one
ALTER TABLE devices ADD PRIMARY KEY (uuid, user_uuid);

View File

@@ -0,0 +1,3 @@
DROP TABLE groups;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE groups (
uuid CHAR(36) NOT NULL PRIMARY KEY,
organizations_uuid VARCHAR(40) NOT NULL REFERENCES organizations (uuid),
name VARCHAR(100) NOT NULL,
access_all BOOLEAN NOT NULL,
external_id VARCHAR(300) NULL,
creation_date TIMESTAMP NOT NULL,
revision_date TIMESTAMP NOT NULL
);
CREATE TABLE groups_users (
groups_uuid CHAR(36) NOT NULL REFERENCES groups (uuid),
users_organizations_uuid VARCHAR(36) NOT NULL REFERENCES users_organizations (uuid),
PRIMARY KEY (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid VARCHAR(40) NOT NULL REFERENCES collections (uuid),
groups_uuid CHAR(36) NOT NULL REFERENCES groups (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
PRIMARY KEY (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid CHAR(36) NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid CHAR(36),
org_uuid CHAR(36),
cipher_uuid CHAR(36),
collection_uuid CHAR(36),
group_uuid CHAR(36),
org_user_uuid CHAR(36),
act_user_uuid CHAR(36),
device_type INTEGER,
ip_address TEXT,
event_date TIMESTAMP NOT NULL,
policy_uuid CHAR(36),
provider_uuid CHAR(36),
provider_user_uuid CHAR(36),
provider_org_uuid CHAR(36),
UNIQUE (uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color TEXT;

View File

@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;

View File

@@ -0,0 +1,23 @@
-- Create new devices table with primary keys on both uuid and user_uuid
CREATE TABLE devices_new (
uuid TEXT NOT NULL,
created_at DATETIME NOT NULL,
updated_at DATETIME NOT NULL,
user_uuid TEXT NOT NULL,
name TEXT NOT NULL,
atype INTEGER NOT NULL,
push_token TEXT,
refresh_token TEXT NOT NULL,
twofactor_remember TEXT,
PRIMARY KEY(uuid, user_uuid),
FOREIGN KEY(user_uuid) REFERENCES users(uuid)
);
-- Transfer current data to new table
INSERT INTO devices_new SELECT * FROM devices;
-- Drop the old table
DROP TABLE devices;
-- Rename the new table to the original name
ALTER TABLE devices_new RENAME TO devices;

View File

@@ -0,0 +1,3 @@
DROP TABLE groups;
DROP TABLE groups_users;
DROP TABLE collections_groups;

View File

@@ -0,0 +1,23 @@
CREATE TABLE groups (
uuid TEXT NOT NULL PRIMARY KEY,
organizations_uuid TEXT NOT NULL REFERENCES organizations (uuid),
name TEXT NOT NULL,
access_all BOOLEAN NOT NULL,
external_id TEXT NULL,
creation_date TIMESTAMP NOT NULL,
revision_date TIMESTAMP NOT NULL
);
CREATE TABLE groups_users (
groups_uuid TEXT NOT NULL REFERENCES groups (uuid),
users_organizations_uuid TEXT NOT NULL REFERENCES users_organizations (uuid),
UNIQUE (groups_uuid, users_organizations_uuid)
);
CREATE TABLE collections_groups (
collections_uuid TEXT NOT NULL REFERENCES collections (uuid),
groups_uuid TEXT NOT NULL REFERENCES groups (uuid),
read_only BOOLEAN NOT NULL,
hide_passwords BOOLEAN NOT NULL,
UNIQUE (collections_uuid, groups_uuid)
);

View File

@@ -0,0 +1 @@
DROP TABLE event;

View File

@@ -0,0 +1,19 @@
CREATE TABLE event (
uuid TEXT NOT NULL PRIMARY KEY,
event_type INTEGER NOT NULL,
user_uuid TEXT,
org_uuid TEXT,
cipher_uuid TEXT,
collection_uuid TEXT,
group_uuid TEXT,
org_user_uuid TEXT,
act_user_uuid TEXT,
device_type INTEGER,
ip_address TEXT,
event_date DATETIME NOT NULL,
policy_uuid TEXT,
provider_uuid TEXT,
provider_user_uuid TEXT,
provider_org_uuid TEXT,
UNIQUE (uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color TEXT;

View File

@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;

resources/404.svg Normal file
View File

@@ -0,0 +1,93 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="500"
height="222"
viewBox="0 0 500 222"
version="1.1"
id="svg5"
xml:space="preserve"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14, custom)"
sodipodi:docname="404.svg"
inkscape:export-filename="404.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="px"
showgrid="false"
inkscape:zoom="1.3791767"
inkscape:cx="284.59007"
inkscape:cy="214.25826"
inkscape:window-width="1916"
inkscape:window-height="1038"
inkscape:window-x="0"
inkscape:window-y="18"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
showguides="false" /><defs
id="defs2"><mask
id="holes"><rect
x="-60"
y="-60"
width="120"
height="120"
fill="#ffffff"
id="rect3296" /><circle
id="hole"
cy="-40"
r="3"
cx="0" /><use
transform="rotate(72)"
xlink:href="#hole"
id="use3299" /><use
transform="rotate(144)"
xlink:href="#hole"
id="use3301" /><use
transform="rotate(-144)"
xlink:href="#hole"
id="use3303" /><use
transform="rotate(-72)"
xlink:href="#hole"
id="use3305" /></mask></defs><g
inkscape:label="Ebene 1"
inkscape:groupmode="layer"
id="layer1"><rect
style="fill:none;fill-opacity:0.5;stroke:none;stroke-width:0.74;stroke-opacity:1"
id="rect681"
width="666"
height="222"
x="0"
y="0" /><text
xml:space="preserve"
style="font-size:128px;line-height:1.25;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7;stroke-width:1"
x="249.9375"
y="134.8125"
id="text3425"><tspan
id="tspan3423"
x="249.9375"
y="134.8125"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:128px;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7;stroke-width:1"
sodipodi:role="line">404</tspan></text><text
xml:space="preserve"
style="font-size:26.6667px;line-height:1.25;font-family:'Open Sans';-inkscape-font-specification:'Open Sans';text-align:center;text-anchor:middle"
x="249.04297"
y="194.68582"
id="text4067"><tspan
sodipodi:role="line"
id="tspan4065"
x="249.04295"
y="194.68582"
style="font-size:26.6667px;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:0.7">Return to the web vault?</tspan></text></g></svg>

(Image assets: the new resources/404.svg is 3.3 KiB. Diffs for four existing SVG assets are suppressed because their lines are too long; their sizes changed from 8.7 to 5.5 KiB, 8.4 to 5.2 KiB, 17 to 6.5 KiB, and 12 to 6.5 KiB.)

View File

@@ -1 +1 @@
nightly-2022-01-23
1.68.2

View File

@@ -1,7 +1,4 @@
version = "Two"
edition = "2018"
edition = "2021"
max_width = 120
newline_style = "Unix"
use_small_heuristics = "Off"
struct_lit_single_line = false
overflow_delimited_expr = true

View File

@@ -3,16 +3,17 @@ use serde::de::DeserializeOwned;
use serde_json::Value;
use std::env;
use rocket::serde::json::Json;
use rocket::{
http::{Cookie, Cookies, SameSite, Status},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
response::{content::Html, Flash, Redirect},
Route,
form::Form,
http::{Cookie, CookieJar, MediaType, SameSite, Status},
request::{FromRequest, Outcome, Request},
response::{content::RawHtml as Html, Redirect},
Catcher, Route,
};
use rocket_contrib::json::Json;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
api::{core::log_event, ApiResult, EmptyResult, JsonResult, Notify, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
@@ -30,9 +31,9 @@ pub fn routes() -> Vec<Route> {
}
routes![
admin_login,
get_users_json,
get_user_json,
get_user_by_mail_json,
post_admin_login,
admin_page,
invite_user,
@@ -52,10 +53,19 @@ pub fn routes() -> Vec<Route> {
organizations_overview,
delete_organization,
diagnostics,
get_diagnostics_config
get_diagnostics_config,
resend_user_invite,
]
}
pub fn catchers() -> Vec<Catcher> {
if !CONFIG.disable_admin_token() && !CONFIG.is_admin_token_set() {
catchers![]
} else {
catchers![admin_login]
}
}
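The new catchers() hook lets Rocket hand a 401 from the admin guard back to the login page, except when an admin token is required but unset (in which case the admin page is disabled and no catcher is registered). As a hedged sketch, wiring routes and catchers together in Rocket 0.5 typically looks like the following; the launch code is an assumption for illustration, not part of this diff:
// Illustrative Rocket 0.5 wiring for the routes()/catchers() pair above.
// The actual mount point in vaultwarden may differ (e.g. a domain subpath).
#[rocket::launch]
fn rocket() -> _ {
    rocket::build()
        .mount("/admin", routes())
        .register("/admin", catchers())
}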
static DB_TYPE: Lazy<&str> = Lazy::new(|| {
DbConnType::from_url(&CONFIG.database_url())
.map(|t| match t {
@@ -76,30 +86,24 @@ fn admin_disabled() -> &'static str {
const COOKIE_NAME: &str = "VW_ADMIN";
const ADMIN_PATH: &str = "/admin";
const DT_FMT: &str = "%Y-%m-%d %H:%M:%S %Z";
const BASE_TEMPLATE: &str = "admin/base";
const ACTING_ADMIN_USER: &str = "vaultwarden-admin-00000-000000000000";
fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
}
struct Referer(Option<String>);
impl<'a, 'r> FromRequest<'a, 'r> for Referer {
type Error = ();
fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
Outcome::Success(Referer(request.headers().get_one("Referer").map(str::to_string)))
}
}
#[derive(Debug)]
struct IpHeader(Option<String>);
impl<'a, 'r> FromRequest<'a, 'r> for IpHeader {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for IpHeader {
type Error = ();
fn from_request(req: &'a Request<'r>) -> Outcome<Self, Self::Error> {
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
if req.headers().get_one(&CONFIG.ip_header()).is_some() {
Outcome::Success(IpHeader(Some(CONFIG.ip_header())))
} else if req.headers().get_one("X-Client-IP").is_some() {
@@ -114,35 +118,36 @@ impl<'a, 'r> FromRequest<'a, 'r> for IpHeader {
}
}
/// Used for `Location` response headers, which must specify an absolute URI
/// (see https://tools.ietf.org/html/rfc2616#section-14.30).
fn admin_url(referer: Referer) -> String {
// If we get a referer, use that to make it work when DOMAIN is not set
if let Some(mut referer) = referer.0 {
if let Some(start_index) = referer.find(ADMIN_PATH) {
referer.truncate(start_index + ADMIN_PATH.len());
return referer;
}
}
if CONFIG.domain_set() {
// Don't use CONFIG.domain() directly, since the user may want to keep a
// trailing slash there, particularly when running under a subpath.
format!("{}{}{}", CONFIG.domain_origin(), CONFIG.domain_path(), ADMIN_PATH)
} else {
// Last case, when no referer or domain set, technically invalid but better than nothing
ADMIN_PATH.to_string()
}
fn admin_url() -> String {
format!("{}{}", CONFIG.domain_origin(), admin_path())
}
#[get("/", rank = 2)]
fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> {
#[derive(Responder)]
enum AdminResponse {
#[response(status = 200)]
Ok(ApiResult<Html<String>>),
#[response(status = 401)]
Unauthorized(ApiResult<Html<String>>),
#[response(status = 429)]
TooManyRequests(ApiResult<Html<String>>),
}
#[catch(401)]
fn admin_login(request: &Request<'_>) -> ApiResult<Html<String>> {
if request.format() == Some(&MediaType::JSON) {
err_code!("Authorization failed.", Status::Unauthorized.code);
}
let redirect = request.segments::<std::path::PathBuf>(0..).unwrap_or_default().display().to_string();
render_admin_login(None, Some(redirect))
}
fn render_admin_login(msg: Option<&str>, redirect: Option<String>) -> ApiResult<Html<String>> {
// If there is an error, show it
let msg = flash.map(|msg| format!("{}: {}", msg.name(), msg.msg()));
let msg = msg.map(|msg| format!("Error: {msg}"));
let json = json!({
"page_content": "admin/login",
"version": VERSION,
"error": msg,
"redirect": redirect,
"urlpath": CONFIG.domain_path()
});
@@ -154,25 +159,25 @@ fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> {
#[derive(FromForm)]
struct LoginForm {
token: String,
redirect: Option<String>,
}
#[post("/", data = "<data>")]
fn post_admin_login(
data: Form<LoginForm>,
mut cookies: Cookies,
ip: ClientIp,
referer: Referer,
) -> Result<Redirect, Flash<Redirect>> {
fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp) -> Result<Redirect, AdminResponse> {
let data = data.into_inner();
let redirect = data.redirect;
if crate::ratelimit::check_limit_admin(&ip.ip).is_err() {
return Err(Flash::error(Redirect::to(admin_url(referer)), "Too many requests, try again later."));
return Err(AdminResponse::TooManyRequests(render_admin_login(
Some("Too many requests, try again later."),
redirect,
)));
}
// If the token is invalid, redirect to login page
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
Err(Flash::error(Redirect::to(admin_url(referer)), "Invalid admin token, please try again."))
Err(AdminResponse::Unauthorized(render_admin_login(Some("Invalid admin token, please try again."), redirect)))
} else {
// If the token received is valid, generate JWT and save it as a cookie
let claims = generate_admin_claims();
@@ -180,19 +185,36 @@ fn post_admin_login(
let cookie = Cookie::build(COOKIE_NAME, jwt)
.path(admin_path())
.max_age(time::Duration::minutes(20))
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
cookies.add(cookie);
Ok(Redirect::to(admin_url(referer)))
if let Some(redirect) = redirect {
Ok(Redirect::to(format!("{}{}", admin_path(), redirect)))
} else {
Err(AdminResponse::Ok(render_admin_page()))
}
}
}
fn _validate_token(token: &str) -> bool {
match CONFIG.admin_token().as_ref() {
None => false,
Some(t) if t.starts_with("$argon2") => {
use argon2::password_hash::PasswordVerifier;
match argon2::password_hash::PasswordHash::new(t) {
Ok(h) => {
// NOTE: hash params from `ADMIN_TOKEN` are used instead of what is configured in the `Argon2` instance.
argon2::Argon2::default().verify_password(token.trim().as_ref(), &h).is_ok()
}
Err(e) => {
error!("The configured Argon2 PHC in `ADMIN_TOKEN` is invalid: {e}");
false
}
}
}
Some(t) => crate::crypto::ct_eq(t.trim(), token.trim()),
}
}
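_validate_token() now accepts either a plain token, compared in constant time via crate::crypto::ct_eq, or an Argon2 PHC string verified with the argon2 crate. A hedged sketch of producing such a PHC hash for a hashed ADMIN_TOKEN follows; it assumes the argon2 crate with default features, and the function name and parameters are illustrative only:
// Sketch: generate an Argon2id PHC string that _validate_token() can verify.
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
    Argon2,
};

fn hash_admin_token(token: &str) -> Result<String, argon2::password_hash::Error> {
    let salt = SaltString::generate(&mut OsRng);
    // The hash params embedded in the PHC string are the ones the
    // verification side will use, as noted in the code above.
    Ok(Argon2::default().hash_password(token.as_bytes(), &salt)?.to_string())
}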
@@ -200,34 +222,16 @@ fn _validate_token(token: &str) -> bool {
#[derive(Serialize)]
struct AdminTemplateData {
page_content: String,
version: Option<&'static str>,
page_data: Option<Value>,
config: Value,
can_backup: bool,
logged_in: bool,
urlpath: String,
}
impl AdminTemplateData {
fn new() -> Self {
Self {
page_content: String::from("admin/settings"),
version: VERSION,
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
page_data: None,
}
}
fn with_data(page_content: &str, page_data: Value) -> Self {
fn new(page_content: &str, page_data: Value) -> Self {
Self {
page_content: String::from(page_content),
version: VERSION,
page_data: Some(page_data),
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
}
@@ -238,20 +242,28 @@ impl AdminTemplateData {
}
}
#[get("/", rank = 1)]
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
let text = AdminTemplateData::new().render()?;
fn render_admin_page() -> ApiResult<Html<String>> {
let settings_json = json!({
"config": CONFIG.prepare_json(),
"can_backup": *CAN_BACKUP,
});
let text = AdminTemplateData::new("admin/settings", settings_json).render()?;
Ok(Html(text))
}
#[get("/")]
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
render_admin_page()
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct InviteData {
email: String,
}
fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn) {
async fn get_user_or_404(uuid: &str, conn: &mut DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn).await {
Ok(user)
} else {
err_code!("User doesn't exist", Status::NotFound.code);
@@ -259,128 +271,187 @@ fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
}
#[post("/invite", data = "<data>")]
fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> JsonResult {
async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let data: InviteData = data.into_inner();
let email = data.email.clone();
if User::find_by_mail(&data.email, &conn).is_some() {
if User::find_by_mail(&data.email, &mut conn).await.is_some() {
err_code!("User already exists", Status::Conflict.code)
}
let mut user = User::new(email);
// TODO: After try_blocks is stabilized, this can be made more readable
// See: https://github.com/rust-lang/rust/issues/31436
(|| {
async fn _generate_invite(user: &User, conn: &mut DbConn) -> EmptyResult {
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)?;
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
} else {
let invitation = Invitation::new(user.email.clone());
invitation.save(&conn)?;
let invitation = Invitation::new(&user.email);
invitation.save(conn).await
}
}
user.save(&conn)
})()
.map_err(|e| e.with_code(Status::InternalServerError.code))?;
_generate_invite(&user, &mut conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
user.save(&mut conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
Ok(Json(user.to_json(&conn)))
Ok(Json(user.to_json(&mut conn).await))
}
#[post("/test/smtp", data = "<data>")]
fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
let data: InviteData = data.into_inner();
if CONFIG.mail_enabled() {
mail::send_test(&data.email)
mail::send_test(&data.email).await
} else {
err!("Mail is not enabled")
}
}
#[get("/logout")]
fn logout(mut cookies: Cookies, referer: Referer) -> Redirect {
cookies.remove(Cookie::named(COOKIE_NAME));
Redirect::to(admin_url(referer))
fn logout(cookies: &CookieJar<'_>) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
Redirect::to(admin_path())
}
#[get("/users")]
fn get_users_json(_token: AdminToken, conn: DbConn) -> Json<Value> {
let users = User::get_all(&conn);
let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect();
async fn get_users_json(_token: AdminToken, mut conn: DbConn) -> Json<Value> {
let users = User::get_all(&mut conn).await;
let mut users_json = Vec::with_capacity(users.len());
for u in users {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
users_json.push(usr);
}
Json(Value::Array(users_json))
}
#[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn);
let dt_fmt = "%Y-%m-%d %H:%M:%S %Z";
let users_json: Vec<Value> = users
.iter()
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, dt_fmt));
usr["last_active"] = match u.last_active(&conn) {
Some(dt) => json!(format_naive_datetime_local(&dt, dt_fmt)),
None => json!("Never"),
};
usr
})
.collect();
async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&mut conn).await;
let mut users_json = Vec::with_capacity(users.len());
for u in users {
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&mut conn).await {
Some(dt) => json!(format_naive_datetime_local(&dt, DT_FMT)),
None => json!("Never"),
};
users_json.push(usr);
}
let text = AdminTemplateData::with_data("admin/users", json!(users_json)).render()?;
let text = AdminTemplateData::new("admin/users", json!(users_json)).render()?;
Ok(Html(text))
}
#[get("/users/<uuid>")]
fn get_user_json(uuid: String, _token: AdminToken, conn: DbConn) -> JsonResult {
let user = get_user_or_404(&uuid, &conn)?;
#[get("/users/by-mail/<mail>")]
async fn get_user_by_mail_json(mail: String, _token: AdminToken, mut conn: DbConn) -> JsonResult {
if let Some(u) = User::find_by_mail(&mail, &mut conn).await {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
Ok(Json(user.to_json(&conn)))
#[get("/users/<uuid>")]
async fn get_user_json(uuid: String, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(&uuid, &mut conn).await?;
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
}
#[post("/users/<uuid>/delete")]
fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&uuid, &conn)?;
user.delete(&conn)
async fn delete_user(uuid: String, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&uuid, &mut conn).await?;
// Get the user_org records before deleting the actual user
let user_orgs = UserOrganization::find_any_state_by_user(&uuid, &mut conn).await;
let res = user.delete(&mut conn).await;
for user_org in user_orgs {
log_event(
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
user_org.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
)
.await;
}
res
}
#[post("/users/<uuid>/deauth")]
fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?;
async fn deauth_user(uuid: String, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.save(&conn)
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[post("/users/<uuid>/disable")]
fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?;
async fn disable_user(uuid: String, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.enabled = false;
user.save(&conn)
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[post("/users/<uuid>/enable")]
fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
async fn enable_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
user.enabled = true;
user.save(&conn)
user.save(&mut conn).await
}
#[post("/users/<uuid>/remove-2fa")]
fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
TwoFactor::delete_all_by_user(&user.uuid, &conn)?;
async fn remove_2fa(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
user.totp_recover = None;
user.save(&conn)
user.save(&mut conn).await
}
#[post("/users/<uuid>/invite/resend")]
async fn resend_user_invite(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(&uuid, &mut conn).await {
// TODO: replace this with a user.status check once available (PR#3397)
if !user.password_hash.is_empty() {
err_code!("User already accepted invitation", Status::BadRequest.code);
}
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
} else {
Ok(())
}
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[derive(Deserialize, Debug)]
@@ -391,13 +462,14 @@ struct UserOrgTypeData {
}
#[post("/users/org_type", data = "<data>")]
fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn) {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let mut user_to_edit =
match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
Some(new_type) => new_type as i32,
@@ -405,46 +477,70 @@ fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: D
};
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
// Removing owner permmission, check that there are at least another owner
let num_owners = UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).len();
if num_owners <= 1 {
// Removing owner permission, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&data.org_uuid, UserOrgType::Owner, &mut conn).await <= 1 {
err!("Can't change the type of the last owner")
}
}
user_to_edit.atype = new_type as i32;
user_to_edit.save(&conn)
// This check is also done at api::organizations::{accept_invite(), _confirm_invite, _activate_user(), edit_user()}, update_user_org_type
// It returns different error messages per function.
if new_type < UserOrgType::Admin {
match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, &user_to_edit.org_uuid, true, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot modify this user to this type because it has no two-step login method activated");
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot modify this user to this type because it is a member of an organization which forbids it");
}
}
}
log_event(
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
data.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
)
.await;
user_to_edit.atype = new_type;
user_to_edit.save(&mut conn).await
}
#[post("/users/update_revision")]
fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn)
async fn update_revision_users(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
User::update_all_revisions(&mut conn).await
}
#[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations
.iter()
.map(|o| {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org
})
.collect();
async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&mut conn).await;
let mut organizations_json = Vec::with_capacity(organizations.len());
for o in organizations {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &mut conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &mut conn).await);
org["collection_count"] = json!(Collection::count_by_org(&o.uuid, &mut conn).await);
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32));
organizations_json.push(org);
}
let text = AdminTemplateData::with_data("admin/organizations", json!(organizations_json)).render()?;
let text = AdminTemplateData::new("admin/organizations", json!(organizations_json)).render()?;
Ok(Html(text))
}
#[post("/organizations/<uuid>/delete")]
fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).map_res("Organization doesn't exist")?;
org.delete(&conn)
async fn delete_organization(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &mut conn).await.map_res("Organization doesn't exist")?;
org.delete(&mut conn).await
}
#[derive(Deserialize)]
@@ -462,62 +558,46 @@ struct GitCommit {
sha: String,
}
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let github_api = get_reqwest_client();
Ok(github_api.get(url).send()?.error_for_status()?.json::<T>()?)
#[derive(Deserialize)]
struct TimeApi {
year: u16,
month: u8,
day: u8,
hour: u8,
minute: u8,
seconds: u8,
}
fn has_http_access() -> bool {
async fn get_json_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let json_api = get_reqwest_client();
Ok(json_api.get(url).send().await?.error_for_status()?.json::<T>().await?)
}
async fn has_http_access() -> bool {
let http_access = get_reqwest_client();
match http_access.head("https://github.com/dani-garcia/vaultwarden").send() {
match http_access.head("https://github.com/dani-garcia/vaultwarden").send().await {
Ok(r) => r.status().is_success(),
_ => false,
}
}
#[get("/diagnostics")]
fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResult<Html<String>> {
use crate::util::read_file_string;
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let web_vault_version: WebVaultVersion =
match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => WebVaultVersion {
version: String::from("Version file missing"),
},
},
};
// Execute some environment checks
let running_within_docker = is_running_in_docker();
let has_http_access = has_http_access();
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
|| env::var_os("HTTPS_PROXY").is_some()
|| env::var_os("https_proxy").is_some();
// Check if we are able to resolve DNS entries
let dns_resolved = match ("github.com", 0).to_socket_addrs().map(|mut i| i.next()) {
Ok(Some(a)) => a.ip().to_string(),
_ => "Could not resolve domain name.".to_string(),
};
use cached::proc_macro::cached;
/// Cache this function to avoid hitting the API rate limit. GitHub only allows 60 requests per hour, and we already use 3 of them here.
/// The result is cached for 300 seconds (5 minutes), which should prevent exhausting the rate limit.
#[cached(time = 300, sync_writes = true)]
async fn get_release_info(has_http_access: bool, running_within_docker: bool) -> (String, String, String) {
// If the HTTP check failed, do not even attempt to check for new versions, since we were not able to connect to github.com anyway.
// TODO: Maybe we need to cache this using a LazyStatic or something. Github only allows 60 requests per hour, and we use 3 here already.
let (latest_release, latest_commit, latest_web_build) = if has_http_access {
if has_http_access {
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest") {
match get_json_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest")
.await
{
Ok(r) => r.tag_name,
_ => "-".to_string(),
},
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main") {
match get_json_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main").await {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
@@ -529,9 +609,11 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
if running_within_docker {
"-".to_string()
} else {
match get_github_api::<GitRelease>(
match get_json_api::<GitRelease>(
"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
) {
)
.await
{
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(),
}
@@ -539,8 +621,61 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
)
} else {
("-".to_string(), "-".to_string(), "-".to_string())
}
}
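The #[cached(time = 300, sync_writes = true)] attribute above memoizes get_release_info per argument combination for five minutes, keeping the three GitHub API calls well under the unauthenticated 60-requests-per-hour limit. A minimal standalone illustration of the cached crate's timed memoization (the function below is hypothetical):
// Standalone sketch of the `cached` crate's timed memoization.
use cached::proc_macro::cached;

#[cached(time = 60)] // cache each distinct argument's result for 60 seconds
fn expensive_lookup(key: String) -> usize {
    key.len() // stand-in for a slow network call
}

fn main() {
    assert_eq!(expensive_lookup("github".to_string()), 6); // computed once
    assert_eq!(expensive_lookup("github".to_string()), 6); // served from cache
}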
async fn get_ntp_time(has_http_access: bool) -> String {
if has_http_access {
if let Ok(ntp_time) = get_json_api::<TimeApi>("https://www.timeapi.io/api/Time/current/zone?timeZone=UTC").await
{
return format!(
"{year}-{month:02}-{day:02} {hour:02}:{minute:02}:{seconds:02} UTC",
year = ntp_time.year,
month = ntp_time.month,
day = ntp_time.day,
hour = ntp_time.hour,
minute = ntp_time.minute,
seconds = ntp_time.seconds
);
}
}
String::from("Unable to fetch NTP time.")
}
#[get("/diagnostics")]
async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn) -> ApiResult<Html<String>> {
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let web_vault_version: WebVaultVersion =
match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => WebVaultVersion {
version: String::from("Version file missing"),
},
},
};
// Execute some environment checks
let running_within_docker = is_running_in_docker();
let has_http_access = has_http_access().await;
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
|| env::var_os("HTTPS_PROXY").is_some()
|| env::var_os("https_proxy").is_some();
// Check if we are able to resolve DNS entries
let dns_resolved = match ("github.com", 0).to_socket_addrs().map(|mut i| i.next()) {
Ok(Some(a)) => a.ip().to_string(),
_ => "Unable to resolve domain name.".to_string(),
};
let (latest_release, latest_commit, latest_web_build) =
get_release_info(has_http_access, running_within_docker).await;
let ip_header_name = match &ip_header.0 {
Some(h) => h,
_ => "",
@@ -548,13 +683,14 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
"current_release": VERSION,
"latest_release": latest_release,
"latest_commit": latest_commit,
"web_vault_enabled": &CONFIG.web_vault_enabled(),
"web_vault_version": web_vault_version.version,
"web_vault_version": web_vault_version.version.trim_start_matches('v'),
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"docker_base_image": docker_base_image(),
"docker_base_image": if running_within_docker { docker_base_image() } else { "Not applicable" },
"has_http_access": has_http_access,
"ip_header_exists": &ip_header.0.is_some(),
"ip_header_match": ip_header_name == CONFIG.ip_header(),
@@ -562,14 +698,17 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"db_type": *DB_TYPE,
"db_version": get_sql_server_version(&conn),
"admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
"db_version": get_sql_server_version(&mut conn).await,
"admin_url": format!("{}/diagnostics", admin_url()),
"overrides": &CONFIG.get_overrides().join(", "),
"host_arch": std::env::consts::ARCH,
"host_os": std::env::consts::OS,
"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the server date/time check as late as possible to minimize the time difference
"ntp_time": get_ntp_time(has_http_access).await, // Run the ntp check as late as possible to minimize the time difference
});
let text = AdminTemplateData::with_data("admin/diagnostics", diagnostics_json).render()?;
let text = AdminTemplateData::new("admin/diagnostics", diagnostics_json).render()?;
Ok(Html(text))
}
@@ -591,43 +730,50 @@ fn delete_config(_token: AdminToken) -> EmptyResult {
}
#[post("/config/backup_db")]
fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
async fn backup_db(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
if *CAN_BACKUP {
backup_database(&conn)
backup_database(&mut conn).await
} else {
err!("Can't back up current DB (Only SQLite supports this feature)");
}
}
pub struct AdminToken {}
pub struct AdminToken {
ip: ClientIp,
}
impl<'a, 'r> FromRequest<'a, 'r> for AdminToken {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for AdminToken {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip,
_ => err_handler!("Error getting Client IP"),
};
if CONFIG.disable_admin_token() {
Outcome::Success(AdminToken {})
Outcome::Success(Self {
ip,
})
} else {
let mut cookies = request.cookies();
let cookies = request.cookies();
let access_token = match cookies.get(COOKIE_NAME) {
Some(cookie) => cookie.value(),
None => return Outcome::Forward(()), // If there is no cookie, redirect to login
};
let ip = match request.guard::<ClientIp>() {
Outcome::Success(ip) => ip.ip,
_ => err_handler!("Error getting Client IP"),
None => return Outcome::Failure((Status::Unauthorized, "Unauthorized")),
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
cookies.remove(Cookie::named(COOKIE_NAME));
error!("Invalid or expired admin JWT. IP: {}.", ip);
return Outcome::Forward(());
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
error!("Invalid or expired admin JWT. IP: {}.", &ip.ip);
return Outcome::Failure((Status::Unauthorized, "Session expired"));
}
Outcome::Success(AdminToken {})
Outcome::Success(Self {
ip,
})
}
}
}

View File

@@ -1,15 +1,22 @@
use chrono::Utc;
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType},
api::{
core::log_user_event, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, Headers},
crypto,
db::{models::*, DbConn},
mail, CONFIG,
};
use rocket::{
http::Status,
request::{FromRequest, Outcome, Request},
};
pub fn routes() -> Vec<rocket::Route> {
routes![
register,
@@ -36,15 +43,20 @@ pub fn routes() -> Vec<rocket::Route> {
verify_password,
api_key,
rotate_api_key,
get_known_device,
get_known_device_from_path,
put_avatar,
]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct RegisterData {
pub struct RegisterData {
Email: String,
Kdf: Option<i32>,
KdfIterations: Option<i32>,
KdfMemory: Option<i32>,
KdfParallelism: Option<i32>,
Key: String,
Keys: Option<KeysData>,
MasterPasswordHash: String,
@@ -62,38 +74,74 @@ struct KeysData {
PublicKey: String,
}
/// Trims whitespace from password hints, and converts blank password hints to `None`.
fn clean_password_hint(password_hint: &Option<String>) -> Option<String> {
match password_hint {
None => None,
Some(h) => match h.trim() {
"" => None,
ht => Some(ht.to_string()),
},
}
}
fn enforce_password_hint_setting(password_hint: &Option<String>) -> EmptyResult {
if password_hint.is_some() && !CONFIG.password_hints_allowed() {
err!("Password hints have been disabled by the administrator. Remove the hint and try again.");
}
Ok(())
}
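Together these helpers normalize a submitted hint and enforce the password_hints_allowed setting before any invitation state is consumed, so a rejected hint doesn't cost the user their invite. A small sketch of how they compose, mirroring their use in _register below (the wrapper function is illustrative only):
// Illustrative composition of the two helpers above.
fn hint_flow_example() -> EmptyResult {
    let raw = Some("  my hint  ".to_string());
    let password_hint = clean_password_hint(&raw); // trims to Some("my hint")
    enforce_password_hint_setting(&password_hint)?; // errs if hints are disabled
    Ok(())
}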
#[post("/accounts/register", data = "<data>")]
fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
}
pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> JsonResult {
let data: RegisterData = data.into_inner().data;
let email = data.Email.to_lowercase();
let mut user = match User::find_by_mail(&email, &conn) {
Some(user) => {
// Check if the length of the username exceeds 50 characters (same as upstream Bitwarden)
// This also prevents issues with very long usernames causing too large JWTs. See #2419
if let Some(ref name) = data.Name {
if name.len() > 50 {
err!("The field Name must be a string with a maximum length of 50.");
}
}
// Check against the password hint setting here so if it fails, the user
// can retry without losing their invitation below.
let password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&password_hint)?;
let mut verified_by_invite = false;
let mut user = match User::find_by_mail(&email, &mut conn).await {
Some(mut user) => {
if !user.password_hash.is_empty() {
if CONFIG.is_signup_allowed(&email) {
err!("User already exists")
} else {
err!("Registration not allowed or user already exists")
}
err!("Registration not allowed or user already exists")
}
if let Some(token) = data.Token {
let claims = decode_invite(&token)?;
if claims.email == email {
// Verify the email address when signing up via a valid invite token
verified_by_invite = true;
user.verified_at = Some(Utc::now().naive_utc());
user
} else {
err!("Registration email does not match invite email")
}
} else if Invitation::take(&email, &conn) {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() {
} else if Invitation::take(&email, &mut conn).await {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
user_org.status = UserOrgStatus::Accepted as i32;
user_org.save(&conn)?;
user_org.save(&mut conn).await?;
}
user
} else if EmergencyAccess::find_invited_by_grantee_email(&email, &conn).is_some() {
} else if CONFIG.is_signup_allowed(&email)
|| EmergencyAccess::find_invited_by_grantee_email(&email, &mut conn).await.is_some()
{
user
} else if CONFIG.is_signup_allowed(&email) {
err!("Account with this email already exists")
} else {
err!("Registration not allowed or user already exists")
}
@@ -102,7 +150,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
// Order is important here; the invitation check must come first
// because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions.
if Invitation::take(&email, &conn) || CONFIG.is_signup_allowed(&email) {
if Invitation::take(&email, &mut conn).await || CONFIG.is_signup_allowed(&email) {
User::new(email.clone())
} else {
err!("Registration not allowed or user already exists")
@@ -111,85 +159,115 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
};
// Make sure we don't leave a lingering invitation.
Invitation::take(&email, &conn);
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
}
Invitation::take(&email, &mut conn).await;
if let Some(client_kdf_type) = data.Kdf {
user.client_kdf_type = client_kdf_type;
}
user.set_password(&data.MasterPasswordHash, None);
user.akey = data.Key;
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
}
user.client_kdf_memory = data.KdfMemory;
user.client_kdf_parallelism = data.KdfParallelism;
user.set_password(&data.MasterPasswordHash, Some(data.Key), true, None);
user.password_hint = password_hint;
// Add extra fields if present
if let Some(name) = data.Name {
user.name = name;
}
if let Some(hint) = data.MasterPasswordHint {
user.password_hint = Some(hint);
}
if let Some(keys) = data.Keys {
user.private_key = Some(keys.EncryptedPrivateKey);
user.public_key = Some(keys.PublicKey);
}
if CONFIG.mail_enabled() {
if CONFIG.signups_verify() {
if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid) {
if CONFIG.signups_verify() && !verified_by_invite {
if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid).await {
error!("Error sending welcome email: {:#?}", e);
}
user.last_verifying_at = Some(user.created_at);
} else if let Err(e) = mail::send_welcome(&user.email) {
} else if let Err(e) = mail::send_welcome(&user.email).await {
error!("Error sending welcome email: {:#?}", e);
}
}
user.save(&conn)
user.save(&mut conn).await?;
Ok(Json(json!({
"Object": "register",
"CaptchaBypassToken": "",
})))
}
#[get("/accounts/profile")]
fn profile(headers: Headers, conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&conn))
async fn profile(headers: Headers, mut conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&mut conn).await)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct ProfileData {
#[serde(rename = "Culture")]
_Culture: String, // Ignored, always use en-US
MasterPasswordHint: Option<String>,
// Culture: String, // Ignored, always use en-US
// MasterPasswordHint: Option<String>, // Ignored, has been moved to ChangePassData
Name: String,
}
#[put("/accounts/profile", data = "<data>")]
fn put_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
post_profile(data, headers, conn)
async fn put_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
post_profile(data, headers, conn).await
}
#[post("/accounts/profile", data = "<data>")]
fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: ProfileData = data.into_inner().data;
let mut user = headers.user;
// Check if the length of the username exceeds 50 characters (same as upstream Bitwarden)
// This also prevents issues with very long usernames causing too large JWTs. See #2419
if data.Name.len() > 50 {
err!("The field Name must be a string with a maximum length of 50.");
}
let mut user = headers.user;
user.name = data.Name;
user.password_hint = match data.MasterPasswordHint {
Some(ref h) if h.is_empty() => None,
_ => data.MasterPasswordHint,
};
user.save(&conn)?;
Ok(Json(user.to_json(&conn)))
user.save(&mut conn).await?;
Ok(Json(user.to_json(&mut conn).await))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AvatarData {
AvatarColor: Option<String>,
}
#[put("/accounts/avatar", data = "<data>")]
async fn put_avatar(data: JsonUpcase<AvatarData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: AvatarData = data.into_inner().data;
// It looks like only the 6-digit hex color format is supported.
// If you try to use the short (3-digit) value, the color will not be shown.
// Check for and force 7 chars, including the #.
if let Some(color) = &data.AvatarColor {
if color.len() != 7 {
err!("The field AvatarColor must be a HTML/Hex color code with a length of 7 characters")
}
}
let mut user = headers.user;
user.avatar_color = data.AvatarColor;
user.save(&mut conn).await?;
Ok(Json(user.to_json(&mut conn).await))
}
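For illustration, hypothetical request bodies for this endpoint (only the 7-character form passes the check above):

{ "AvatarColor": "#ff8800" }  // accepted: 7 chars including the #
{ "AvatarColor": "#f80" }     // rejected: shorthand hex fails the length check
{ "AvatarColor": null }       // accepted: clears the stored avatar color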
#[get("/users/<uuid>/public-key")]
fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &conn) {
async fn get_public_keys(uuid: String, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -202,7 +280,7 @@ fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult
}
#[post("/accounts/keys", data = "<data>")]
fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: KeysData = data.into_inner().data;
let mut user = headers.user;
@@ -210,7 +288,7 @@ fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> Json
user.private_key = Some(data.EncryptedPrivateKey);
user.public_key = Some(data.PublicKey);
user.save(&conn)?;
user.save(&mut conn).await?;
Ok(Json(json!({
"PrivateKey": user.private_key,
@@ -224,11 +302,17 @@ fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> Json
struct ChangePassData {
MasterPasswordHash: String,
NewMasterPasswordHash: String,
MasterPasswordHint: Option<String>,
Key: String,
}
#[post("/accounts/password", data = "<data>")]
fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_password(
data: JsonUpcase<ChangePassData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: ChangePassData = data.into_inner().data;
let mut user = headers.user;
@@ -236,12 +320,27 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
err!("Invalid password")
}
user.password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&user.password_hint)?;
log_user_event(EventType::UserChangedPassword as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn)
.await;
user.set_password(
&data.NewMasterPasswordHash,
Some(data.Key),
true,
Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
);
user.akey = data.Key;
user.save(&conn)
let save_result = user.save(&mut conn).await;
// Prevent logging out the client from which the user requested this endpoint.
// Logging that device out would cause issues on the client side.
// Passing the device uuid prevents this.
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
#[derive(Deserialize)]
@@ -249,6 +348,8 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
struct ChangeKdfData {
Kdf: i32,
KdfIterations: i32,
KdfMemory: Option<i32>,
KdfParallelism: Option<i32>,
MasterPasswordHash: String,
NewMasterPasswordHash: String,
@@ -256,7 +357,7 @@ struct ChangeKdfData {
}
#[post("/accounts/kdf", data = "<data>")]
fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let data: ChangeKdfData = data.into_inner().data;
let mut user = headers.user;
@@ -264,11 +365,42 @@ fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) ->
err!("Invalid password")
}
if data.Kdf == UserKdfType::Pbkdf2 as i32 && data.KdfIterations < 100_000 {
err!("PBKDF2 KDF iterations must be at least 100000.")
}
if data.Kdf == UserKdfType::Argon2id as i32 {
if data.KdfIterations < 1 {
err!("Argon2 KDF iterations must be at least 1.")
}
if let Some(m) = data.KdfMemory {
if !(15..=1024).contains(&m) {
err!("Argon2 memory must be between 15 MB and 1024 MB.")
}
user.client_kdf_memory = data.KdfMemory;
} else {
err!("Argon2 memory parameter is required.")
}
if let Some(p) = data.KdfParallelism {
if !(1..=16).contains(&p) {
err!("Argon2 parallelism must be between 1 and 16.")
}
user.client_kdf_parallelism = data.KdfParallelism;
} else {
err!("Argon2 parallelism parameter is required.")
}
} else {
user.client_kdf_memory = None;
user.client_kdf_parallelism = None;
}
user.client_kdf_iter = data.KdfIterations;
user.client_kdf_type = data.Kdf;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn)
user.set_password(&data.NewMasterPasswordHash, Some(data.Key), true, None);
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
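As a sketch, a hypothetical JSON body for /accounts/kdf that switches to Argon2id must satisfy the ranges enforced above (the 0 = PBKDF2, 1 = Argon2id mapping is assumed from upstream Bitwarden):

{
  "Kdf": 1,                          // Argon2id
  "KdfIterations": 3,                // must be >= 1 for Argon2
  "KdfMemory": 64,                   // must be within 15..=1024 (MB)
  "KdfParallelism": 4,               // must be within 1..=16
  "MasterPasswordHash": "<current hash>",
  "NewMasterPasswordHash": "<new hash>",
  "Key": "<re-encrypted user key>"
}

With Kdf = 0 (PBKDF2), KdfIterations must be at least 100000 and the memory/parallelism fields are cleared server-side.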
#[derive(Deserialize)]
@@ -291,18 +423,24 @@ struct KeyData {
}
#[post("/accounts/key", data = "<data>")]
fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let data: KeyData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
// Validate the import before continuing.
// Bitwarden does not process the import if even a single item is invalid.
// Since we check the length of the encrypted notes, we need to do that here to pre-validate them.
// TODO: See if we can optimize the whole cipher adding/importing and prevent duplicate code and checks.
Cipher::validate_notes(&data.Ciphers)?;
let user_uuid = &headers.user.uuid;
// Update folder data
for folder_data in data.Folders {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &conn) {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &mut conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
};
@@ -312,14 +450,14 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
}
saved_folder.name = folder_data.Name;
saved_folder.save(&conn)?
saved_folder.save(&mut conn).await?
}
// Update cipher data
use super::ciphers::update_cipher_from_data;
for cipher_data in data.Ciphers {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &conn) {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
@@ -330,7 +468,9 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None)?
// We force the user to log out after the save to try and prevent these issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &mut conn, &nt, UpdateType::None)
.await?
}
// Update user data
@@ -340,11 +480,23 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
user.private_key = Some(data.PrivateKey);
user.reset_security_stamp();
user.save(&conn)
let save_result = user.save(&mut conn).await;
// Prevent logging out the client from which the user requested this endpoint.
// Logging that device out would cause issues on the client side.
// Passing the device uuid prevents this.
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
#[post("/accounts/security-stamp", data = "<data>")]
fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_sstamp(
data: JsonUpcase<PasswordData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let mut user = headers.user;
@@ -352,9 +504,13 @@ fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -
err!("Invalid password")
}
Device::delete_all_by_user(&user.uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.save(&conn)
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[derive(Deserialize)]
@@ -365,7 +521,7 @@ struct EmailTokenData {
}
#[post("/accounts/email-token", data = "<data>")]
fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: EmailTokenData = data.into_inner().data;
let mut user = headers.user;
@@ -373,7 +529,7 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).is_some() {
if User::find_by_mail(&data.NewEmail, &mut conn).await.is_some() {
err!("Email already in use");
}
@@ -384,14 +540,14 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
let token = crypto::generate_email_token(6);
if CONFIG.mail_enabled() {
if let Err(e) = mail::send_change_email(&data.NewEmail, &token) {
if let Err(e) = mail::send_change_email(&data.NewEmail, &token).await {
error!("Error sending change-email email: {:#?}", e);
}
}
user.email_new = Some(data.NewEmail);
user.email_new_token = Some(token);
user.save(&conn)
user.save(&mut conn).await
}
#[derive(Deserialize)]
@@ -406,7 +562,12 @@ struct ChangeEmailData {
}
#[post("/accounts/email", data = "<data>")]
fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email(
data: JsonUpcase<ChangeEmailData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: ChangeEmailData = data.into_inner().data;
let mut user = headers.user;
@@ -414,7 +575,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).is_some() {
if User::find_by_mail(&data.NewEmail, &mut conn).await.is_some() {
err!("Email already in use");
}
@@ -446,21 +607,24 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
user.email_new = None;
user.email_new_token = None;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.set_password(&data.NewMasterPasswordHash, Some(data.Key), true, None);
user.save(&conn)
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[post("/accounts/verify-email")]
fn post_verify_email(headers: Headers) -> EmptyResult {
async fn post_verify_email(headers: Headers) -> EmptyResult {
let user = headers.user;
if !CONFIG.mail_enabled() {
err!("Cannot verify email address");
}
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid) {
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid).await {
error!("Error sending verify_email email: {:#?}", e);
}
@@ -475,10 +639,10 @@ struct VerifyEmailTokenData {
}
#[post("/accounts/verify-email-token", data = "<data>")]
fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn) -> EmptyResult {
async fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, mut conn: DbConn) -> EmptyResult {
let data: VerifyEmailTokenData = data.into_inner().data;
let mut user = match User::find_by_uuid(&data.UserId, &conn) {
let mut user = match User::find_by_uuid(&data.UserId, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -493,7 +657,7 @@ fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn)
user.verified_at = Some(Utc::now().naive_utc());
user.last_verifying_at = None;
user.login_verify_count = 0;
if let Err(e) = user.save(&conn) {
if let Err(e) = user.save(&mut conn).await {
error!("Error saving email verification: {:#?}", e);
}
@@ -507,14 +671,12 @@ struct DeleteRecoverData {
}
#[post("/accounts/delete-recover", data = "<data>")]
fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, mut conn: DbConn) -> EmptyResult {
let data: DeleteRecoverData = data.into_inner().data;
let user = User::find_by_mail(&data.Email, &conn);
if CONFIG.mail_enabled() {
if let Some(user) = user {
if let Err(e) = mail::send_delete_account(&user.email, &user.uuid) {
if let Some(user) = User::find_by_mail(&data.Email, &mut conn).await {
if let Err(e) = mail::send_delete_account(&user.email, &user.uuid).await {
error!("Error sending delete account email: {:#?}", e);
}
}
@@ -536,10 +698,10 @@ struct DeleteRecoverTokenData {
}
#[post("/accounts/delete-recover-token", data = "<data>")]
fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, mut conn: DbConn) -> EmptyResult {
let data: DeleteRecoverTokenData = data.into_inner().data;
let user = match User::find_by_uuid(&data.UserId, &conn) {
let user = match User::find_by_uuid(&data.UserId, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -551,16 +713,16 @@ fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbC
if claims.sub != user.uuid {
err!("Invalid claim");
}
user.delete(&conn)
user.delete(&mut conn).await
}
#[post("/accounts/delete", data = "<data>")]
fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn)
async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn).await
}
#[delete("/accounts", data = "<data>")]
fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -568,13 +730,13 @@ fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn
err!("Invalid password")
}
user.delete(&conn)
user.delete(&mut conn).await
}
#[get("/accounts/revision-date")]
fn revision_date(headers: Headers) -> String {
fn revision_date(headers: Headers) -> JsonResult {
let revision_date = headers.user.updated_at.timestamp_millis();
revision_date.to_string()
Ok(Json(json!(revision_date)))
}
#[derive(Deserialize)]
@@ -584,7 +746,7 @@ struct PasswordHintData {
}
#[post("/accounts/password-hint", data = "<data>")]
fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
async fn password_hint(data: JsonUpcase<PasswordHintData>, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
err!("This server is not configured to provide password hints.");
}
@@ -594,19 +756,18 @@ fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResul
let data: PasswordHintData = data.into_inner().data;
let email = &data.Email;
match User::find_by_mail(email, &conn) {
match User::find_by_mail(email, &mut conn).await {
None => {
// To prevent user enumeration, act as if the user exists.
if CONFIG.mail_enabled() {
// There is still a timing side channel here in that the code
// paths that send mail take noticeably longer than ones that
// don't. Add a randomized sleep to mitigate this somewhat.
use rand::{thread_rng, Rng};
let mut rng = thread_rng();
let base = 1000;
use rand::{rngs::SmallRng, Rng, SeedableRng};
let mut rng = SmallRng::from_entropy();
let delta: i32 = 100;
let sleep_ms = (base + rng.gen_range(-delta..=delta)) as u64;
std::thread::sleep(std::time::Duration::from_millis(sleep_ms));
let sleep_ms = (1_000 + rng.gen_range(-delta..=delta)) as u64;
tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
Ok(())
} else {
err!(NO_HINT);
@@ -615,10 +776,10 @@ fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResul
Some(user) => {
let hint: Option<String> = user.password_hint;
if CONFIG.mail_enabled() {
mail::send_password_hint(email, hint)?;
mail::send_password_hint(email, hint).await?;
Ok(())
} else if let Some(hint) = hint {
err!(format!("Your password hint is: {}", hint));
err!(format!("Your password hint is: {hint}"));
} else {
err!(NO_HINT);
}
@@ -628,23 +789,31 @@ fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResul
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct PreloginData {
pub struct PreloginData {
Email: String,
}
#[post("/accounts/prelogin", data = "<data>")]
fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
pub async fn _prelogin(data: JsonUpcase<PreloginData>, mut conn: DbConn) -> Json<Value> {
let data: PreloginData = data.into_inner().data;
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &conn) {
Some(user) => (user.client_kdf_type, user.client_kdf_iter),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT),
let (kdf_type, kdf_iter, kdf_mem, kdf_para) = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => (user.client_kdf_type, user.client_kdf_iter, user.client_kdf_memory, user.client_kdf_parallelism),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT, None, None),
};
Json(json!({
let result = json!({
"Kdf": kdf_type,
"KdfIterations": kdf_iter
}))
"KdfIterations": kdf_iter,
"KdfMemory": kdf_mem,
"KdfParallelism": kdf_para,
});
Json(result)
}
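A hypothetical prelogin response for an Argon2id user would now carry all four KDF fields (values illustrative):

{
  "Kdf": 1,
  "KdfIterations": 3,
  "KdfMemory": 64,
  "KdfParallelism": 4
}

For an unknown email the server answers with the default KDF type and iterations, and KdfMemory/KdfParallelism are null.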
// https://github.com/bitwarden/server/blob/master/src/Api/Models/Request/Accounts/SecretVerificationRequestModel.cs
@@ -666,7 +835,14 @@ fn verify_password(data: JsonUpcase<SecretVerificationRequest>, headers: Headers
Ok(())
}
fn _api_key(data: JsonUpcase<SecretVerificationRequest>, rotate: bool, headers: Headers, conn: DbConn) -> JsonResult {
async fn _api_key(
data: JsonUpcase<SecretVerificationRequest>,
rotate: bool,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
use crate::util::format_date;
let data: SecretVerificationRequest = data.into_inner().data;
let mut user = headers.user;
@@ -676,21 +852,81 @@ fn _api_key(data: JsonUpcase<SecretVerificationRequest>, rotate: bool, headers:
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
user.save(&conn).expect("Error saving API key");
user.save(&mut conn).await.expect("Error saving API key");
}
Ok(Json(json!({
"ApiKey": user.api_key,
"RevisionDate": format_date(&user.updated_at),
"Object": "apiKey",
})))
}
#[post("/accounts/api-key", data = "<data>")]
fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn)
async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn).await
}
#[post("/accounts/rotate-api-key", data = "<data>")]
fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn)
async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
// This variant is deprecated: https://github.com/bitwarden/server/pull/2682
#[get("/devices/knowndevice/<email>/<uuid>")]
async fn get_known_device_from_path(email: String, uuid: String, mut conn: DbConn) -> JsonResult {
// This endpoint doesn't have an auth header
let mut result = false;
if let Some(user) = User::find_by_mail(&email, &mut conn).await {
result = Device::find_by_uuid_and_user(&uuid, &user.uuid, &mut conn).await.is_some();
}
Ok(Json(json!(result)))
}
#[get("/devices/knowndevice")]
async fn get_known_device(device: KnownDevice, conn: DbConn) -> JsonResult {
get_known_device_from_path(device.email, device.uuid, conn).await
}
struct KnownDevice {
email: String,
uuid: String,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for KnownDevice {
type Error = &'static str;
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let email = if let Some(email_b64) = req.headers().get_one("X-Request-Email") {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
return Outcome::Failure((
Status::BadRequest,
"X-Request-Email value failed to decode as base64url",
));
}
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
Err(_) => {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
}
}
} else {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value is required"));
};
let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
uuid.to_string()
} else {
return Outcome::Failure((Status::BadRequest, "X-Device-Identifier value is required"));
};
Outcome::Success(KnownDevice {
email,
uuid,
})
}
}
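A minimal sketch of the matching client side, assuming the same data_encoding crate used above (endpoint path relative to the API mount point):

use data_encoding::BASE64URL_NOPAD;

// Encode the account email as unpadded base64url for the header value.
let email = "user@example.com"; // hypothetical
let x_request_email = BASE64URL_NOPAD.encode(email.as_bytes());
// GET /devices/knowndevice with headers:
//   X-Request-Email: <x_request_email>
//   X-Device-Identifier: <device uuid>
// The server replies with a bare JSON boolean indicating a known device.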

File diff suppressed because it is too large

@@ -1,11 +1,12 @@
use chrono::{Duration, Utc};
use rocket::Route;
use rocket_contrib::json::Json;
use rocket::{serde::json::Json, Route};
use serde_json::Value;
use std::borrow::Borrow;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, NumberOrString},
api::{
core::{CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase, NumberOrString,
},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
@@ -36,13 +37,14 @@ pub fn routes() -> Vec<Route> {
// region get
#[get("/emergency-access/trusted")]
fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
async fn get_contacts(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn);
let emergency_access_list_json: Vec<Value> =
emergency_access_list.iter().map(|e| e.to_json_grantee_details(&conn)).collect();
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
}
Ok(Json(json!({
"Data": emergency_access_list_json,
@@ -52,13 +54,14 @@ fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
}
#[get("/emergency-access/granted")]
fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn);
let emergency_access_list_json: Vec<Value> =
emergency_access_list.iter().map(|e| e.to_json_grantor_details(&conn)).collect();
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await);
}
Ok(Json(json!({
"Data": emergency_access_list_json,
@@ -68,11 +71,11 @@ fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
}
#[get("/emergency-access/<emer_id>")]
fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
async fn get_emergency_access(emer_id: String, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn))),
match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
None => err!("Emergency access not valid."),
}
}
@@ -81,7 +84,7 @@ fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
// region put/post
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessUpdateData {
Type: NumberOrString,
@@ -90,17 +93,25 @@ struct EmergencyAccessUpdateData {
}
#[put("/emergency-access/<emer_id>", data = "<data>")]
fn put_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
post_emergency_access(emer_id, data, conn)
async fn put_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
conn: DbConn,
) -> JsonResult {
post_emergency_access(emer_id, data, conn).await
}
#[post("/emergency-access/<emer_id>", data = "<data>")]
fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
async fn post_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
@@ -112,9 +123,11 @@ fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdate
emergency_access.atype = new_type;
emergency_access.wait_time_days = data.WaitTimeDays;
emergency_access.key_encrypted = data.KeyEncrypted;
if data.KeyEncrypted.is_some() {
emergency_access.key_encrypted = data.KeyEncrypted;
}
emergency_access.save(&conn)?;
emergency_access.save(&mut conn).await?;
Ok(Json(emergency_access.to_json()))
}
@@ -123,12 +136,12 @@ fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdate
// region delete
#[delete("/emergency-access/<emer_id>")]
fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
async fn delete_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
@@ -137,20 +150,20 @@ fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> E
}
None => err!("Emergency access not valid."),
};
emergency_access.delete(&conn)?;
emergency_access.delete(&mut conn).await?;
Ok(())
}
#[post("/emergency-access/<emer_id>/delete")]
fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn)
async fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn).await
}
// endregion
// region invite
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessInviteData {
Email: String,
@@ -159,7 +172,7 @@ struct EmergencyAccessInviteData {
}
#[post("/emergency-access/invite", data = "<data>")]
fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
@@ -180,10 +193,10 @@ fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, co
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &conn) {
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", email))
err!(format!("Grantee user does not exist: {}", &email))
}
if !CONFIG.is_email_domain_allowed(&email) {
@@ -191,12 +204,12 @@ fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, co
}
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(email.clone());
invitation.save(&conn)?;
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
let mut user = User::new(email.clone());
user.save(&conn)?;
user.save(&mut conn).await?;
user
}
Some(user) => user,
@@ -206,39 +219,34 @@ fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, co
&grantor_user.uuid,
&grantee_user.uuid,
&grantee_user.email,
&conn,
&mut conn,
)
.await
.is_some()
{
err!(format!("Grantee user already invited: {}", email))
err!(format!("Grantee user already invited: {}", &grantee_user.email))
}
let mut new_emergency_access = EmergencyAccess::new(
grantor_user.uuid.clone(),
Some(grantee_user.email.clone()),
emergency_access_status,
new_type,
wait_time_days,
);
new_emergency_access.save(&conn)?;
let mut new_emergency_access =
EmergencyAccess::new(grantor_user.uuid, grantee_user.email, emergency_access_status, new_type, wait_time_days);
new_emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&grantee_user.email,
&new_emergency_access.email.expect("Grantee email does not exists"),
&grantee_user.uuid,
Some(new_emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
&new_emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
// Automatically mark the user as accepted when email invites are disabled
match User::find_by_mail(&email, &conn) {
Some(user) => {
match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()) {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
match User::find_by_mail(&email, &mut conn).await {
Some(user) => match accept_invite_process(user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
},
None => err!("Grantee user not found."),
}
}
@@ -247,10 +255,10 @@ fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, co
}
#[post("/emergency-access/<emer_id>/reinvite")]
fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
async fn resend_invite(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -268,7 +276,7 @@ fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult
None => err!("Email not valid."),
};
let grantee_user = match User::find_by_mail(&email, &conn) {
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
@@ -279,19 +287,20 @@ fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult
mail::send_emergency_access_invite(
&email,
&grantor_user.uuid,
Some(emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
&emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
if Invitation::find_by_mail(&email, &conn).is_none() {
let invitation = Invitation::new(email);
invitation.save(&conn)?;
if Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
// Automatically mark the user as accepted when email invites are disabled
match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow()) {
Ok(v) => (v),
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
}
@@ -306,43 +315,54 @@ struct AcceptData {
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
async fn accept_invite(
emer_id: String,
data: JsonUpcase<AcceptData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
let claims = decode_emergency_access_invite(token)?;
let grantee_user = match User::find_by_mail(&claims.email, &conn) {
// This can happen if the user who received the invite used a different email to sign up.
// Since we do not know if this is intended, we error out here and do nothing with the invite.
if claims.email != headers.user.email {
err!("Claim email does not match current users email")
}
let grantee_user = match User::find_by_mail(&claims.email, &mut conn).await {
Some(user) => {
Invitation::take(&claims.email, &conn);
Invitation::take(&claims.email, &mut conn).await;
user
}
None => err!("Invited user not found"),
};
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
&& (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
&& (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
if emer_id == claims.emer_id
&& grantor_user.name == claims.grantor_name
&& grantor_user.email == claims.grantor_email
{
match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn) {
Ok(v) => (v),
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email)?;
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email).await?;
}
Ok(())
@@ -351,14 +371,13 @@ fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) ->
}
}
fn accept_invite_process(grantee_uuid: String, emer_id: String, email: Option<String>, conn: &DbConn) -> EmptyResult {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let emer_email = emergency_access.email;
if emer_email.is_none() || emer_email != email {
async fn accept_invite_process(
grantee_uuid: String,
emergency_access: &mut EmergencyAccess,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
if emergency_access.email.is_none() || emergency_access.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
@@ -369,7 +388,7 @@ fn accept_invite_process(grantee_uuid: String, emer_id: String, email: Option<St
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(grantee_uuid);
emergency_access.email = None;
emergency_access.save(conn)
emergency_access.save(conn).await
}
#[derive(Deserialize)]
@@ -379,11 +398,11 @@ struct ConfirmData {
}
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
fn confirm_emergency_access(
async fn confirm_emergency_access(
emer_id: String,
data: JsonUpcase<ConfirmData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
@@ -391,7 +410,7 @@ fn confirm_emergency_access(
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -402,13 +421,13 @@ fn confirm_emergency_access(
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn) {
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
@@ -417,10 +436,10 @@ fn confirm_emergency_access(
emergency_access.key_encrypted = Some(key);
emergency_access.email = None;
emergency_access.save(&conn)?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name)?;
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name).await?;
}
Ok(Json(emergency_access.to_json()))
} else {
@@ -433,22 +452,22 @@ fn confirm_emergency_access(
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn initiate_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|| emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
|| emergency_access.grantee_uuid != Some(initiating_user.uuid)
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
@@ -458,51 +477,51 @@ fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) ->
emergency_access.updated_at = now;
emergency_access.recovery_initiated_at = Some(now);
emergency_access.last_notification_at = Some(now);
emergency_access.save(&conn)?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_initiated(
&grantor_user.email,
&initiating_user.name,
emergency_access.get_type_as_str(),
&emergency_access.wait_time_days.clone().to_string(),
)?;
&emergency_access.wait_time_days,
)
.await?;
}
Ok(Json(emergency_access.to_json()))
}
#[post("/emergency-access/<emer_id>/approve")]
fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn approve_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let approving_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|| emergency_access.grantor_uuid != approving_user.uuid
|| emergency_access.grantor_uuid != headers.user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn) {
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
emergency_access.save(&conn)?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)?;
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name).await?;
}
Ok(Json(emergency_access.to_json()))
} else {
@@ -511,38 +530,37 @@ fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) ->
}
#[post("/emergency-access/<emer_id>/reject")]
fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn reject_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let rejecting_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|| emergency_access.grantor_uuid != rejecting_user.uuid
|| emergency_access.grantor_uuid != headers.user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn) {
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.save(&conn)?;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name)?;
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name).await?;
}
Ok(Json(emergency_access.to_json()))
} else {
@@ -555,24 +573,34 @@ fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> J
// region action
#[post("/emergency-access/<emer_id>/view")]
fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn view_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let host = headers.host;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
if !is_valid_request(&emergency_access, headers.user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
}
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn);
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &mut conn).await;
let cipher_sync_data = CipherSyncData::new(&emergency_access.grantor_uuid, CipherSyncType::User, &mut conn).await;
let ciphers_json: Vec<Value> =
ciphers.iter().map(|c| c.to_json(&host, &emergency_access.grantor_uuid, &conn)).collect();
let mut ciphers_json = Vec::with_capacity(ciphers.len());
for c in ciphers {
ciphers_json.push(
c.to_json(
&headers.host,
&emergency_access.grantor_uuid,
Some(&cipher_sync_data),
CipherSyncType::User,
&mut conn,
)
.await,
);
}
Ok(Json(json!({
"Ciphers": ciphers_json,
@@ -582,11 +610,11 @@ fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> Jso
}
#[post("/emergency-access/<emer_id>/takeover")]
fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn takeover_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -595,20 +623,24 @@ fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) ->
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
Ok(Json(json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
})))
let result = json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KdfMemory": grantor_user.client_kdf_memory,
"KdfParallelism": grantor_user.client_kdf_parallelism,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
});
Ok(Json(result))
}
#[derive(Deserialize, Debug)]
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
NewMasterPasswordHash: String,
@@ -616,20 +648,20 @@ struct EmergencyAccessPasswordData {
}
#[post("/emergency-access/<emer_id>/password", data = "<data>")]
fn password_emergency_access(
async fn password_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessPasswordData>,
headers: Headers,
conn: DbConn,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
let key = data.Key;
//let key = &data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -638,26 +670,22 @@ fn password_emergency_access(
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
// change grantor_user password
grantor_user.set_password(new_master_password_hash, None);
grantor_user.akey = key;
grantor_user.save(&conn)?;
grantor_user.set_password(new_master_password_hash, Some(data.Key), true, None);
grantor_user.save(&mut conn).await?;
// Disable TwoFactor providers since they will otherwise block logins
TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn)?;
// Removing owner, check that there are at least another owner
let user_org_grantor = UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn);
TwoFactor::delete_all_by_user(&grantor_user.uuid, &mut conn).await?;
// Remove grantor from all organisations unless Owner
for user_org in user_org_grantor {
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &mut conn).await {
if user_org.atype != UserOrgType::Owner as i32 {
user_org.delete(&conn)?;
user_org.delete(&mut conn).await?;
}
}
Ok(())
@@ -666,9 +694,9 @@ fn password_emergency_access(
// endregion
#[get("/emergency-access/<emer_id>/policies")]
fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
async fn policies_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -677,13 +705,13 @@ fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) ->
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &mut conn);
let policies_json: Vec<Value> = policies.await.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
@@ -709,44 +737,52 @@ fn check_emergency_access_allowed() -> EmptyResult {
Ok(())
}
pub fn emergency_request_timeout_job(pool: DbPool) {
pub async fn emergency_request_timeout_job(pool: DbPool) {
debug!("Start emergency_request_timeout_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get() {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);
if let Ok(mut conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries_initiated(&mut conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request timeout to approve");
}
let now = Utc::now().naive_utc();
for mut emer in emergency_access_list {
if emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days(emer.wait_time_days as i64)
{
emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
emer.save(&conn).expect("Cannot save emergency access on job");
// find_all_recoveries_initiated already checks that recovery_initiated_at is not null (None)
let recovery_allowed_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days));
if recovery_allowed_at.le(&now) {
// Only update the access status
// Updating the whole record could cause issues when the emergency_notification_reminder_job is also active
emer.update_access_status_and_save(EmergencyAccessStatus::RecoveryApproved as i32, &now, &mut conn)
.await
.expect("Unable to update emergency access status");
if CONFIG.mail_enabled() {
// get grantor user to send Accepted email
let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &mut conn).await.expect("Grantor user not found");
// get grantee user to send Accepted email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.expect("Grantee user not found.");
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid"), &mut conn)
.await
.expect("Grantee user not found");
mail::send_emergency_access_recovery_timed_out(
&grantor_user.email,
&grantee_user.name.clone(),
&grantee_user.name,
emer.get_type_as_str(),
)
.await
.expect("Error on sending email");
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name.clone())
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)
.await
.expect("Error on sending email");
}
}
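Worked example of the timeout check above (hypothetical dates): with recovery_initiated_at = 2023-04-01 00:00 and wait_time_days = 7, recovery_allowed_at is 2023-04-08 00:00, so the first job run at or after that instant flips the status to RecoveryApproved and sends the mails.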
@@ -756,44 +792,56 @@ pub fn emergency_request_timeout_job(pool: DbPool) {
}
}
pub fn emergency_notification_reminder_job(pool: DbPool) {
pub async fn emergency_notification_reminder_job(pool: DbPool) {
debug!("Start emergency_notification_reminder_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get() {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);
if let Ok(mut conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries_initiated(&mut conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request reminder notification to send");
}
let now = Utc::now().naive_utc();
for mut emer in emergency_access_list {
if (emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days((emer.wait_time_days as i64) - 1))
&& (emer.last_notification_at.is_none()
|| (emer.last_notification_at.is_some()
&& Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
{
emer.save(&conn).expect("Cannot save emergency access on job");
// find_all_recoveries_initiated already ensures that recovery_initiated_at is not null (None)
// Calculate the day before the recovery will become active
let final_recovery_reminder_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days - 1));
// Require a full day since the previous notification; if none was sent yet, the reminder is due now
let next_recovery_reminder_at = if let Some(last_notification_at) = emer.last_notification_at {
last_notification_at + Duration::days(1)
} else {
now
};
if final_recovery_reminder_at.le(&now) && next_recovery_reminder_at.le(&now) {
// Only update the last notification date
// Updating the whole record could cause issues when the emergency_request_timeout_job is also active
emer.update_last_notification_date_and_save(&now, &mut conn)
.await
.expect("Unable to update emergency access notification date");
if CONFIG.mail_enabled() {
// get grantor user to send the recovery reminder email
let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &mut conn).await.expect("Grantor user not found");
// get grantee user to include their name in the reminder email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.expect("Grantee user not found.");
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid"), &mut conn)
.await
.expect("Grantee user not found");
mail::send_emergency_access_recovery_reminder(
&grantor_user.email,
&grantee_user.name.clone(),
&grantee_user.name,
emer.get_type_as_str(),
&emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
"1", // This notification is only triggered one day before the activation
)
.await
.expect("Error on sending email");
}
}
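The reminder gating above condenses to two date comparisons; a minimal standalone sketch (function name hypothetical, chrono assumed, fields mirroring the diff):

use chrono::{Duration, NaiveDateTime, Utc};

fn reminder_is_due(
    recovery_initiated_at: NaiveDateTime,
    wait_time_days: i32,
    last_notification_at: Option<NaiveDateTime>,
) -> bool {
    let now = Utc::now().naive_utc();
    // The reminder targets the day before recovery becomes active.
    let final_reminder_at = recovery_initiated_at + Duration::days(i64::from(wait_time_days - 1));
    // Rate limit: if a notification was already sent, wait a full day before the next one.
    let next_reminder_at = last_notification_at.map_or(now, |last| last + Duration::days(1));
    final_reminder_at <= now && next_reminder_at <= now
}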

src/api/core/events.rs (new file, 336 lines)

@@ -0,0 +1,336 @@
use std::net::IpAddr;
use chrono::NaiveDateTime;
use rocket::{form::FromForm, serde::json::Json, Route};
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcaseVec},
auth::{AdminHeaders, Headers},
db::{
models::{Cipher, Event, UserOrganization},
DbConn, DbPool,
},
util::parse_date,
CONFIG,
};
/// ###############################################################################################################
/// /api routes
pub fn routes() -> Vec<Route> {
routes![get_org_events, get_cipher_events, get_user_events,]
}
#[derive(FromForm)]
#[allow(non_snake_case)]
struct EventRange {
start: String,
end: String,
#[field(name = "continuationToken")]
continuation_token: Option<String>,
}
// Upstream: https://github.com/bitwarden/server/blob/9ecf69d9cabce732cf2c57976dd9afa5728578fb/src/Api/Controllers/EventsController.cs#LL84C35-L84C41
#[get("/organizations/<org_id>/events?<data..>")]
async fn get_org_events(org_id: String, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
Event::find_by_organization_uuid(&org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
#[get("/ciphers/<cipher_id>/events?<data..>")]
async fn get_cipher_events(cipher_id: String, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let mut events_json = Vec::with_capacity(0);
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, &cipher_id, &mut conn).await {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
events_json = Event::find_by_cipher_uuid(&cipher_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
}
events_json
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
#[get("/organizations/<org_id>/users/<user_org_id>/events?<data..>")]
async fn get_user_events(
org_id: String,
user_org_id: String,
data: EventRange,
_headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
} else {
parse_date(&data.end)
};
Event::find_by_org_and_user_org(&org_id, &user_org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
.collect()
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
})))
}
fn get_continuation_token(events_json: &Vec<Value>) -> Option<&str> {
// When the length of the vec equals the max page_size, there is probably more data.
// When it is less, all events have been loaded.
if events_json.len() as i64 == Event::PAGE_SIZE {
if let Some(last_event) = events_json.last() {
last_event["date"].as_str()
} else {
None
}
} else {
None
}
}
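Client side, this token contract turns into a simple fetch loop; a hedged sketch assuming a hypothetical fetch_events helper wrapping the endpoints above (the token is just the date of the last event, fed back as continuationToken and used as the new end bound):

let mut token: Option<String> = None;
loop {
    // fetch_events(start, end, token) is hypothetical; it performs the GET with
    // ?start=..&end=..&continuationToken=.. and deserializes the list response.
    let page = fetch_events(&start, &end, token.as_deref());
    process(&page.data);
    match page.continuation_token {
        // Only returned when a full page (Event::PAGE_SIZE rows) came back.
        Some(t) => token = Some(t),
        None => break, // fewer rows than a full page: everything is loaded
    }
}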
/// ###############################################################################################################
/// /events routes
pub fn main_routes() -> Vec<Route> {
routes![post_events_collect,]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EventCollection {
// Mandatory
Type: i32,
Date: String,
// Optional
CipherId: Option<String>,
OrganizationId: Option<String>,
}
// Upstream:
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Events/Controllers/CollectController.cs
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
#[post("/collect", format = "application/json", data = "<data>")]
async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.org_events_enabled() {
return Ok(());
}
for event in data.iter().map(|d| &d.data) {
let event_date = parse_date(&event.Date);
match event.Type {
1000..=1099 => {
_log_user_event(
event.Type,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&headers.ip.ip,
&mut conn,
)
.await;
}
1600..=1699 => {
if let Some(org_uuid) = &event.OrganizationId {
_log_event(
event.Type,
org_uuid,
String::from(org_uuid),
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&headers.ip.ip,
&mut conn,
)
.await;
}
}
_ => {
if let Some(cipher_uuid) = &event.CipherId {
if let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
if let Some(org_uuid) = cipher.organization_uuid {
_log_event(
event.Type,
cipher_uuid,
org_uuid,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&headers.ip.ip,
&mut conn,
)
.await;
}
}
}
}
}
}
Ok(())
}
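For reference, a hypothetical request body for POST /events/collect matching the EventCollection struct above; the Type ranges route it to a user, organization, or cipher event respectively (UUIDs are placeholders, and the date format is the one parse_date expects):

let body = serde_json::json!([
    { "Type": 1000, "Date": "2023-04-02T15:00:00.000000Z" },
    { "Type": 1600, "Date": "2023-04-02T15:01:00.000000Z", "OrganizationId": "<org-uuid>" },
    { "Type": 1100, "Date": "2023-04-02T15:02:00.000000Z", "CipherId": "<cipher-uuid>" }
]);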
pub async fn log_user_event(event_type: i32, user_uuid: &str, device_type: i32, ip: &IpAddr, conn: &mut DbConn) {
if !CONFIG.org_events_enabled() {
return;
}
_log_user_event(event_type, user_uuid, device_type, None, ip, conn).await;
}
async fn _log_user_event(
event_type: i32,
user_uuid: &str,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
conn: &mut DbConn,
) {
let orgs = UserOrganization::get_org_uuid_by_user(user_uuid, conn).await;
let mut events: Vec<Event> = Vec::with_capacity(orgs.len() + 1); // We need an event per org and one without an org
// Upstream also saves the event without any org_uuid.
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.act_user_uuid = Some(String::from(user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
// For each org a user is a member of store these events per org
for org_uuid in orgs {
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.org_uuid = Some(org_uuid);
event.act_user_uuid = Some(String::from(user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
}
Event::save_user_event(events, conn).await.unwrap_or(());
}
pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
act_user_uuid: String,
device_type: i32,
ip: &IpAddr,
conn: &mut DbConn,
) {
if !CONFIG.org_events_enabled() {
return;
}
_log_event(event_type, source_uuid, org_uuid, &act_user_uuid, device_type, None, ip, conn).await;
}
#[allow(clippy::too_many_arguments)]
async fn _log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
act_user_uuid: &str,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
conn: &mut DbConn,
) {
// Create a new empty event
let mut event = Event::new(event_type, event_date);
match event_type {
// 1000..=1099 are user events; they need to be logged via log_user_event()
// Cipher Events
1100..=1199 => {
event.cipher_uuid = Some(String::from(source_uuid));
}
// Collection Events
1300..=1399 => {
event.collection_uuid = Some(String::from(source_uuid));
}
// Group Events
1400..=1499 => {
event.group_uuid = Some(String::from(source_uuid));
}
// Org User Events
1500..=1599 => {
event.org_user_uuid = Some(String::from(source_uuid));
}
// 1600..=1699 Are organizational events, and they do not need the source_uuid
// Policy Events
1700..=1799 => {
event.policy_uuid = Some(String::from(source_uuid));
}
// Ignore others
_ => {}
}
event.org_uuid = Some(org_uuid);
event.act_user_uuid = Some(String::from(act_user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
event.save(conn).await.unwrap_or(());
}
pub async fn event_cleanup_job(pool: DbPool) {
debug!("Start events cleanup job");
if CONFIG.events_days_retain().is_none() {
debug!("events_days_retain is not configured, abort");
return;
}
if let Ok(mut conn) = pool.get().await {
Event::clean_events(&mut conn).await.ok();
} else {
error!("Failed to get DB connection while trying to cleanup the events table")
}
}
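Both the logging and the cleanup are opt-in; an illustrative environment snippet (setting names as exposed by the config referenced above, values arbitrary):

# .env (illustrative)
ORG_EVENTS_ENABLED=true   # enables the /events endpoints and event logging
EVENTS_DAYS_RETAIN=90     # event_cleanup_job deletes events older than 90 days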

src/api/core/folders.rs

@@ -1,4 +1,4 @@
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
@@ -12,9 +12,8 @@ pub fn routes() -> Vec<rocket::Route> {
}
#[get("/folders")]
fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &conn);
async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &mut conn).await;
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
Json(json!({
@@ -25,8 +24,8 @@ fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
}
#[get("/folders/<uuid>")]
fn get_folder(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &conn) {
async fn get_folder(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -45,27 +44,39 @@ pub struct FolderData {
}
#[post("/folders", data = "<data>")]
fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = Folder::new(headers.user.uuid, data.Name);
folder.save(&conn)?;
nt.send_folder_update(UpdateType::FolderCreate, &folder);
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::SyncFolderCreate, &folder, &headers.device.uuid).await;
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>", data = "<data>")]
fn post_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
put_folder(uuid, data, headers, conn, nt)
async fn post_folder(
uuid: String,
data: JsonUpcase<FolderData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
put_folder(uuid, data, headers, conn, nt).await
}
#[put("/folders/<uuid>", data = "<data>")]
fn put_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
async fn put_folder(
uuid: String,
data: JsonUpcase<FolderData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = match Folder::find_by_uuid(&uuid, &conn) {
let mut folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -76,20 +87,20 @@ fn put_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn
folder.name = data.Name;
folder.save(&conn)?;
nt.send_folder_update(UpdateType::FolderUpdate, &folder);
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::SyncFolderUpdate, &folder, &headers.device.uuid).await;
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>/delete")]
fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_folder(uuid, headers, conn, nt)
async fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
delete_folder(uuid, headers, conn, nt).await
}
#[delete("/folders/<uuid>")]
fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &conn) {
async fn delete_folder(uuid: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -99,8 +110,8 @@ fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> Em
}
// Delete the actual folder entry
folder.delete(&conn)?;
folder.delete(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderDelete, &folder);
nt.send_folder_update(UpdateType::SyncFolderDelete, &folder, &headers.device.uuid).await;
Ok(())
}

src/api/core/mod.rs

@@ -1,29 +1,44 @@
mod accounts;
pub mod accounts;
mod ciphers;
mod emergency_access;
mod events;
mod folders;
mod organizations;
mod sends;
pub mod two_factor;
pub use ciphers::purge_trashed_ciphers;
pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncType};
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
pub fn routes() -> Vec<Route> {
let mut mod_routes =
routes![clear_device_token, put_device_token, get_eq_domains, post_eq_domains, put_eq_domains, hibp_breach,];
let mut device_token_routes = routes![clear_device_token, put_device_token];
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
let mut hibp_routes = routes![hibp_breach];
let mut meta_routes = routes![alive, now, version, config];
let mut routes = Vec::new();
routes.append(&mut accounts::routes());
routes.append(&mut ciphers::routes());
routes.append(&mut emergency_access::routes());
routes.append(&mut events::routes());
routes.append(&mut folders::routes());
routes.append(&mut organizations::routes());
routes.append(&mut two_factor::routes());
routes.append(&mut sends::routes());
routes.append(&mut mod_routes);
routes.append(&mut device_token_routes);
routes.append(&mut eq_domains_routes);
routes.append(&mut hibp_routes);
routes.append(&mut meta_routes);
routes
}
pub fn events_routes() -> Vec<Route> {
let mut routes = Vec::new();
routes.append(&mut events::main_routes());
routes
}
@@ -31,12 +46,11 @@ pub fn routes() -> Vec<Route> {
//
// Move this somewhere else
//
use rocket::Route;
use rocket_contrib::json::Json;
use rocket::{serde::json::Json, Catcher, Route};
use serde_json::Value;
use crate::{
api::{JsonResult, JsonUpcase},
api::{JsonResult, JsonUpcase, Notify, UpdateType},
auth::Headers,
db::DbConn,
error::Error,
@@ -121,7 +135,12 @@ struct EquivDomainData {
}
#[post("/settings/domains", data = "<data>")]
fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_eq_domains(
data: JsonUpcase<EquivDomainData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: EquivDomainData = data.into_inner().data;
let excluded_globals = data.ExcludedGlobalEquivalentDomains.unwrap_or_default();
@@ -133,34 +152,40 @@ fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: Db
user.excluded_globals = to_string(&excluded_globals).unwrap_or_else(|_| "[]".to_string());
user.equivalent_domains = to_string(&equivalent_domains).unwrap_or_else(|_| "[]".to_string());
user.save(&conn)?;
user.save(&mut conn).await?;
nt.send_user_update(UpdateType::SyncSettings, &user).await;
Ok(Json(json!({})))
}
#[put("/settings/domains", data = "<data>")]
fn put_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
post_eq_domains(data, headers, conn)
async fn put_eq_domains(
data: JsonUpcase<EquivDomainData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
post_eq_domains(data, headers, conn, nt).await
}
#[get("/hibp/breach?<username>")]
fn hibp_breach(username: String) -> JsonResult {
async fn hibp_breach(username: String) -> JsonResult {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false&includeUnverified=false",
username
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
let hibp_client = get_reqwest_client();
let res = hibp_client.get(&url).header("hibp-api-key", api_key).send()?;
let res = hibp_client.get(&url).header("hibp-api-key", api_key).send().await?;
// If we get a 404, return a 404; it means there are no breached accounts
if res.status() == 404 {
return Err(Error::empty().with_code(404));
}
let value: Value = res.error_for_status()?.json()?;
let value: Value = res.error_for_status()?.json().await?;
Ok(Json(value))
} else {
Ok(Json(json!([{
@@ -169,7 +194,7 @@ fn hibp_breach(username: String) -> JsonResult {
"Domain": "haveibeenpwned.com",
"BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{username}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{username}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>"),
"LogoPath": "vw_static/hibp.png",
"PwnCount": 0,
"DataClasses": [
@@ -178,3 +203,55 @@ fn hibp_breach(username: String) -> JsonResult {
}])))
}
}
// We use DbConn here to let the alive healthcheck also verify the database connection.
#[get("/alive")]
fn alive(_conn: DbConn) -> Json<String> {
now()
}
#[get("/now")]
pub fn now() -> Json<String> {
Json(crate::util::format_date(&chrono::Utc::now().naive_utc()))
}
#[get("/version")]
fn version() -> Json<&'static str> {
Json(crate::VERSION.unwrap_or_default())
}
#[get("/config")]
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
Json(json!({
"version": crate::VERSION,
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden"
},
"environment": {
"vault": domain,
"api": format!("{domain}/api"),
"identity": format!("{domain}/identity"),
"notifications": format!("{domain}/notifications"),
"sso": "",
},
"object": "config",
}))
}
pub fn catchers() -> Vec<Catcher> {
catchers![api_not_found]
}
#[catch(404)]
fn api_not_found() -> Json<Value> {
Json(json!({
"error": {
"code": 404,
"reason": "Not Found",
"description": "The requested resource could not be found."
}
}))
}

File diff suppressed because it is too large

src/api/core/sends.rs

@@ -1,14 +1,15 @@
use std::{io::Read, path::Path};
use std::path::Path;
use chrono::{DateTime, Duration, Utc};
use multipart::server::{save::SavedData, Multipart, SaveResult};
use rocket::{http::ContentType, response::NamedFile, Data};
use rocket_contrib::json::Json;
use rocket::form::Form;
use rocket::fs::NamedFile;
use rocket::fs::TempFile;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, UpdateType},
auth::{Headers, Host},
auth::{ClientIp, Headers, Host},
db::{models::*, DbConn, DbPool},
util::SafeString,
CONFIG,
@@ -16,6 +17,9 @@ use crate::{
const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";
// The max file size allowed by Bitwarden clients, plus an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
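The constant is exact: 500 MiB plus 5% is 525 MiB, i.e. 525 × 1024 × 1024 = 550,502,400 bytes, which a compile-time check can pin down:

const _: () = assert!(SIZE_525_MB == 525 * 1024 * 1024);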
pub fn routes() -> Vec<rocket::Route> {
routes![
get_sends,
@@ -27,14 +31,16 @@ pub fn routes() -> Vec<rocket::Route> {
put_send,
delete_send,
put_remove_password,
download_send
download_send,
post_send_file_v2,
post_send_file_v2_data
]
}
pub fn purge_sends(pool: DbPool) {
pub async fn purge_sends(pool: DbPool) {
debug!("Purging sends");
if let Ok(conn) = pool.get() {
Send::purge(&conn);
if let Ok(mut conn) = pool.get().await {
Send::purge(&mut conn).await;
} else {
error!("Failed to get DB connection while purging sends")
}
@@ -57,6 +63,7 @@ struct SendData {
Notes: Option<String>,
Text: Option<Value>,
File: Option<Value>,
FileLength: Option<NumberOrString>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -67,10 +74,11 @@ struct SendData {
///
/// There is also a Vaultwarden-specific `sends_allowed` config setting that
/// controls this policy globally.
fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let policy_type = OrgPolicyType::DisableSend;
if !CONFIG.sends_allowed() || OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
if !CONFIG.sends_allowed()
|| OrgPolicy::is_applicable_to_user(user_uuid, OrgPolicyType::DisableSend, None, conn).await
{
err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
}
Ok(())
@@ -82,10 +90,10 @@ fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult
/// but is allowed to remove this option from an existing Send.
///
/// Ref: https://bitwarden.com/help/article/policies/#send-options
fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let hide_email = data.HideEmail.unwrap_or(false);
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn) {
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn).await {
err!(
"Due to an Enterprise Policy, you are not allowed to hide your email address \
from recipients when creating or editing a Send."
@@ -134,9 +142,9 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
}
#[get("/sends")]
fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
let sends = Send::find_by_user(&headers.user.uuid, &conn);
let sends_json: Vec<Value> = sends.iter().map(|s| s.to_json()).collect();
async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
let sends = Send::find_by_user(&headers.user.uuid, &mut conn);
let sends_json: Vec<Value> = sends.await.iter().map(|s| s.to_json()).collect();
Json(json!({
"Data": sends_json,
@@ -146,8 +154,8 @@ fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
}
#[get("/sends/<uuid>")]
fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &conn) {
async fn get_send(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &mut conn).await {
Some(send) => send,
None => err!("Send not found"),
};
@@ -160,50 +168,53 @@ fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
}
#[post("/sends", data = "<data>")]
fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn post_send(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn)?;
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
if data.Type == SendType::File as i32 {
err!("File sends should use /api/sends/file")
}
let mut send = create_send(data, headers.user.uuid)?;
send.save(&conn)?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn));
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
#[derive(FromForm)]
struct UploadData<'f> {
model: Json<crate::util::UpCase<SendData>>,
data: TempFile<'f>,
}
#[derive(FromForm)]
struct UploadDataV2<'f> {
data: TempFile<'f>,
}
// @deprecated Mar 25 2021: This method has been deprecated in favor of direct uploads (v2).
// This method still exists to support older clients; it will probably be removed at some point.
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L164-L167
#[post("/sends/file", format = "multipart/form-data", data = "<data>")]
fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let boundary = content_type.params().next().expect("No boundary provided").1;
let UploadData {
model,
mut data,
} = data.into_inner();
let model = model.into_inner().data;
let mut mpart = Multipart::with_body(data.open(), boundary);
// First entry is the SendData JSON
let mut model_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "model" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let mut buf = String::new();
model_entry.data.read_to_string(&mut buf)?;
let data = serde_json::from_str::<crate::util::UpCase<SendData>>(&buf)?;
enforce_disable_hide_email_policy(&data.data, &headers, &conn)?;
// Get the file length and add an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
enforce_disable_hide_email_policy(&model, &headers, &mut conn).await?;
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn);
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
@@ -212,55 +223,126 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
None => SIZE_525_MB,
};
// Create the Send
let mut send = create_send(data.data, headers.user.uuid)?;
let file_id = crate::crypto::generate_send_id();
let mut send = create_send(model, headers.user.uuid)?;
if send.atype != SendType::File as i32 {
err!("Send content is not a file");
}
let file_path = Path::new(&CONFIG.sends_folder()).join(&send.uuid).join(&file_id);
let size = data.len();
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
}
// Read the data entry and save the file
let mut data_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "data" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let file_id = crate::crypto::generate_send_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
let size = match data_entry.data.save().memory_threshold(0).size_limit(size_limit).with_path(&file_path) {
SaveResult::Full(SavedData::File(_, size)) => size as i32,
SaveResult::Full(other) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment is not a file: {:?}", other));
}
SaveResult::Partial(_, reason) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment storage limit exceeded with this file: {:?}", reason));
}
SaveResult::Error(e) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Error: {:?}", e));
}
};
if let Err(_err) = data.persist_to(&file_path).await {
data.move_copy_to(file_path).await?
}
// Set ID and sizes
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id));
o.insert(String::from("Size"), Value::Number(size.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size as i32)));
}
send.data = serde_json::to_string(&data_value)?;
// Save the changes in the database
send.save(&conn)?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L190
#[post("/sends/file/v2", data = "<data>")]
async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data = data.into_inner().data;
if data.Type != SendType::File as i32 {
err!("Send content is not a file");
}
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let file_length = match &data.FileLength {
Some(m) => Some(m.into_i32()?),
_ => None,
};
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
std::cmp::Ord::max(left as u64, SIZE_525_MB)
}
None => SIZE_525_MB,
};
if file_length.is_some() && file_length.unwrap() as u64 > size_limit {
err!("Attachment storage limit exceeded with this file");
}
let mut send = create_send(data, headers.user.uuid)?;
let file_id = crate::crypto::generate_send_id();
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id.clone()));
o.insert(String::from("Size"), Value::Number(file_length.unwrap().into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length.unwrap())));
}
send.data = serde_json::to_string(&data_value)?;
send.save(&mut conn).await?;
Ok(Json(json!({
"fileUploadType": 0, // 0 == Direct | 1 == Azure
"object": "send-fileUpload",
"url": format!("/sends/{}/file/{}", send.uuid, file_id),
"sendResponse": send.to_json()
})))
}
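Putting the two endpoints together, the v2 upload is a two-step exchange (a hedged outline; the data form field matches UploadDataV2 above):

// 1. POST /api/sends/file/v2 with the Send metadata (Type = 1 for a file Send,
//    FileLength set) -> { "fileUploadType": 0, "url": "/sends/<uuid>/file/<file_id>", ... }
// 2. POST the file bytes as the multipart/form-data field `data` to the returned
//    url (handled by post_send_file_v2_data, next).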
// https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L243
#[post("/sends/<send_uuid>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
async fn post_send_file_v2_data(
send_uuid: String,
file_id: String,
data: Form<UploadDataV2<'_>>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut data = data.into_inner();
if let Some(send) = Send::find_by_uuid(&send_uuid, &mut conn).await {
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send_uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
} else {
err!("Send not found. Unable to save the file.");
}
Ok(())
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendAccessData {
@@ -268,8 +350,14 @@ pub struct SendAccessData {
}
#[post("/sends/access/<access_id>", data = "<data>")]
fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &conn) {
async fn post_access(
access_id: String,
data: JsonUpcase<SendAccessData>,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -297,8 +385,8 @@ fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn
if send.password_hash.is_some() {
match data.into_inner().data.Password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password."),
None => err_code!("Password not provided", 401),
Some(_) => err!("Invalid password", format!("IP: {}.", ip.ip)),
None => err_code!("Password not provided", format!("IP: {}.", ip.ip), 401),
}
}
@@ -307,20 +395,23 @@ fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn
send.access_count += 1;
}
send.save(&conn)?;
send.save(&mut conn).await?;
Ok(Json(send.to_json_access(&conn)))
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json_access(&mut conn).await))
}
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
fn post_access_file(
async fn post_access_file(
send_id: String,
file_id: String,
data: JsonUpcase<SendAccessData>,
host: Host,
conn: DbConn,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_uuid(&send_id, &conn) {
let mut send = match Send::find_by_uuid(&send_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -355,7 +446,9 @@ fn post_access_file(
send.access_count += 1;
send.save(&conn)?;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
@@ -367,23 +460,29 @@ fn post_access_file(
}
#[get("/sends/<send_id>/<file_id>?<t>")]
fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
async fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(&t) {
if claims.sub == format!("{}/{}", send_id, file_id) {
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok();
if claims.sub == format!("{send_id}/{file_id}") {
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).await.ok();
}
}
None
}
#[put("/sends/<id>", data = "<data>")]
fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn put_send(
id: String,
data: JsonUpcase<SendData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn)?;
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn) {
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -430,15 +529,15 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
send.set_password(Some(&password));
}
send.save(&conn)?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}
#[delete("/sends/<id>")]
fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &conn) {
async fn delete_send(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -447,17 +546,17 @@ fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyR
err!("Send is not owned by user")
}
send.delete(&conn)?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn));
send.delete(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&mut conn).await).await;
Ok(())
}
#[put("/sends/<id>/remove-password")]
fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn put_remove_password(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn) {
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -467,8 +566,8 @@ fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -
}
send.set_password(None);
send.save(&conn)?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
Ok(Json(send.to_json()))
}

src/api/core/two_factor/authenticator.rs

@@ -1,15 +1,16 @@
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{
core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase,
NumberOrString, PasswordData,
},
auth::{ClientIp, Headers},
crypto,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
};
@@ -21,7 +22,7 @@ pub fn routes() -> Vec<Route> {
}
#[post("/two-factor/get-authenticator", data = "<data>")]
fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -30,11 +31,11 @@ fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn
}
let type_ = TwoFactorType::Authenticator as i32;
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn);
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await;
let (enabled, key) = match twofactor {
Some(tf) => (true, tf.data),
_ => (false, BASE32.encode(&crypto::get_random(vec![0u8; 20]))),
_ => (false, crypto::encode_random_bytes::<20>(BASE32)),
};
Ok(Json(json!({
@@ -53,11 +54,10 @@ struct EnableAuthenticatorData {
}
#[post("/two-factor/authenticator", data = "<data>")]
fn activate_authenticator(
async fn activate_authenticator(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
@@ -81,9 +81,11 @@ fn activate_authenticator(
}
// Validate the token provided with the key, and save new twofactor
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn)?;
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &headers.ip, &mut conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -93,30 +95,35 @@ fn activate_authenticator(
}
#[put("/two-factor/authenticator", data = "<data>")]
fn activate_authenticator_put(
async fn activate_authenticator_put(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
conn: DbConn,
) -> JsonResult {
activate_authenticator(data, headers, ip, conn)
activate_authenticator(data, headers, conn).await
}
pub fn validate_totp_code_str(
pub async fn validate_totp_code_str(
user_uuid: &str,
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> EmptyResult {
if !totp_code.chars().all(char::is_numeric) {
err!("TOTP code is not a number");
}
validate_totp_code(user_uuid, totp_code, secret, ip, conn)
validate_totp_code(user_uuid, totp_code, secret, ip, conn).await
}
pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
pub async fn validate_totp_code(
user_uuid: &str,
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &mut DbConn,
) -> EmptyResult {
use totp_lite::{totp_custom, Sha1};
let decoded_secret = match BASE32.decode(secret.as_bytes()) {
@@ -124,15 +131,16 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &C
Err(_) => err!("Invalid TOTP secret"),
};
let mut twofactor = match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn) {
Some(tf) => tf,
_ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
};
let mut twofactor =
match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn).await {
Some(tf) => tf,
_ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
};
// The amount of steps back and forward in time
// Also check if we need to disable time drifted TOTP codes.
// If that is the case, we set the steps to 0 so only the current TOTP is valid.
let steps = !CONFIG.authenticator_disable_time_drift() as i64;
let steps = i64::from(!CONFIG.authenticator_disable_time_drift());
// Get the current system time in UNIX Epoch (UTC)
let current_time = chrono::Utc::now();
@@ -147,7 +155,7 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &C
let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
// Check that the given code equals the generated one and that the time_step is larger than the one last used.
if generated == totp_code && time_step > twofactor.last_used as i64 {
if generated == totp_code && time_step > i64::from(twofactor.last_used) {
// If the step does not equal 0, the time has drifted on either the server or client side.
if step != 0 {
warn!("TOTP Time drift detected. The step offset is {}", step);
@@ -156,14 +164,24 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &C
// Save the last used time step so only totp time steps higher than this one are allowed.
// This will also save a newly created twofactor if the code is correct.
twofactor.last_used = time_step as i32;
twofactor.save(conn)?;
twofactor.save(conn).await?;
return Ok(());
} else if generated == totp_code && time_step <= twofactor.last_used as i64 {
} else if generated == totp_code && time_step <= i64::from(twofactor.last_used) {
warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
}
// Else no valid code received, deny access
err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
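The drift window above reduces to trying each 30-second step around the current time; a minimal sketch (function name hypothetical, totp_lite assumed) with the same replay protection on the last used step:

use totp_lite::{totp_custom, Sha1};

fn totp_matches(secret: &[u8], code: &str, now_secs: u64, steps: i64, last_used_step: i64) -> bool {
    for step in -steps..=steps {
        // One TOTP step is 30 seconds; check current, previous and next step.
        let time = (now_secs as i64 + step * 30) as u64;
        let generated = totp_custom::<Sha1>(30, 6, secret, time);
        // Replay protection: only accept a code from a time step newer than the last one used.
        if generated == code && (time / 30) as i64 > last_used_step {
            return true;
        }
    }
    false
}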

src/api/core/two_factor/duo.rs

@@ -1,14 +1,17 @@
use chrono::Utc;
use data_encoding::BASE64;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase, PasswordData},
api::{
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
PasswordData,
},
auth::Headers,
crypto,
db::{
models::{TwoFactor, TwoFactorType, User},
models::{EventType, TwoFactor, TwoFactorType, User},
DbConn,
},
error::MapResult,
@@ -89,14 +92,14 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";
#[post("/two-factor/get-duo", data = "<data>")]
fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let data = get_user_duo_data(&headers.user.uuid, &conn);
let data = get_user_duo_data(&headers.user.uuid, &mut conn).await;
let (enabled, data) = match data {
DuoStatus::Global(_) => (true, Some(DuoData::secret())),
@@ -152,7 +155,7 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
}
#[post("/two-factor/duo", data = "<data>")]
fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableDuoData = data.into_inner().data;
let mut user = headers.user;
@@ -163,7 +166,7 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
let (data, data_str) = if check_duo_fields_custom(&data) {
let data_req: DuoData = data.into();
let data_str = serde_json::to_string(&data_req)?;
duo_api_request("GET", "/auth/v2/check", "", &data_req).map_res("Failed to validate Duo credentials")?;
duo_api_request("GET", "/auth/v2/check", "", &data_req).await.map_res("Failed to validate Duo credentials")?;
(data_req.obscure(), data_str)
} else {
(DuoData::secret(), String::new())
@@ -171,9 +174,11 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
let type_ = TwoFactorType::Duo;
let twofactor = TwoFactor::new(user.uuid.clone(), type_, data_str);
twofactor.save(&conn)?;
twofactor.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -185,11 +190,11 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
}
#[put("/two-factor/duo", data = "<data>")]
fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn)
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn).await
}
fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
use reqwest::{header, Method};
use std::str::FromStr;
@@ -209,7 +214,8 @@ fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> Em
.basic_auth(username, Some(password))
.header(header::USER_AGENT, "vaultwarden:Duo/1.0 (Rust)")
.header(header::DATE, date)
.send()?
.send()
.await?
.error_for_status()?;
Ok(())
@@ -222,11 +228,11 @@ const AUTH_PREFIX: &str = "AUTH";
const DUO_PREFIX: &str = "TX";
const APP_PREFIX: &str = "APP";
fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
let type_ = TwoFactorType::Duo as i32;
// If the user doesn't have an entry, disabled
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn).await {
Some(t) => t,
None => return DuoStatus::Disabled(DuoData::global().is_some()),
};
@@ -246,41 +252,47 @@ fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
}
// let (ik, sk, ak, host) = get_duo_keys();
fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, String, String, String)> {
let data = User::find_by_mail(email, conn)
.and_then(|u| get_user_duo_data(&u.uuid, conn).data())
.or_else(DuoData::global)
.map_res("Can't fetch Duo keys")?;
async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiResult<(String, String, String, String)> {
let data = match User::find_by_mail(email, conn).await {
Some(u) => get_user_duo_data(&u.uuid, conn).await.data(),
_ => DuoData::global(),
}
.map_res("Can't fetch Duo Keys")?;
Ok((data.ik, data.sk, CONFIG.get_duo_akey(), data.host))
}
pub fn generate_duo_signature(email: &str, conn: &DbConn) -> ApiResult<(String, String)> {
pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult<(String, String)> {
let now = Utc::now().timestamp();
let (ik, sk, ak, host) = get_duo_keys_email(email, conn)?;
let (ik, sk, ak, host) = get_duo_keys_email(email, conn).await?;
let duo_sign = sign_duo_values(&sk, email, &ik, DUO_PREFIX, now + DUO_EXPIRE);
let app_sign = sign_duo_values(&ak, email, &ik, APP_PREFIX, now + APP_EXPIRE);
Ok((format!("{}:{}", duo_sign, app_sign), host))
Ok((format!("{duo_sign}:{app_sign}"), host))
}
fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64) -> String {
let val = format!("{}|{}|{}", email, ikey, expire);
let val = format!("{email}|{ikey}|{expire}");
let cookie = format!("{}|{}", prefix, BASE64.encode(val.as_bytes()));
format!("{}|{}", cookie, crypto::hmac_sign(key, &cookie))
}
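For orientation, the signed value produced above has this shape (all values hypothetical):

//   val    = "user@example.com|DIXXXXXXXXXXXXXXXXXX|1680447600"
//   cookie = "TX|" + base64(val)
//   signed = cookie + "|" + hmac_sign(key, cookie)
// validate_duo_login later splits the client response on ':' into the Duo-signed
// and app-signed halves and re-verifies each with parse_duo_values.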
pub fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_duo_login(email: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
// email is as entered by the user, so it needs to be normalized before
// comparison with auth_user below.
let email = &email.to_lowercase();
let split: Vec<&str> = response.split(':').collect();
if split.len() != 2 {
err!("Invalid response length");
err!(
"Invalid response length",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
let auth_sig = split[0];
@@ -288,13 +300,18 @@ pub fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyRe
let now = Utc::now().timestamp();
let (ik, sk, ak, _host) = get_duo_keys_email(email, conn)?;
let (ik, sk, ak, _host) = get_duo_keys_email(email, conn).await?;
let auth_user = parse_duo_values(&sk, auth_sig, &ik, AUTH_PREFIX, now)?;
let app_user = parse_duo_values(&ak, app_sig, &ik, APP_PREFIX, now)?;
if !crypto::ct_eq(&auth_user, app_user) || !crypto::ct_eq(&auth_user, email) {
err!("Error validating duo authentication")
err!(
"Error validating duo authentication",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
Ok(())
@@ -310,7 +327,7 @@ fn parse_duo_values(key: &str, val: &str, ikey: &str, prefix: &str, time: i64) -
let u_b64 = split[1];
let u_sig = split[2];
let sig = crypto::hmac_sign(key, &format!("{}|{}", u_prefix, u_b64));
let sig = crypto::hmac_sign(key, &format!("{u_prefix}|{u_b64}"));
if !crypto::ct_eq(crypto::hmac_sign(key, &sig), crypto::hmac_sign(key, u_sig)) {
err!("Duo signatures don't match")

src/api/core/two_factor/email.rs

@@ -1,13 +1,16 @@
use chrono::{Duration, NaiveDateTime, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, PasswordData},
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::Headers,
crypto,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
@@ -28,13 +31,13 @@ struct SendEmailLoginData {
/// User is trying to login and wants to use email 2FA.
/// Does not require Bearer token
#[post("/two-factor/send-email-login", data = "<data>")] // JsonResult
fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> EmptyResult {
async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, mut conn: DbConn) -> EmptyResult {
let data: SendEmailLoginData = data.into_inner().data;
use crate::db::models::User;
// Get the user
let user = match User::find_by_mail(&data.Email, &conn) {
let user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -48,31 +51,32 @@ fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> Empty
err!("Email 2FA is disabled")
}
send_token(&user.uuid, &conn)?;
send_token(&user.uuid, &mut conn).await?;
Ok(())
}
/// Generate the token, save the data for later verification and send email to user
pub fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::Email as i32;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, type_, conn).map_res("Two factor not found")?;
let mut twofactor =
TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await.map_res("Two factor not found")?;
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let mut twofactor_data = EmailTokenData::from_json(&twofactor.data)?;
twofactor_data.set_token(generated_token);
twofactor.data = twofactor_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?).await?;
Ok(())
}
/// When user clicks on Manage email 2FA show the user the related information
#[post("/two-factor/get-email", data = "<data>")]
fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -80,13 +84,14 @@ fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) ->
err!("Invalid password");
}
let (enabled, mfa_email) = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn) {
Some(x) => {
let twofactor_data = EmailTokenData::from_json(&x.data)?;
(true, json!(twofactor_data.email))
}
_ => (false, json!(null)),
};
let (enabled, mfa_email) =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &mut conn).await {
Some(x) => {
let twofactor_data = EmailTokenData::from_json(&x.data)?;
(true, json!(twofactor_data.email))
}
_ => (false, serde_json::value::Value::Null),
};
Ok(Json(json!({
"Email": mfa_email,
@@ -105,7 +110,7 @@ struct SendEmailData {
/// Send a verification email to the specified email address to check whether it exists/belongs to user.
#[post("/two-factor/send-email", data = "<data>")]
fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: SendEmailData = data.into_inner().data;
let user = headers.user;
@@ -119,8 +124,8 @@ fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -
let type_ = TwoFactorType::Email as i32;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
tf.delete(&conn)?;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
tf.delete(&mut conn).await?;
}
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
@@ -128,9 +133,9 @@ fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -
// Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::EmailVerificationChallenge, twofactor_data.to_json());
twofactor.save(&conn)?;
twofactor.save(&mut conn).await?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?).await?;
Ok(())
}
@@ -145,7 +150,7 @@ struct EmailData {
/// Verify email belongs to user and can be used for 2FA email codes.
#[put("/two-factor/email", data = "<data>")]
fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EmailData = data.into_inner().data;
let mut user = headers.user;
@@ -154,7 +159,8 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
}
let type_ = TwoFactorType::EmailVerificationChallenge as i32;
let mut twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).map_res("Two factor not found")?;
let mut twofactor =
TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await.map_res("Two factor not found")?;
let mut email_data = EmailTokenData::from_json(&twofactor.data)?;
@@ -170,9 +176,11 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
email_data.reset_token();
twofactor.atype = TwoFactorType::Email as i32;
twofactor.data = email_data.to_json();
twofactor.save(&conn)?;
twofactor.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Email": email_data.email,
@@ -182,13 +190,19 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
}
/// Validate the email code when used as TwoFactor token mechanism
pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &mut DbConn) -> EmptyResult {
let mut email_data = EmailTokenData::from_json(data)?;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
.await
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!("No token available"),
_ => err!(
"No token available",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
};
if !crypto::ct_eq(issued_token, token) {
@@ -197,23 +211,34 @@ pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &
email_data.reset_token();
}
twofactor.data = email_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
err!("Token is invalid")
err!(
"Token is invalid",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
email_data.reset_token();
twofactor.data = email_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
let date = NaiveDateTime::from_timestamp(email_data.token_sent, 0);
let date = NaiveDateTime::from_timestamp_opt(email_data.token_sent, 0).expect("Email token timestamp invalid.");
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
err!("Token has expired")
err!(
"Token has expired",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
Ok(())
}
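
The expiry rule at the end of validate_email_code_str is worth isolating; a sketch using the same chrono calls (the 300-second limit is an illustrative value, not the CONFIG default):

use chrono::{Duration, NaiveDateTime, Utc};

// True when a token issued at `token_sent` (unix seconds) is older than
// `max_secs`; from_timestamp_opt replaces the panicking from_timestamp.
fn token_expired(token_sent: i64, max_secs: i64) -> bool {
    let sent = NaiveDateTime::from_timestamp_opt(token_sent, 0).expect("Email token timestamp invalid.");
    sent + Duration::seconds(max_secs) < Utc::now().naive_utc()
}

// e.g. a token sent 10 minutes ago with a 300s limit:
// token_expired(Utc::now().timestamp() - 600, 300) == true
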
/// Data stored in the TwoFactor table in the db
#[derive(Serialize, Deserialize)]
pub struct EmailTokenData {
@@ -279,7 +304,7 @@ pub fn obscure_email(email: &str) -> String {
_ => {
let stars = "*".repeat(name_size - 2);
name.truncate(2);
format!("{}{}", name, stars)
format!("{name}{stars}")
}
};
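
Hunks like the one above mechanically convert positional format arguments to the inline captures stabilized in Rust 1.58; the two forms are equivalent:

fn main() {
    let name = String::from("ja");
    let stars = "*".repeat(3);
    // Positional arguments and inline captures produce identical output.
    assert_eq!(format!("{}{}", name, stars), format!("{name}{stars}"));
}
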

View File

@@ -1,12 +1,12 @@
use chrono::{Duration, Utc};
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::Headers,
api::{core::log_user_event, JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::{ClientHeaders, Headers},
crypto,
db::{models::*, DbConn, DbPool},
mail, CONFIG,
@@ -15,17 +15,22 @@ use crate::{
pub mod authenticator;
pub mod duo;
pub mod email;
pub mod u2f;
pub mod webauthn;
pub mod yubikey;
pub fn routes() -> Vec<Route> {
let mut routes = routes![get_twofactor, get_recover, recover, disable_twofactor, disable_twofactor_put,];
let mut routes = routes![
get_twofactor,
get_recover,
recover,
disable_twofactor,
disable_twofactor_put,
get_device_verification_settings,
];
routes.append(&mut authenticator::routes());
routes.append(&mut duo::routes());
routes.append(&mut email::routes());
routes.append(&mut u2f::routes());
routes.append(&mut webauthn::routes());
routes.append(&mut yubikey::routes());
@@ -33,8 +38,8 @@ pub fn routes() -> Vec<Route> {
}
#[get("/two-factor")]
fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn);
async fn get_twofactor(headers: Headers, mut conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &mut conn).await;
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Json(json!({
@@ -68,13 +73,13 @@ struct RecoverTwoFactor {
}
#[post("/two-factor/recover", data = "<data>")]
fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
async fn recover(data: JsonUpcase<RecoverTwoFactor>, client_headers: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner().data;
use crate::db::models::User;
// Get the user
let mut user = match User::find_by_mail(&data.Email, &conn) {
let mut user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -90,19 +95,28 @@ fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
}
// Remove all twofactors from the user
TwoFactor::delete_all_by_user(&user.uuid, &conn)?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
log_user_event(
EventType::UserRecovered2fa as i32,
&user.uuid,
client_headers.device_type,
&client_headers.ip.ip,
&mut conn,
)
.await;
// Remove the recovery code, not needed without twofactors
user.totp_recover = None;
user.save(&conn)?;
Ok(Json(json!({})))
user.save(&mut conn).await?;
Ok(Json(Value::Object(serde_json::Map::new())))
}
fn _generate_recover_code(user: &mut User, conn: &DbConn) {
async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
if user.totp_recover.is_none() {
let totp_recover = BASE32.encode(&crypto::get_random(vec![0u8; 20]));
let totp_recover = crypto::encode_random_bytes::<20>(BASE32);
user.totp_recover = Some(totp_recover);
user.save(conn).ok();
user.save(conn).await.ok();
}
}
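
A sketch of what crypto::encode_random_bytes::<20>(BASE32) plausibly reduces to, assuming the rand and data_encoding crates (the helper's real internals are not shown in this diff):

use data_encoding::BASE32;
use rand::RngCore;

// 20 random bytes, BASE32-encoded, yield a 32-character recovery code.
fn recovery_code() -> String {
    let mut bytes = [0u8; 20];
    rand::thread_rng().fill_bytes(&mut bytes);
    BASE32.encode(&bytes)
}
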
@@ -114,7 +128,7 @@ struct DisableTwoFactorData {
}
#[post("/two-factor/disable", data = "<data>")]
fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let user = headers.user;
@@ -125,23 +139,26 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
let type_ = data.Type.into_i32()?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
twofactor.delete(&conn)?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
twofactor.delete(&mut conn).await?;
log_user_event(EventType::UserDisabled2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn)
.await;
}
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).is_empty();
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty();
if twofactor_disabled {
let policy_type = OrgPolicyType::TwoFactorAuthentication;
let org_list = UserOrganization::find_by_user_and_policy(&user.uuid, policy_type, &conn);
for user_org in org_list.into_iter() {
for user_org in
UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &mut conn)
.await
.into_iter()
{
if user_org.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name)?;
let org = Organization::find_by_uuid(&user_org.org_uuid, &mut conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
user_org.delete(&conn)?;
user_org.delete(&mut conn).await?;
}
}
}
@@ -154,18 +171,18 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
}
#[put("/two-factor/disable", data = "<data>")]
fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn)
async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn).await
}
pub fn send_incomplete_2fa_notifications(pool: DbPool) {
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
debug!("Sending notifications for incomplete 2FA logins");
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return;
}
let conn = match pool.get() {
let mut conn = match pool.get().await {
Ok(conn) => conn,
_ => {
error!("Failed to get DB connection in send_incomplete_2fa_notifications()");
@@ -175,15 +192,35 @@ pub fn send_incomplete_2fa_notifications(pool: DbPool) {
let now = Utc::now().naive_utc();
let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&(now - time_limit), &conn);
let time_before = now - time_limit;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &mut conn).await;
for login in incomplete_logins {
let user = User::find_by_uuid(&login.user_uuid, &conn).expect("User not found");
let user = User::find_by_uuid(&login.user_uuid, &mut conn).await.expect("User not found");
info!(
"User {} did not complete a 2FA login within the configured time limit. IP: {}",
user.email, login.ip_address
);
mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
.await
.expect("Error sending incomplete 2FA email");
login.delete(&conn).expect("Error deleting incomplete 2FA record");
login.delete(&mut conn).await.expect("Error deleting incomplete 2FA record");
}
}
// This function currently is just a dummy and the actual part is not implemented yet.
// This also prevents 404 errors.
//
// See the following Bitwarden PR's regarding this feature.
// https://github.com/bitwarden/clients/pull/2843
// https://github.com/bitwarden/clients/pull/2839
// https://github.com/bitwarden/server/pull/2016
//
// The HTML part is hidden via the CSS patches done via the bw_web_build repo
#[get("/two-factor/get-device-verification-settings")]
fn get_device_verification_settings(_headers: Headers, _conn: DbConn) -> Json<Value> {
Json(json!({
"isDeviceVerificationSectionEnabled":false,
"unknownDeviceVerificationEnabled":false,
"object":"deviceVerificationSettings"
}))
}

View File

@@ -1,352 +0,0 @@
use once_cell::sync::Lazy;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use u2f::{
messages::{RegisterResponse, SignResponse, U2fSignRequest},
protocol::{Challenge, U2f},
register::Registration,
};
use crate::{
api::{
core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase, NumberOrString,
PasswordData,
},
auth::Headers,
db::{
models::{TwoFactor, TwoFactorType},
DbConn,
},
error::Error,
CONFIG,
};
const U2F_VERSION: &str = "U2F_V2";
static APP_ID: Lazy<String> = Lazy::new(|| format!("{}/app-id.json", &CONFIG.domain()));
static U2F: Lazy<U2f> = Lazy::new(|| U2f::new(APP_ID.clone()));
pub fn routes() -> Vec<Route> {
routes![generate_u2f, generate_u2f_challenge, activate_u2f, activate_u2f_put, delete_u2f,]
}
#[post("/two-factor/get-u2f", data = "<data>")]
fn generate_u2f(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. U2F disabled")
}
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let (enabled, keys) = get_u2f_registrations(&headers.user.uuid, &conn)?;
let keys_json: Vec<Value> = keys.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": enabled,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
#[post("/two-factor/get-u2f-challenge", data = "<data>")]
fn generate_u2f_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let _type = TwoFactorType::U2fRegisterChallenge;
let challenge = _create_u2f_challenge(&headers.user.uuid, _type, &conn).challenge;
Ok(Json(json!({
"UserId": headers.user.uuid,
"AppId": APP_ID.to_string(),
"Challenge": challenge,
"Version": U2F_VERSION,
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableU2FData {
Id: NumberOrString,
// 1..5
Name: String,
MasterPasswordHash: String,
DeviceResponse: String,
}
// This struct is referenced from the U2F lib
// because it doesn't implement Deserialize
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
#[serde(remote = "Registration")]
struct RegistrationDef {
key_handle: Vec<u8>,
pub_key: Vec<u8>,
attestation_cert: Option<Vec<u8>>,
device_name: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct U2FRegistration {
pub id: i32,
pub name: String,
#[serde(with = "RegistrationDef")]
pub reg: Registration,
pub counter: u32,
compromised: bool,
pub migrated: Option<bool>,
}
impl U2FRegistration {
fn to_json(&self) -> Value {
json!({
"Id": self.id,
"Name": self.name,
"Compromised": self.compromised,
})
}
}
// This struct is copied from the U2F lib
// to add an optional error code
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RegisterResponseCopy {
pub registration_data: String,
pub version: String,
pub client_data: String,
pub error_code: Option<NumberOrString>,
}
impl From<RegisterResponseCopy> for RegisterResponse {
fn from(r: RegisterResponseCopy) -> RegisterResponse {
RegisterResponse {
registration_data: r.registration_data,
version: r.version,
client_data: r.client_data,
}
}
}
#[post("/two-factor/u2f", data = "<data>")]
fn activate_u2f(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableU2FData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let tf_type = TwoFactorType::U2fRegisterChallenge as i32;
let tf_challenge = match TwoFactor::find_by_user_and_type(&user.uuid, tf_type, &conn) {
Some(c) => c,
None => err!("Can't recover challenge"),
};
let challenge: Challenge = serde_json::from_str(&tf_challenge.data)?;
tf_challenge.delete(&conn)?;
let response: RegisterResponseCopy = serde_json::from_str(&data.DeviceResponse)?;
let error_code = response.error_code.clone().map_or("0".into(), NumberOrString::into_string);
if error_code != "0" {
err!("Error registering U2F token")
}
let registration = U2F.register_response(challenge, response.into())?;
let full_registration = U2FRegistration {
id: data.Id.into_i32()?,
name: data.Name,
reg: registration,
compromised: false,
counter: 0,
migrated: None,
};
let mut regs = get_u2f_registrations(&user.uuid, &conn)?.1;
// TODO: Check that there is no repeat Id
regs.push(full_registration);
save_u2f_registrations(&user.uuid, &regs, &conn)?;
_generate_recover_code(&mut user, &conn);
let keys_json: Vec<Value> = regs.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
#[put("/two-factor/u2f", data = "<data>")]
fn activate_u2f_put(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_u2f(data, headers, conn)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct DeleteU2FData {
Id: NumberOrString,
MasterPasswordHash: String,
}
#[delete("/two-factor/u2f", data = "<data>")]
fn delete_u2f(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: DeleteU2FData = data.into_inner().data;
let id = data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let type_ = TwoFactorType::U2f as i32;
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, type_, &conn) {
Some(tf) => tf,
None => err!("U2F data not found!"),
};
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&tf.data) {
Ok(d) => d,
Err(_) => err!("Error parsing U2F data"),
};
data.retain(|r| r.id != id);
let new_data_str = serde_json::to_string(&data)?;
tf.data = new_data_str;
tf.save(&conn)?;
let keys_json: Vec<Value> = data.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
fn _create_u2f_challenge(user_uuid: &str, type_: TwoFactorType, conn: &DbConn) -> Challenge {
let challenge = U2F.generate_challenge().unwrap();
TwoFactor::new(user_uuid.into(), type_, serde_json::to_string(&challenge).unwrap())
.save(conn)
.expect("Error saving challenge");
challenge
}
fn save_u2f_registrations(user_uuid: &str, regs: &[U2FRegistration], conn: &DbConn) -> EmptyResult {
TwoFactor::new(user_uuid.into(), TwoFactorType::U2f, serde_json::to_string(regs)?).save(conn)
}
fn get_u2f_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<U2FRegistration>), Error> {
let type_ = TwoFactorType::U2f as i32;
let (enabled, regs) = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
Some(tf) => (tf.enabled, tf.data),
None => return Ok((false, Vec::new())), // If no data, return empty list
};
let data = match serde_json::from_str(&regs) {
Ok(d) => d,
Err(_) => {
// If error, try old format
let mut old_regs = _old_parse_registrations(&regs);
if old_regs.len() != 1 {
err!("The old U2F format only allows one device")
}
// Convert to new format
let new_regs = vec![U2FRegistration {
id: 1,
name: "Unnamed U2F key".into(),
reg: old_regs.remove(0),
compromised: false,
counter: 0,
migrated: None,
}];
// Save new format
save_u2f_registrations(user_uuid, &new_regs, conn)?;
new_regs
}
};
Ok((enabled, data))
}
fn _old_parse_registrations(registrations: &str) -> Vec<Registration> {
#[derive(Deserialize)]
struct Helper(#[serde(with = "RegistrationDef")] Registration);
let regs: Vec<Value> = serde_json::from_str(registrations).expect("Can't parse Registration data");
regs.into_iter().map(|r| serde_json::from_value(r).unwrap()).map(|Helper(r)| r).collect()
}
pub fn generate_u2f_login(user_uuid: &str, conn: &DbConn) -> ApiResult<U2fSignRequest> {
let challenge = _create_u2f_challenge(user_uuid, TwoFactorType::U2fLoginChallenge, conn);
let registrations: Vec<_> = get_u2f_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.reg).collect();
if registrations.is_empty() {
err!("No U2F devices registered")
}
Ok(U2F.sign_request(challenge, registrations))
}
pub fn validate_u2f_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
let challenge_type = TwoFactorType::U2fLoginChallenge as i32;
let tf_challenge = TwoFactor::find_by_user_and_type(user_uuid, challenge_type, conn);
let challenge = match tf_challenge {
Some(tf_challenge) => {
let challenge: Challenge = serde_json::from_str(&tf_challenge.data)?;
tf_challenge.delete(conn)?;
challenge
}
None => err!("Can't recover login challenge"),
};
let response: SignResponse = serde_json::from_str(response)?;
let mut registrations = get_u2f_registrations(user_uuid, conn)?.1;
if registrations.is_empty() {
err!("No U2F devices registered")
}
for reg in &mut registrations {
let response = U2F.sign_response(challenge.clone(), reg.reg.clone(), response.clone(), reg.counter);
match response {
Ok(new_counter) => {
reg.counter = new_counter;
save_u2f_registrations(user_uuid, &registrations, conn)?;
return Ok(());
}
Err(u2f::u2ferror::U2fError::CounterTooLow) => {
reg.compromised = true;
save_u2f_registrations(user_uuid, &registrations, conn)?;
err!("This device might be compromised!");
}
Err(e) => {
warn!("E {:#}", e);
// break;
}
}
}
err!("error verifying response")
}
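
The CounterTooLow branch above encodes U2F's clone-detection rule: an authenticator's signature counter must strictly increase between logins. The u2f library enforces this internally; a minimal sketch of the rule itself:

// Returns the new counter to persist, or an error when the reported
// counter did not advance, which suggests a cloned authenticator.
fn check_counter(stored: u32, reported: u32) -> Result<u32, &'static str> {
    if reported > stored {
        Ok(reported)
    } else {
        Err("counter too low; device might be compromised")
    }
}
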

View File

@@ -1,16 +1,17 @@
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use url::Url;
use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState, RegistrationState, Webauthn};
use crate::{
api::{
core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
},
auth::Headers,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::Error,
@@ -21,6 +22,28 @@ pub fn routes() -> Vec<Route> {
routes![get_webauthn, generate_webauthn_challenge, activate_webauthn, activate_webauthn_put, delete_webauthn,]
}
// Some old u2f structs still needed for migrating from u2f to WebAuthn
// Both `struct Registration` and `struct U2FRegistration` can be removed if we remove the u2f to WebAuthn migration
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Registration {
pub key_handle: Vec<u8>,
pub pub_key: Vec<u8>,
pub attestation_cert: Option<Vec<u8>>,
pub device_name: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct U2FRegistration {
pub id: i32,
pub name: String,
#[serde(with = "Registration")]
pub reg: Registration,
pub counter: u32,
compromised: bool,
pub migrated: Option<bool>,
}
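
The #[serde(remote = ...)] / #[serde(with = ...)] pairing seen in these U2F-to-WebAuthn migration structs is serde's remote-derive pattern for foreign types; a minimal sketch with a hypothetical foreign Point:

use serde::{Deserialize, Serialize};

// Pretend `Point` lives in another crate and lacks serde impls.
mod other_crate {
    pub struct Point {
        pub x: i32,
        pub y: i32,
    }
}

// Local mirror that teaches serde the foreign type's layout.
#[derive(Serialize, Deserialize)]
#[serde(remote = "other_crate::Point")]
struct PointDef {
    x: i32,
    y: i32,
}

#[derive(Serialize, Deserialize)]
struct Wrapper {
    #[serde(with = "PointDef")]
    point: other_crate::Point,
}
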
struct WebauthnConfig {
url: String,
origin: Url,
@@ -80,7 +103,7 @@ impl WebauthnRegistration {
}
#[post("/two-factor/get-webauthn", data = "<data>")]
fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
@@ -89,7 +112,7 @@ fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn)
err!("Invalid password");
}
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &conn)?;
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &mut conn).await?;
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -100,12 +123,13 @@ fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn)
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let registrations = get_webauthn_registrations(&headers.user.uuid, &conn)?
let registrations = get_webauthn_registrations(&headers.user.uuid, &mut conn)
.await?
.1
.into_iter()
.map(|r| r.credential.cred_id) // We return the credentialIds to the clients to avoid double registering
@@ -121,7 +145,7 @@ fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers,
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&conn)?;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
@@ -218,7 +242,7 @@ impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
}
#[post("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
@@ -228,10 +252,10 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
// Retrieve and delete the saved challenge state
let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
Some(tf) => {
let state: RegistrationState = serde_json::from_str(&tf.data)?;
tf.delete(&conn)?;
tf.delete(&mut conn).await?;
state
}
None => err!("Can't recover challenge"),
@@ -241,7 +265,7 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
let (credential, _data) =
WebauthnConfig::load().register_credential(&data.DeviceResponse.into(), &state, |_| Ok(false))?;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &conn)?.1;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &mut conn).await?.1;
// TODO: Check for repeated IDs
registrations.push(WebauthnRegistration {
id: data.Id.into_i32()?,
@@ -252,8 +276,12 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
});
// Save the registrations and return them
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?).save(&conn)?;
_generate_recover_code(&mut user, &conn);
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(&mut conn)
.await?;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -264,8 +292,8 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
}
#[put("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn)
async fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn).await
}
#[derive(Deserialize, Debug)]
@@ -276,16 +304,17 @@ struct DeleteU2FData {
}
#[delete("/two-factor/webauthn", data = "<data>")]
fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let id = data.data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn) {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let mut tf =
match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;
@@ -296,12 +325,13 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
let removed_item = data.remove(item_pos);
tf.data = serde_json::to_string(&data)?;
tf.save(&conn)?;
tf.save(&mut conn).await?;
drop(tf);
// If entry is migrated from u2f, delete the u2f entry as well
if let Some(mut u2f) = TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &conn) {
use crate::api::core::two_factor::u2f::U2FRegistration;
if let Some(mut u2f) =
TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &mut conn).await
{
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&u2f.data) {
Ok(d) => d,
Err(_) => err!("Error parsing U2F data"),
@@ -311,7 +341,7 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
let new_data_str = serde_json::to_string(&data)?;
u2f.data = new_data_str;
u2f.save(&conn)?;
u2f.save(&mut conn).await?;
}
let keys_json: Vec<Value> = data.iter().map(WebauthnRegistration::to_json).collect();
@@ -323,18 +353,21 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
})))
}
pub fn get_webauthn_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
pub async fn get_webauthn_registrations(
user_uuid: &str,
conn: &mut DbConn,
) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
let type_ = TwoFactorType::Webauthn as i32;
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
Some(tf) => Ok((tf.enabled, serde_json::from_str(&tf.data)?)),
None => Ok((false, Vec::new())), // If no data, return empty list
}
}
pub fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> JsonResult {
// Load saved credentials
let creds: Vec<Credential> =
get_webauthn_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.credential).collect();
get_webauthn_registrations(user_uuid, conn).await?.1.into_iter().map(|r| r.credential).collect();
if creds.is_empty() {
err!("No Webauthn devices registered")
@@ -346,27 +379,33 @@ pub fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
// Save the challenge state for later validation
TwoFactor::new(user_uuid.into(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
.save(conn)?;
.save(conn)
.await?;
// Return challenge to the clients
Ok(Json(serde_json::to_value(response.public_key)?))
}
pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
Some(tf) => {
let state: AuthenticationState = serde_json::from_str(&tf.data)?;
tf.delete(conn)?;
tf.delete(conn).await?;
state
}
None => err!("Can't recover login challenge"),
None => err!(
"Can't recover login challenge",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
};
let rsp: crate::util::UpCase<PublicKeyCredentialCopy> = serde_json::from_str(response)?;
let rsp: PublicKeyCredential = rsp.data.into();
let mut registrations = get_webauthn_registrations(user_uuid, conn)?.1;
let mut registrations = get_webauthn_registrations(user_uuid, conn).await?.1;
// If the credential we received is migrated from U2F, enable the U2F compatibility
//let use_u2f = registrations.iter().any(|r| r.migrated && r.credential.cred_id == rsp.raw_id.0);
@@ -377,10 +416,16 @@ pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -
reg.credential.counter = auth_data.counter;
TwoFactor::new(user_uuid.to_string(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(conn)?;
.save(conn)
.await?;
return Ok(());
}
}
err!("Credential not present")
err!(
"Credential not present",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}

View File

@@ -1,13 +1,16 @@
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use yubico::{config::Config, verify};
use crate::{
api::{core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, PasswordData},
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::Headers,
db::{
models::{TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
@@ -44,7 +47,7 @@ fn parse_yubikeys(data: &EnableYubikeyData) -> Vec<String> {
}
fn jsonify_yubikeys(yubikeys: Vec<String>) -> serde_json::Value {
let mut result = json!({});
let mut result = Value::Object(serde_json::Map::new());
for (i, key) in yubikeys.into_iter().enumerate() {
result[format!("Key{}", i + 1)] = Value::String(key);
@@ -64,21 +67,23 @@ fn get_yubico_credentials() -> Result<(String, String), Error> {
}
}
fn verify_yubikey_otp(otp: String) -> EmptyResult {
async fn verify_yubikey_otp(otp: String) -> EmptyResult {
let (yubico_id, yubico_secret) = get_yubico_credentials()?;
let config = Config::default().set_client_id(yubico_id).set_key(yubico_secret);
match CONFIG.yubico_server() {
Some(server) => verify(otp, config.set_api_hosts(vec![server])),
None => verify(otp, config),
Some(server) => {
tokio::task::spawn_blocking(move || verify(otp, config.set_api_hosts(vec![server]))).await.unwrap()
}
None => tokio::task::spawn_blocking(move || verify(otp, config)).await.unwrap(),
}
.map_res("Failed to verify OTP")
.and(Ok(()))
}
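
Since the yubico verifier is synchronous, the hunk above moves it onto tokio's blocking thread pool; the general shape, with a hypothetical blocking function standing in for verify:

// Offload a blocking call so it cannot stall the async worker threads.
async fn verify_async(otp: String) -> Result<(), String> {
    // spawn_blocking returns a JoinHandle; the outer unwrap propagates
    // panics from the blocking task, the inner Result is the verdict.
    tokio::task::spawn_blocking(move || blocking_verify(&otp)).await.unwrap()
}

// Stand-in for a synchronous verifier such as yubico::verify.
fn blocking_verify(_otp: &str) -> Result<(), String> {
    Ok(())
}
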
#[post("/two-factor/get-yubikey", data = "<data>")]
fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
// Make sure the credentials are set
get_yubico_credentials()?;
@@ -92,7 +97,7 @@ fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbCo
let user_uuid = &user.uuid;
let yubikey_type = TwoFactorType::YubiKey as i32;
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &conn);
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &mut conn).await;
if let Some(r) = r {
let yubikey_metadata: YubikeyMetadata = serde_json::from_str(&r.data)?;
@@ -113,7 +118,7 @@ fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbCo
}
#[post("/two-factor/yubikey", data = "<data>")]
fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableYubikeyData = data.into_inner().data;
let mut user = headers.user;
@@ -122,10 +127,11 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
}
// Check if we already have some data
let mut yubikey_data = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &conn) {
Some(data) => data,
None => TwoFactor::new(user.uuid.clone(), TwoFactorType::YubiKey, String::new()),
};
let mut yubikey_data =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &mut conn).await {
Some(data) => data,
None => TwoFactor::new(user.uuid.clone(), TwoFactorType::YubiKey, String::new()),
};
let yubikeys = parse_yubikeys(&data);
@@ -143,10 +149,10 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
continue;
}
verify_yubikey_otp(yubikey.to_owned()).map_res("Invalid Yubikey OTP provided")?;
verify_yubikey_otp(yubikey.to_owned()).await.map_res("Invalid Yubikey OTP provided")?;
}
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (&x[..12]).to_owned()).collect();
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect();
let yubikey_metadata = YubikeyMetadata {
Keys: yubikey_ids,
@@ -154,9 +160,11 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
};
yubikey_data.data = serde_json::to_string(&yubikey_metadata).unwrap();
yubikey_data.save(&conn)?;
yubikey_data.save(&mut conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
@@ -168,11 +176,11 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
}
#[put("/two-factor/yubikey", data = "<data>")]
fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn)
async fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn).await
}
pub fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {
pub async fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {
if response.len() != 44 {
err!("Invalid Yubikey OTP length");
}
@@ -184,7 +192,7 @@ pub fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResu
err!("Given Yubikey is not registered");
}
let result = verify_yubikey_otp(response.to_owned());
let result = verify_yubikey_otp(response.to_owned()).await;
match result {
Ok(_answer) => Ok(()),

View File

@@ -1,20 +1,25 @@
use std::{
collections::HashMap,
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::prelude::*,
net::{IpAddr, ToSocketAddrs},
sync::{Arc, RwLock},
net::IpAddr,
sync::Arc,
time::{Duration, SystemTime},
};
use bytes::{Bytes, BytesMut};
use futures::{stream::StreamExt, TryFutureExt};
use once_cell::sync::Lazy;
use regex::Regex;
use reqwest::{blocking::Client, blocking::Response, header};
use rocket::{
http::ContentType,
response::{Content, Redirect},
Route,
use reqwest::{
header::{self, HeaderMap, HeaderValue},
Client, Response,
};
use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
net::lookup_host,
};
use html5gum::{Emitter, EndTag, HtmlString, InfallibleTokenizer, Readable, StartTag, StringReader, Tokenizer};
use crate::{
error::Error,
@@ -25,48 +30,56 @@ use crate::{
pub fn routes() -> Vec<Route> {
match CONFIG.icon_service().as_str() {
"internal" => routes![icon_internal],
"bitwarden" => routes![icon_bitwarden],
"duckduckgo" => routes![icon_duckduckgo],
"google" => routes![icon_google],
_ => routes![icon_custom],
_ => routes![icon_external],
}
}
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = header::HeaderMap::new();
default_headers
.insert(header::USER_AGENT, header::HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers
.insert(header::ACCEPT, header::HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
let mut default_headers = HeaderMap::new();
default_headers.insert(header::USER_AGENT, HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(header::CACHE_CONTROL, HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, HeaderValue::from_static("no-cache"));
// Generate the cookie store
let cookie_store = Arc::new(Jar::default());
// Reuse the client between requests
get_reqwest_client_builder()
.cookie_provider(Arc::new(Jar::default()))
let client = get_reqwest_client_builder()
.cookie_provider(Arc::clone(&cookie_store))
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.build()
.expect("Failed to build icon client")
.default_headers(default_headers.clone());
match client.build() {
Ok(client) => client,
Err(e) => {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder()
.cookie_provider(cookie_store)
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.trust_dns(false)
.build()
.expect("Failed to build client")
}
}
});
// Build Regex only once since this takes a lot of time.
static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap());
static ICON_REL_BLACKLIST: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)mask-icon").unwrap());
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user-defined Regex to speed up matching.
static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new()));
static ICON_BLACKLIST_REGEX: Lazy<dashmap::DashMap<String, Regex>> = Lazy::new(dashmap::DashMap::new);
fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
if !is_valid_domain(domain) {
warn!("Invalid domain: {}", domain);
return None;
}
if is_domain_blacklisted(domain) {
if check_domain_blacklist_reason(domain).await.is_some() {
return None;
}
@@ -84,47 +97,28 @@ fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
}
#[get("/<domain>/icon.png")]
fn icon_custom(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG.icon_service())
async fn icon_external(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG._icon_service_url()).await
}
#[get("/<domain>/icon.png")]
fn icon_bitwarden(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.bitwarden.net/{}/icon.png")
}
#[get("/<domain>/icon.png")]
fn icon_duckduckgo(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.duckduckgo.com/ip3/{}.ico")
}
#[get("/<domain>/icon.png")]
fn icon_google(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://www.google.com/s2/favicons?domain={}&sz=32")
}
#[get("/<domain>/icon.png")]
fn icon_internal(domain: String) -> Cached<Content<Vec<u8>>> {
async fn icon_internal(domain: String) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
if !is_valid_domain(&domain) {
warn!("Invalid domain: {}", domain);
return Cached::ttl(
Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
);
}
match get_icon(&domain) {
match get_icon(&domain).await {
Some((icon, icon_type)) => {
Cached::ttl(Content(ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
Cached::ttl((ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
}
_ => Cached::ttl(
Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
),
_ => Cached::ttl((ContentType::new("image", "png"), FALLBACK_ICON.to_vec()), CONFIG.icon_cache_negttl(), true),
}
}
@@ -136,7 +130,7 @@ fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "_-.";
// If parsing the domain fails using Url, it will not work with reqwest.
if let Err(parse_error) = url::Url::parse(format!("https://{}", domain).as_str()) {
if let Err(parse_error) = url::Url::parse(format!("https://{domain}").as_str()) {
debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
return false;
} else if domain.is_empty()
@@ -264,68 +258,65 @@ mod tests {
}
}
fn is_domain_blacklisted(domain: &str) -> bool {
let mut is_blacklisted = CONFIG.icon_blacklist_non_global_ips()
&& (domain, 0)
.to_socket_addrs()
.map(|x| {
for ip_port in x {
if !is_global(ip_port.ip()) {
warn!("IP {} for domain '{}' is not a global IP!", ip_port.ip(), domain);
return true;
}
#[derive(Debug, Clone)]
enum DomainBlacklistReason {
Regex,
IP,
}
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn check_domain_blacklist_reason(domain: &str) -> Option<DomainBlacklistReason> {
// First check the blacklist regex if there is a match.
// This prevents the blocked domain(s) from being leaked via a DNS lookup.
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
// Use the pre-generated Regex stored in a Lazy HashMap if there's one, else generate it.
let is_match = if let Some(regex) = ICON_BLACKLIST_REGEX.get(&blacklist) {
regex.is_match(domain)
} else {
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone changes the regex via the admin interface.
if ICON_BLACKLIST_REGEX.len() >= 1 {
ICON_BLACKLIST_REGEX.clear();
}
// Generate the regex and store it in the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
let is_match = blacklist_regex.is_match(domain);
ICON_BLACKLIST_REGEX.insert(blacklist.clone(), blacklist_regex);
is_match
};
if is_match {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
return Some(DomainBlacklistReason::Regex);
}
}
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return Some(DomainBlacklistReason::IP);
}
false
})
.unwrap_or(false);
// Skip the regex check if the previous one is true already
if !is_blacklisted {
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
let mut regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
// Use the pre-generated Regex stored in a Lazy HashMap if there's one, else generate it.
let regex = if let Some(regex) = regex_hashmap.get(&blacklist) {
regex
} else {
drop(regex_hashmap);
let mut regex_hashmap_write = ICON_BLACKLIST_REGEX.write().unwrap();
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone changes the regex via the admin interface.
if regex_hashmap_write.len() >= 1 {
regex_hashmap_write.clear();
}
// Generate the regex and store it in the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
regex_hashmap_write.insert(blacklist.to_string(), blacklist_regex);
drop(regex_hashmap_write);
regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
regex_hashmap.get(&blacklist).unwrap()
};
// Use the pre-generated Regex stored in a Lazy HashMap.
if regex.is_match(domain) {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
is_blacklisted = true;
}
}
}
is_blacklisted
None
}
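
The #[cached] attribute on check_domain_blacklist_reason memoizes up to 16 results, keyed per domain string, for 60 seconds; the same knobs on a reduced, hypothetical function:

use cached::proc_macro::cached;

// Keyed by the owned String, capped at 16 entries, expiring after 60s,
// matching the attribute used above.
#[cached(key = "String", convert = r#"{ host.to_string() }"#, size = 16, time = 60)]
async fn host_is_private(host: &str) -> bool {
    // expensive lookup elided; illustrative rule only
    host.ends_with(".internal")
}
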
fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);
// Check for expiration of negatively cached copy
if icon_is_negcached(&path) {
if icon_is_negcached(&path).await {
return None;
}
if let Some(icon) = get_cached_icon(&path) {
if let Some(icon) = get_cached_icon(&path).await {
let icon_type = match get_icon_type(&icon) {
Some(x) => x,
_ => "x-icon",
@@ -338,31 +329,31 @@ fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
}
// Get the icon, or None in case of error
match download_icon(domain) {
match download_icon(domain).await {
Ok((icon, icon_type)) => {
save_icon(&path, &icon);
Some((icon, icon_type.unwrap_or("x-icon").to_string()))
save_icon(&path, &icon).await;
Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
}
Err(e) => {
warn!("Unable to download icon: {:?}", e);
let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]);
save_icon(&miss_indicator, &[]).await;
None
}
}
}
fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
// Check for expiration of successfully cached copy
if icon_is_expired(path) {
if icon_is_expired(path).await {
return None;
}
// Try to read the cached icon, and return it if it exists
if let Ok(mut f) = File::open(path) {
if let Ok(mut f) = File::open(path).await {
let mut buffer = Vec::new();
if f.read_to_end(&mut buffer).is_ok() {
if f.read_to_end(&mut buffer).await.is_ok() {
return Some(buffer);
}
}
@@ -370,22 +361,22 @@ fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
None
}
fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
let meta = symlink_metadata(path)?;
async fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
let meta = symlink_metadata(path).await?;
let modified = meta.modified()?;
let age = SystemTime::now().duration_since(modified)?;
Ok(ttl > 0 && ttl <= age.as_secs())
}
fn icon_is_negcached(path: &str) -> bool {
async fn icon_is_negcached(path: &str) -> bool {
let miss_indicator = path.to_owned() + ".miss";
let expired = file_is_expired(&miss_indicator, CONFIG.icon_cache_negttl());
let expired = file_is_expired(&miss_indicator, CONFIG.icon_cache_negttl()).await;
match expired {
// No longer negatively cached, drop the marker
Ok(true) => {
if let Err(e) = remove_file(&miss_indicator) {
if let Err(e) = remove_file(&miss_indicator).await {
error!("Could not remove negative cache indicator for icon {:?}: {:?}", path, e);
}
false
@@ -397,8 +388,8 @@ fn icon_is_negcached(path: &str) -> bool {
}
}
fn icon_is_expired(path: &str) -> bool {
let expired = file_is_expired(path, CONFIG.icon_cache_ttl());
async fn icon_is_expired(path: &str) -> bool {
let expired = file_is_expired(path, CONFIG.icon_cache_ttl()).await;
expired.unwrap_or(true)
}
@@ -416,91 +407,62 @@ impl Icon {
}
}
/// Iterates over the HTML document to find <base href="http://domain.tld">
/// When found, it stops iterating and writes the discovered base href back through the mutable `base_href` argument.
///
/// # Arguments
/// * `node` - A Parsed HTML document via html5ever::parse_document()
/// * `base_href` - a mutable url::Url which will be overwritten when a base href tag has been found.
///
fn get_base_href(node: &std::rc::Rc<markup5ever_rcdom::Node>, base_href: &mut url::Url) -> bool {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "base" {
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
fn get_favicons_node(
dom: InfallibleTokenizer<StringReader<'_>, FaviconEmitter>,
icons: &mut Vec<Icon>,
url: &url::Url,
) {
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
const ATTR_REL: &[u8] = b"rel";
const ATTR_HREF: &[u8] = b"href";
const ATTR_SIZES: &[u8] = b"sizes";
if attr_name == "href" {
debug!("Found base href: {}", attr_value);
*base_href = match base_href.join(attr_value) {
Ok(href) => href,
_ => base_href.clone(),
};
return true;
}
}
return true;
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
// Check if we got a true back and stop the iter.
// This means we found a <base> tag and can stop processing the html.
if get_base_href(child, base_href) {
return true;
}
}
false
}
fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Vec<Icon>, url: &url::Url) {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "link" {
let mut has_rel = false;
let mut href = None;
let mut sizes = None;
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
if attr_name == "rel" && ICON_REL_REGEX.is_match(attr_value) && !ICON_REL_BLACKLIST.is_match(attr_value)
let mut base_url = url.clone();
let mut icon_tags: Vec<StartTag> = Vec::new();
for token in dom {
match token {
FaviconToken::StartTag(tag) => {
if *tag.name == TAG_LINK
&& tag.attributes.contains_key(ATTR_REL)
&& tag.attributes.contains_key(ATTR_HREF)
{
has_rel = true;
} else if attr_name == "href" {
href = Some(attr_value);
} else if attr_name == "sizes" {
sizes = Some(attr_value);
let rel_value = std::str::from_utf8(tag.attributes.get(ATTR_REL).unwrap())
.unwrap_or_default()
.to_ascii_lowercase();
if rel_value.contains("icon") && !rel_value.contains("mask-icon") {
icon_tags.push(tag);
}
} else if *tag.name == TAG_BASE && tag.attributes.contains_key(ATTR_HREF) {
let href = std::str::from_utf8(tag.attributes.get(ATTR_HREF).unwrap()).unwrap_or_default();
debug!("Found base href: {href}");
base_url = match base_url.join(href) {
Ok(inner_url) => inner_url,
_ => url.clone(),
};
}
}
if has_rel {
if let Some(inner_href) = href {
if let Ok(full_href) = url.join(inner_href).map(String::from) {
let priority = get_icon_priority(&full_href, sizes);
icons.push(Icon::new(priority, full_href));
}
FaviconToken::EndTag(tag) => {
if *tag.name == TAG_HEAD {
break;
}
}
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
get_favicons_node(child, icons, url);
for icon_tag in icon_tags {
if let Some(icon_href) = icon_tag.attributes.get(ATTR_HREF) {
if let Ok(full_href) = base_url.join(std::str::from_utf8(icon_href).unwrap_or_default()) {
let sizes = if let Some(v) = icon_tag.attributes.get(ATTR_SIZES) {
std::str::from_utf8(v).unwrap_or_default()
} else {
""
};
let priority = get_icon_priority(full_href.as_str(), sizes);
icons.push(Icon::new(priority, full_href.to_string()));
}
};
}
}
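
Every href found above is resolved through url::Url::join against the (possibly <base>-overridden) page URL; its resolution rules are easy to check:

use url::Url;

fn main() {
    let base = Url::parse("https://example.com/a/b/").unwrap();
    // Relative paths resolve under the base path...
    assert_eq!(base.join("icon.png").unwrap().as_str(), "https://example.com/a/b/icon.png");
    // ...while root-relative paths and absolute URLs replace it.
    assert_eq!(base.join("/favicon.ico").unwrap().as_str(), "https://example.com/favicon.ico");
    assert_eq!(base.join("https://cdn.example.net/i.png").unwrap().as_str(), "https://cdn.example.net/i.png");
}
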
@@ -518,16 +480,16 @@ struct IconUrlResult {
///
/// # Example
/// ```
/// let icon_result = get_icon_url("github.com")?;
/// let icon_result = get_icon_url("vaultwarden.discourse.group")?;
/// let icon_result = get_icon_url("github.com").await?;
/// let icon_result = get_icon_url("vaultwarden.discourse.group").await?;
/// ```
fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Default URL with secure and insecure schemes
let ssldomain = format!("https://{}", domain);
let httpdomain = format!("http://{}", domain);
let ssldomain = format!("https://{domain}");
let httpdomain = format!("http://{domain}");
// First check the domain as given during the request for both HTTPS and HTTP.
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)) {
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)).await {
Ok(c) => Ok(c),
Err(e) => {
let mut sub_resp = Err(e);
@@ -542,32 +504,31 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{}", base_domain);
let httpbase = format!("http://{}", base_domain);
debug!("[get_icon_url]: Trying without subdomains '{}'", base_domain);
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase));
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
// When the domain is not an IP and has fewer than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{}", domain);
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{}", www_domain);
let httpwww = format!("http://{}", www_domain);
debug!("[get_icon_url]: Trying with www. prefix '{}'", www_domain);
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww));
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
}
}
sub_resp
}
};
// Create the iconlist
let mut iconlist: Vec<Icon> = Vec::new();
let mut referer = String::from("");
let mut referer = String::new();
if let Ok(content) = resp {
// Extract the URL from the response in case redirects occurred (like @ gitlab.com)
@@ -575,26 +536,23 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Set the referer to be used on the final request; some sites check this.
// Mostly used to prevent direct linking and for other security reasons.
referer = url.as_str().to_string();
referer = url.to_string();
// Add the default favicon.ico to the list with the domain the content responded from.
// Add the fallback favicon.ico and apple-touch-icon.png to the list with the domain the content responded from.
iconlist.push(Icon::new(35, String::from(url.join("/favicon.ico").unwrap())));
iconlist.push(Icon::new(40, String::from(url.join("/apple-touch-icon.png").unwrap())));
// 384KB should be more than enough for the HTML, as we only really need the HTML head section.
let mut limited_reader = content.take(384 * 1024);
let limited_reader = stream_to_bytes_limit(content, 384 * 1024).await?.to_vec();
use html5ever::tendril::TendrilSink;
let dom = html5ever::parse_document(markup5ever_rcdom::RcDom::default(), Default::default())
.from_utf8()
.read_from(&mut limited_reader)?;
let mut base_url: url::Url = url;
get_base_href(&dom.document, &mut base_url);
get_favicons_node(&dom.document, &mut iconlist, &base_url);
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default()).infallible();
get_favicons_node(dom, &mut iconlist, &url);
} else {
// Add the default favicon.ico to the list with just the given domain
iconlist.push(Icon::new(35, format!("{}/favicon.ico", ssldomain)));
iconlist.push(Icon::new(35, format!("{}/favicon.ico", httpdomain)));
iconlist.push(Icon::new(35, format!("{ssldomain}/favicon.ico")));
iconlist.push(Icon::new(40, format!("{ssldomain}/apple-touch-icon.png")));
iconlist.push(Icon::new(35, format!("{httpdomain}/favicon.ico")));
iconlist.push(Icon::new(40, format!("{httpdomain}/apple-touch-icon.png")));
}
// Sort the iconlist by priority
@@ -607,13 +565,15 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
})
}
fn get_page(url: &str) -> Result<Response, Error> {
get_page_with_referer(url, "")
async fn get_page(url: &str) -> Result<Response, Error> {
get_page_with_referer(url, "").await
}
fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()) {
warn!("Favicon '{}' resolves to a blacklisted domain or IP!", url);
async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
match check_domain_blacklist_reason(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
Some(DomainBlacklistReason::Regex) => warn!("Favicon '{}' is from a blacklisted domain!", url),
Some(DomainBlacklistReason::IP) => warn!("Favicon '{}' is hosted on a non-global IP!", url),
None => (),
}
let mut client = CLIENT.get(url);
@@ -621,9 +581,9 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
client = client.header("Referer", referer)
}
match client.send() {
match client.send().await {
Ok(c) => c.error_for_status().map_err(Into::into),
Err(e) => err_silent!(format!("{}", e)),
Err(e) => err_silent!(format!("{e}")),
}
}
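For reference, a hedged sketch of the blacklist-reason type implied by the match arms above; the variant meanings are reconstructed from the log messages, and the real definition lives elsewhere in this changeset.

enum DomainBlacklistReason {
    Regex, // the domain matched the configured icon blacklist regex
    IP,    // the domain resolved to a non-global (private/local) IP address
}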
@@ -639,7 +599,7 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
/// priority1 = get_icon_priority("http://example.com/path/to/a/favicon.png", "32x32");
/// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", "");
/// ```
fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
fn get_icon_priority(href: &str, sizes: &str) -> u8 {
// Check if there is a dimension set
let (width, height) = parse_sizes(sizes);
@@ -687,11 +647,11 @@ fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
/// let (width, height) = parse_sizes("x128x128"); // (128, 128)
/// let (width, height) = parse_sizes("32"); // (0, 0)
/// ```
fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
fn parse_sizes(sizes: &str) -> (u16, u16) {
let mut width: u16 = 0;
let mut height: u16 = 0;
if let Some(sizes) = sizes {
if !sizes.is_empty() {
match ICON_SIZE_REGEX.captures(sizes.trim()) {
None => {}
Some(dimensions) => {
@@ -706,14 +666,16 @@ fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
(width, height)
}
fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if is_domain_blacklisted(domain) {
err_silent!("Domain is blacklisted", domain)
async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
match check_domain_blacklist_reason(domain).await {
Some(DomainBlacklistReason::Regex) => err_silent!("Domain is blacklisted", domain),
Some(DomainBlacklistReason::IP) => err_silent!("Host resolves to a non-global IP", domain),
None => (),
}
let icon_result = get_icon_url(domain)?;
let icon_result = get_icon_url(domain).await?;
let mut buffer = Vec::new();
let mut buffer = Bytes::new();
let mut icon_type: Option<&str> = None;
use data_url::DataUrl;
@@ -722,8 +684,12 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if icon.href.starts_with("data:image") {
let datauri = DataUrl::process(&icon.href).unwrap();
// Check if we are able to decode the data uri
match datauri.decode_to_vec() {
Ok((body, _fragment)) => {
let mut body = BytesMut::new();
match datauri.decode::<_, ()>(|bytes| {
body.extend_from_slice(bytes);
Ok(())
}) {
Ok(_) => {
// Also check if the size is at least 67 bytes, which seems to be the smallest PNG I could create
if body.len() >= 67 {
// Check if the icon type is allowed, else try an icon from the list.
@@ -733,16 +699,17 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
continue;
}
info!("Extracted icon from data:image uri for {}", domain);
buffer = body;
buffer = body.freeze();
break;
}
}
_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
match get_page_with_referer(&icon.href, &icon_result.referer) {
Ok(mut res) => {
res.copy_to(&mut buffer)?;
match get_page_with_referer(&icon.href, &icon_result.referer).await {
Ok(res) => {
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
@@ -765,13 +732,13 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
Ok((buffer, icon_type))
}
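As a standalone illustration of the data:image branch above, a minimal sketch using the data_url crate's streaming decode; the helper name and Option-based error handling are ours, not part of this diff.

use bytes::{Bytes, BytesMut};
use data_url::DataUrl;

// Decode an inline `data:image/...` favicon into a byte buffer, mirroring the
// loop in download_icon(); returns None on any parse or base64 error.
fn decode_data_uri(href: &str) -> Option<Bytes> {
    let datauri = DataUrl::process(href).ok()?;
    let mut body = BytesMut::new();
    datauri
        .decode::<_, ()>(|bytes| {
            body.extend_from_slice(bytes);
            Ok(())
        })
        .ok()?;
    Some(body.freeze())
}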
fn save_icon(path: &str, icon: &[u8]) {
match File::create(path) {
async fn save_icon(path: &str, icon: &[u8]) {
match File::create(path).await {
Ok(mut f) => {
f.write_all(icon).expect("Error writing icon file");
f.write_all(icon).await.expect("Error writing icon file");
}
Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => {
create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache folder");
create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder");
}
Err(e) => {
warn!("Unable to save icon: {:?}", e);
@@ -791,13 +758,30 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
}
}
/// Minimize the number of bytes to be parsed from a reqwest result.
/// This prevents very long parsing times and excessive memory usage.
async fn stream_to_bytes_limit(res: Response, max_size: usize) -> Result<Bytes, reqwest::Error> {
let mut stream = res.bytes_stream().take(max_size);
let mut buf = BytesMut::new();
let mut size = 0;
while let Some(chunk) = stream.next().await {
let chunk = &chunk?;
size += chunk.len();
buf.extend(chunk);
if size >= max_size {
break;
}
}
Ok(buf.freeze())
}
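A hedged usage sketch of the helper above, reusing the 384KB cap from get_icon_url; the wrapper function is illustrative only.

// Fetch a page and keep roughly the first 384KB of its body for parsing.
// The limit is approximate: the final chunk may push the total slightly over.
async fn fetch_limited(url: &str) -> Result<Bytes, Error> {
    let res = get_page(url).await?;
    Ok(stream_to_bytes_limit(res, 384 * 1024).await?)
}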
/// This is an implementation of the default Cookie Jar from Reqwest and reqwest_cookie_store built by pfernie.
/// The default cookie jar used by Reqwest keeps all cookies for their full Max-Age or Expires lifetime, which could be a long time.
/// That could be used for tracking; to prevent this we cap the lifespan of every cookie at two minutes.
/// A Cookie Jar is needed because some sites force a redirect with cookies to verify if a request uses cookies or not.
use cookie_store::CookieStore;
#[derive(Default)]
pub struct Jar(RwLock<CookieStore>);
pub struct Jar(std::sync::RwLock<CookieStore>);
impl reqwest::cookie::CookieStore for Jar {
fn set_cookies(&self, cookie_headers: &mut dyn Iterator<Item = &header::HeaderValue>, url: &url::Url) {
@@ -820,12 +804,10 @@ impl reqwest::cookie::CookieStore for Jar {
}
fn cookies(&self, url: &url::Url) -> Option<header::HeaderValue> {
use bytes::Bytes;
let cookie_store = self.0.read().unwrap();
let s = cookie_store
.get_request_values(url)
.map(|(name, value)| format!("{}={}", name, value))
.map(|(name, value)| format!("{name}={value}"))
.collect::<Vec<_>>()
.join("; ");
@@ -836,3 +818,158 @@ impl reqwest::cookie::CookieStore for Jar {
header::HeaderValue::from_maybe_shared(Bytes::from(s)).ok()
}
}
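A minimal sketch of how a jar like this is typically wired into reqwest via the cookie_provider builder method (requires reqwest's cookies feature); this wiring is assumed, not shown in the diff.

use std::sync::Arc;

// Route all cookie handling through the short-lived Jar defined above.
fn build_icon_client() -> reqwest::Client {
    reqwest::Client::builder()
        .cookie_provider(Arc::new(Jar::default()))
        .build()
        .expect("failed to build reqwest client")
}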
/// Custom FaviconEmitter for the html5gum parser.
/// The FaviconEmitter is an almost 1:1 copy of the DefaultEmitter with some small changes.
/// This prevents emitting tokens for comments, doctypes and strings between tags.
/// Therefore, parsing the HTML content is faster.
use std::collections::{BTreeSet, VecDeque};
#[derive(Debug)]
enum FaviconToken {
StartTag(StartTag),
EndTag(EndTag),
}
#[derive(Default, Debug)]
struct FaviconEmitter {
current_token: Option<FaviconToken>,
last_start_tag: HtmlString,
current_attribute: Option<(HtmlString, HtmlString)>,
seen_attributes: BTreeSet<HtmlString>,
emitted_tokens: VecDeque<FaviconToken>,
}
impl FaviconEmitter {
fn emit_token(&mut self, token: FaviconToken) {
self.emitted_tokens.push_front(token);
}
fn flush_current_attribute(&mut self) {
if let Some((k, v)) = self.current_attribute.take() {
match self.current_token {
Some(FaviconToken::StartTag(ref mut tag)) => {
tag.attributes.entry(k).and_modify(|_| {}).or_insert(v);
}
Some(FaviconToken::EndTag(_)) => {
self.seen_attributes.insert(k);
}
_ => {
debug_assert!(false);
}
}
}
}
}
impl Emitter for FaviconEmitter {
type Token = FaviconToken;
fn set_last_start_tag(&mut self, last_start_tag: Option<&[u8]>) {
self.last_start_tag.clear();
self.last_start_tag.extend(last_start_tag.unwrap_or_default());
}
fn pop_token(&mut self) -> Option<Self::Token> {
self.emitted_tokens.pop_back()
}
fn init_start_tag(&mut self) {
self.current_token = Some(FaviconToken::StartTag(StartTag::default()));
}
fn init_end_tag(&mut self) {
self.current_token = Some(FaviconToken::EndTag(EndTag::default()));
self.seen_attributes.clear();
}
fn emit_current_tag(&mut self) -> Option<html5gum::State> {
self.flush_current_attribute();
let mut token = self.current_token.take().unwrap();
let mut emit = false;
match token {
FaviconToken::EndTag(ref mut tag) => {
// Always clean seen attributes
self.seen_attributes.clear();
// Only trigger an emit for the </head> tag.
// This is matched, and will break the for-loop.
if *tag.name == b"head" {
emit = true;
}
}
FaviconToken::StartTag(ref mut tag) => {
// Only trigger an emit for <link> and <base> tags.
// These are the only tags we want to parse.
if *tag.name == b"link" || *tag.name == b"base" {
self.set_last_start_tag(Some(&tag.name));
emit = true;
} else {
self.set_last_start_tag(None);
}
}
}
// Only emit the tags we want to parse.
if emit {
self.emit_token(token);
}
None
}
fn push_tag_name(&mut self, s: &[u8]) {
match self.current_token {
Some(
FaviconToken::StartTag(StartTag {
ref mut name,
..
})
| FaviconToken::EndTag(EndTag {
ref mut name,
..
}),
) => {
name.extend(s);
}
_ => debug_assert!(false),
}
}
fn init_attribute(&mut self) {
self.flush_current_attribute();
self.current_attribute = Some(Default::default());
}
fn push_attribute_name(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().0.extend(s);
}
fn push_attribute_value(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().1.extend(s);
}
fn current_is_appropriate_end_tag_token(&mut self) -> bool {
match self.current_token {
Some(FaviconToken::EndTag(ref tag)) => !self.last_start_tag.is_empty() && self.last_start_tag == tag.name,
_ => false,
}
}
// We do not want or need these parts of the HTML document
// These will be skipped and ignored during the tokenization and iteration.
fn emit_current_comment(&mut self) {}
fn emit_current_doctype(&mut self) {}
fn emit_eof(&mut self) {}
fn emit_error(&mut self, _: html5gum::Error) {}
fn emit_string(&mut self, _: &[u8]) {}
fn init_comment(&mut self) {}
fn init_doctype(&mut self) {}
fn push_comment(&mut self, _: &[u8]) {}
fn push_doctype_name(&mut self, _: &[u8]) {}
fn push_doctype_public_identifier(&mut self, _: &[u8]) {}
fn push_doctype_system_identifier(&mut self, _: &[u8]) {}
fn set_doctype_public_identifier(&mut self, _: &[u8]) {}
fn set_doctype_system_identifier(&mut self, _: &[u8]) {}
fn set_force_quirks(&mut self) {}
fn set_self_closing(&mut self) {}
}
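A hedged sketch of driving the emitter above over an HTML string with html5gum, mirroring the Tokenizer call in get_icon_url; the loop body is illustrative.

// Scan an HTML snippet; only <link>/<base> start tags and the </head> end tag
// are ever emitted by FaviconEmitter, everything else is skipped.
fn scan_head(html: &str) {
    use html5gum::Tokenizer;
    for token in Tokenizer::new_with_emitter(html, FaviconEmitter::default()).infallible() {
        match token {
            FaviconToken::StartTag(tag) => {
                let _ = tag; // collect <link rel="...icon..."> and <base href=...> here
            }
            FaviconToken::EndTag(_) => break, // </head>: nothing of interest follows
        }
    }
}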


@@ -1,35 +1,39 @@
use chrono::Utc;
use num_traits::FromPrimitive;
use rocket::serde::json::Json;
use rocket::{
request::{Form, FormItems, FromForm},
form::{Form, FromForm},
Route,
};
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
api::{
core::accounts::{PreloginData, RegisterData, _prelogin, _register},
core::log_user_event,
core::two_factor::{duo, email, email::EmailTokenData, yubikey},
ApiResult, EmptyResult, JsonResult,
ApiResult, EmptyResult, JsonResult, JsonUpcase,
},
auth::ClientIp,
auth::{ClientHeaders, ClientIp},
db::{models::*, DbConn},
error::MapResult,
mail, util, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![login]
routes![login, prelogin, identity_register]
}
#[post("/connect/token", data = "<data>")]
fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: ConnectData = data.into_inner();
match data.grant_type.as_ref() {
let mut user_uuid: Option<String> = None;
let login_result = match data.grant_type.as_ref() {
"refresh_token" => {
_check_is_some(&data.refresh_token, "refresh_token cannot be blank")?;
_refresh_login(data, conn)
_refresh_login(data, &mut conn).await
}
"password" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
@@ -41,36 +45,69 @@ fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, conn, &ip)
_password_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.client_secret, "client_secret cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_api_key_login(data, conn, &ip)
_check_is_some(&data.device_identifier, "device_identifier cannot be blank")?;
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_api_key_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
}
t => err!("Invalid type", t),
};
if let Some(user_uuid) = user_uuid {
match &login_result {
Ok(_) => {
log_user_event(
EventType::UserLoggedIn as i32,
&user_uuid,
client_header.device_type,
&client_header.ip.ip,
&mut conn,
)
.await;
}
Err(e) => {
if let Some(ev) = e.get_event() {
log_user_event(
ev.event as i32,
&user_uuid,
client_header.device_type,
&client_header.ip.ip,
&mut conn,
)
.await
}
}
}
}
login_result
}
fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
// Extract token
let token = data.refresh_token.unwrap();
// Get device by refresh token
let mut device = Device::find_by_refresh_token(&token, &conn).map_res("Invalid refresh token")?;
let mut device = Device::find_by_refresh_token(&token, conn).await.map_res("Invalid refresh token")?;
let scope = "api offline_access";
let scope_vec = vec!["api".into(), "offline_access".into()];
// Common
let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
let user = User::find_by_uuid(&device.user_uuid, conn).await.unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn)?;
device.save(conn).await?;
Ok(Json(json!({
let result = json!({
"access_token": access_token,
"expires_in": expires_in,
"token_type": "Bearer",
@@ -80,13 +117,22 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": scope,
"unofficialServer": true,
})))
});
Ok(Json(result))
}
fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
async fn _password_login(
data: ConnectData,
user_uuid: &mut Option<String>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api offline_access" {
@@ -98,21 +144,46 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
crate::ratelimit::check_limit_login(&ip.ip)?;
// Get the user
let username = data.username.as_ref().unwrap();
let user = match User::find_by_mail(username, &conn) {
let username = data.username.as_ref().unwrap().trim();
let mut user = match User::find_by_mail(username, conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
};
// Set the user_uuid here so it can be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Check password
let password = data.password.as_ref().unwrap();
if !user.check_valid_password(password) {
err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
// Change the KDF Iterations
if user.password_iterations != CONFIG.password_iterations() {
user.password_iterations = CONFIG.password_iterations();
user.set_password(password, None, false, None);
if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e);
}
}
// Check if the user is disabled
if !user.enabled {
err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let now = Utc::now().naive_utc();
@@ -126,42 +197,52 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
if resend_limit == 0 || user.login_verify_count < resend_limit {
// We want to send another email verification if we require signups to verify
// their email address, and we haven't sent them a reminder in a while...
let mut user = user;
user.last_verifying_at = Some(now);
user.login_verify_count += 1;
if let Err(e) = user.save(&conn) {
if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e);
}
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid) {
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid).await {
error!("Error auto-sending email verification email: {:#?}", e);
}
}
}
// We still want the login to fail until they have actually verified their email address
err!("Please verify your email before trying again.", format!("IP: {}. Username: {}.", ip.ip, username))
err!(
"Please verify your email before trying again.",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let (mut device, new_device) = get_device(&data, &conn, &user);
let (mut device, new_device) = get_device(&data, conn, &user).await;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, &conn)?;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, conn).await?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name) {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
err!("Could not send login notification email. Please contact your administrator.")
err!(
"Could not send login notification email. Please contact your administrator.",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
}
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn)?;
device.save(conn).await?;
let mut result = json!({
"access_token": access_token,
@@ -174,6 +255,8 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false,// TODO: Same as above
"scope": scope,
"unofficialServer": true,
@@ -187,7 +270,12 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
Ok(Json(result))
}
fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
async fn _api_key_login(
data: ConnectData,
user_uuid: &mut Option<String>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api" {
@@ -200,49 +288,69 @@ fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let user_uuid = match client_id.strip_prefix("user.") {
let client_user_uuid = match client_id.strip_prefix("user.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
};
let user = match User::find_by_uuid(user_uuid, &conn) {
let user = match User::find_by_uuid(client_user_uuid, conn).await {
Some(user) => user,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
};
// Set the user_uuid here so it can be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Check if the user is disabled
if !user.enabled {
err!("This user has been disabled (API key login)", format!("IP: {}. Username: {}.", ip.ip, user.email))
err!(
"This user has been disabled (API key login)",
format!("IP: {}. Username: {}.", ip.ip, user.email),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
// Check API key. Note that API key logins bypass 2FA.
let client_secret = data.client_secret.as_ref().unwrap();
if !user.check_valid_api_key(client_secret) {
err!("Incorrect client_secret", format!("IP: {}. Username: {}.", ip.ip, user.email))
err!(
"Incorrect client_secret",
format!("IP: {}. Username: {}.", ip.ip, user.email),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let (mut device, new_device) = get_device(&data, &conn, &user);
let (mut device, new_device) = get_device(&data, conn, &user).await;
if CONFIG.mail_enabled() && new_device {
let now = Utc::now().naive_utc();
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name) {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
err!("Could not send login notification email. Please contact your administrator.")
err!(
"Could not send login notification email. Please contact your administrator.",
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
}
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn)?;
device.save(conn).await?;
info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
// Note: No refresh_token is returned. The CLI just repeats the
// client_credentials login flow when the existing token expires.
Ok(Json(json!({
let result = json!({
"access_token": access_token,
"expires_in": expires_in,
"token_type": "Bearer",
@@ -251,32 +359,28 @@ fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: Same as above
"scope": scope,
"unofficialServer": true,
})))
});
Ok(Json(result))
}
/// Retrieves an existing device or creates a new device from ConnectData and the User
fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool) {
async fn get_device(data: &ConnectData, conn: &mut DbConn, user: &User) -> (Device, bool) {
// On iOS, device_type sends "iOS", on others it sends a number
let device_type = util::try_parse_string(data.device_type.as_ref()).unwrap_or(0);
// When unknown or unable to parse, return 14, which is 'Unknown Browser'
let device_type = util::try_parse_string(data.device_type.as_ref()).unwrap_or(14);
let device_id = data.device_identifier.clone().expect("No device id provided");
let device_name = data.device_name.clone().expect("No device name provided");
let mut new_device = false;
// Find device or create new
let device = match Device::find_by_uuid(&device_id, conn) {
Some(device) => {
// Check if owned device, and recreate if not
if device.user_uuid != user.uuid {
info!("Device exists but is owned by another user. The old device will be discarded");
new_device = true;
Device::new(device_id, user.uuid.clone(), device_name, device_type)
} else {
device
}
}
let device = match Device::find_by_uuid_and_user(&device_id, &user.uuid, conn).await {
Some(device) => device,
None => {
new_device = true;
Device::new(device_id, user.uuid.clone(), device_name, device_type)
@@ -286,28 +390,28 @@ fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool)
(device, new_device)
}
fn twofactor_auth(
async fn twofactor_auth(
user_uuid: &str,
data: &ConnectData,
device: &mut Device,
ip: &ClientIp,
conn: &DbConn,
conn: &mut DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(user_uuid, conn);
let twofactors = TwoFactor::find_by_user(user_uuid, conn).await;
// No twofactor token if twofactor is disabled
if twofactors.is_empty() {
return Ok(None);
}
TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn)?;
TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn).await?;
let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one
let twofactor_code = match data.two_factor_token {
Some(ref code) => code,
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA token not provided"),
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?, "2FA token not provided"),
};
let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);
@@ -320,16 +424,17 @@ fn twofactor_auth(
match TwoFactorType::from_i32(selected_id) {
Some(TwoFactorType::Authenticator) => {
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn).await?
}
Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::Webauthn) => _tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::Webauthn) => {
_tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn).await?
}
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
Some(TwoFactorType::Duo) => {
_tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?
_tf::duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
}
Some(TwoFactorType::Email) => {
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn)?
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn).await?
}
Some(TwoFactorType::Remember) => {
@@ -338,14 +443,22 @@ fn twofactor_auth(
remember = 1; // Make sure we also return the token here, otherwise it will only remember the first time
}
_ => {
err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA Remember token not provided")
err_json!(
_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?,
"2FA Remember token not provided"
)
}
}
}
_ => err!("Invalid two factor provider"),
_ => err!(
"Invalid two factor provider",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
}
TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn)?;
TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn).await?;
if !CONFIG.disable_2fa_remember() && remember == 1 {
Ok(Some(device.refresh_twofactor_remember()))
@@ -359,7 +472,7 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
tf.map(|t| t.data).map_res("Two factor doesn't exist")
}
fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> ApiResult<Value> {
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbConn) -> ApiResult<Value> {
use crate::api::core::two_factor;
let mut result = json!({
@@ -375,38 +488,18 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
match TwoFactorType::from_i32(*provider) {
Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
Some(TwoFactorType::U2f) if CONFIG.domain_set() => {
let request = two_factor::u2f::generate_u2f_login(user_uuid, conn)?;
let mut challenge_list = Vec::new();
for key in request.registered_keys {
challenge_list.push(json!({
"appId": request.app_id,
"challenge": request.challenge,
"version": key.version,
"keyHandle": key.key_handle,
}));
}
let challenge_list_str = serde_json::to_string(&challenge_list).unwrap();
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Challenges": challenge_list_str,
});
}
Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn)?;
let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = request.0;
}
Some(TwoFactorType::Duo) => {
let email = match User::find_by_uuid(user_uuid, conn) {
let email = match User::find_by_uuid(user_uuid, conn).await {
Some(u) => u.email,
None => err!("User does not exist"),
};
let (signature, host) = duo::generate_duo_signature(&email, conn)?;
let (signature, host) = duo::generate_duo_signature(&email, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Host": host,
@@ -415,7 +508,7 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
}
Some(tf_type @ TwoFactorType::YubiKey) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No YubiKey devices registered"),
};
@@ -430,14 +523,14 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Some(tf_type @ TwoFactorType::Email) => {
use crate::api::core::two_factor as _tf;
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No twofactor email registered"),
};
// Send email immediately if email is the only 2FA option
if providers.len() == 1 {
_tf::email::send_token(user_uuid, conn)?
_tf::email::send_token(user_uuid, conn).await?
}
let email_data = EmailTokenData::from_json(&twofactor.data)?;
@@ -453,68 +546,70 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Ok(result)
}
#[post("/accounts/prelogin", data = "<data>")]
async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
#[post("/accounts/register", data = "<data>")]
async fn identity_register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
}
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default)]
#[derive(Debug, Clone, Default, FromForm)]
#[allow(non_snake_case)]
struct ConnectData {
// refresh_token, password, client_credentials (API key)
grant_type: String,
#[field(name = uncased("grant_type"))]
#[field(name = uncased("granttype"))]
grant_type: String, // refresh_token, password, client_credentials (API key)
// Needed for grant_type="refresh_token"
#[field(name = uncased("refresh_token"))]
#[field(name = uncased("refreshtoken"))]
refresh_token: Option<String>,
// Needed for grant_type = "password" | "client_credentials"
client_id: Option<String>, // web, cli, desktop, browser, mobile
client_secret: Option<String>, // API key login (cli only)
#[field(name = uncased("client_id"))]
#[field(name = uncased("clientid"))]
client_id: Option<String>, // web, cli, desktop, browser, mobile
#[field(name = uncased("client_secret"))]
#[field(name = uncased("clientsecret"))]
client_secret: Option<String>,
#[field(name = uncased("password"))]
password: Option<String>,
#[field(name = uncased("scope"))]
scope: Option<String>,
#[field(name = uncased("username"))]
username: Option<String>,
#[field(name = uncased("device_identifier"))]
#[field(name = uncased("deviceidentifier"))]
device_identifier: Option<String>,
#[field(name = uncased("device_name"))]
#[field(name = uncased("devicename"))]
device_name: Option<String>,
#[field(name = uncased("device_type"))]
#[field(name = uncased("devicetype"))]
device_type: Option<String>,
device_push_token: Option<String>, // Unused; mobile device push not yet supported.
#[allow(unused)]
#[field(name = uncased("device_push_token"))]
#[field(name = uncased("devicepushtoken"))]
_device_push_token: Option<String>, // Unused; mobile device push not yet supported.
// Needed for two-factor auth
#[field(name = uncased("two_factor_provider"))]
#[field(name = uncased("twofactorprovider"))]
two_factor_provider: Option<i32>,
#[field(name = uncased("two_factor_token"))]
#[field(name = uncased("twofactortoken"))]
two_factor_token: Option<String>,
#[field(name = uncased("two_factor_remember"))]
#[field(name = uncased("twofactorremember"))]
two_factor_remember: Option<i32>,
}
impl<'f> FromForm<'f> for ConnectData {
type Error = String;
fn from_form(items: &mut FormItems<'f>, _strict: bool) -> Result<Self, Self::Error> {
let mut form = Self::default();
for item in items {
let (key, value) = item.key_value_decoded();
let mut normalized_key = key.to_lowercase();
normalized_key.retain(|c| c != '_'); // Remove '_'
match normalized_key.as_ref() {
"granttype" => form.grant_type = value,
"refreshtoken" => form.refresh_token = Some(value),
"clientid" => form.client_id = Some(value),
"clientsecret" => form.client_secret = Some(value),
"password" => form.password = Some(value),
"scope" => form.scope = Some(value),
"username" => form.username = Some(value),
"deviceidentifier" => form.device_identifier = Some(value),
"devicename" => form.device_name = Some(value),
"devicetype" => form.device_type = Some(value),
"devicepushtoken" => form.device_push_token = Some(value),
"twofactorprovider" => form.two_factor_provider = value.parse().ok(),
"twofactortoken" => form.two_factor_token = Some(value),
"twofactorremember" => form.two_factor_remember = value.parse().ok(),
key => warn!("Detected unexpected parameter during login: {}", key),
}
}
Ok(form)
}
}
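For illustration: thanks to the dual uncased names above, both of these (hypothetical) form bodies now deserialize into the same ConnectData without the old manual FromForm impl:

grant_type=password&client_id=web&device_type=10&username=user@example.com
grantType=password&clientId=web&deviceType=10&username=user@example.com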
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {
if value.is_none() {
err!(msg)
