Compare commits

...

370 Commits

Author SHA1 Message Date
Daniel García
66bff73ebf Merge pull request #3730 from BlackDex/update-admin-interface
Update admin interface
2023-08-31 21:31:10 +02:00
BlackDex
83d5432cbf Update admin interface
- Updated the admin interface dependencies.
- Replaced bootstrap-native with bootstrap
- Added auto theme with an option to switch to dark/light
- Some small color changes
- Added a dev-only function to always load static files from disk
2023-08-31 21:14:53 +02:00
Daniel García
f579a4154c Merge pull request #3806 from BlackDex/fix-3776
Allow Authorization header for Web Sockets
2023-08-31 20:46:07 +02:00
Daniel García
f5a19c5f8b Merge pull request #3797 from stefan0xC/add-plans-all-endpoint
add new secretsmanager plan for web-v2023.8.x
2023-08-31 20:37:04 +02:00
BlackDex
aa9bc1f785 Allow Authorization header for Web Sockets
Some third-party clients might use the `Authorization` header instead
of a query param. We didn't support this, since none of the official
clients seem to use it, but Bitwarden does check both ways.

This PR adds an extra, optional check for this header.

Fixes #3776
2023-08-31 12:35:20 +02:00
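
A minimal sketch of the dual token lookup (helper and parameter names are illustrative, not Vaultwarden's actual code):

```rust
/// Sketch only: prefer `Authorization: Bearer <token>`, then fall back to
/// the `access_token` query parameter used by the official clients.
fn extract_ws_token(auth_header: Option<&str>, query_token: Option<&str>) -> Option<String> {
    if let Some(header) = auth_header {
        // Third-party clients may send the token in this header.
        if let Some(token) = header.strip_prefix("Bearer ") {
            return Some(token.to_string());
        }
    }
    // Official clients pass the token as a query parameter instead.
    query_token.map(str::to_string)
}
```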
Stefan Melmuk
f162e85e44 add UserDecryptionOptions to login response (#3813)
needed for web-v2023.8.2+ compatibility due to the inclusion of the new
trusted device encryption feature. without this change, the web vault
will assume that you don't have a master password set and force you to
set one.
2023-08-31 11:02:36 +02:00
Stefan Melmuk
33ef70c192 add minimal secretsmanager plan for web-v2023.8.x
in web-v2023.8.x the getPlans() call was changed from `/plans/` to `/plans/all`,
and the create-new-organization form also requires a bitwardenProduct to
differentiate between plans for the PasswordManager and the SecretsManager
2023-08-24 22:39:16 +02:00
Mathijs van Veluw
3d2df6ce11 Merge pull request #3751 from BlackDex/optimize-icon-fetching
Optimized Favicon downloading
2023-08-13 19:31:43 +02:00
BlackDex
6cdcb3b297 Optimized Favicon downloading
Some optimizations regarding the downloading of favicons.

I also encountered some issues accessing sites where the
connection was dropped or closed early. This seems to be a reqwest/hyper
issue, https://github.com/hyperium/hyper/issues/2136. This is now also
fixed.

General:

- Decreased struct size
- Decreased memory allocations
- Optimized the tokenizer a bit more to only emit tags when all attributes are present and valid.

reqwest/hyper connection issue:
The following changes helped solve the connection issues to some sites.
The end result is that some icons can now always be downloaded instead of only sometimes.

- Enabled some extra reqwest features, `deflate` and `native-tls-alpn`
  (which do not bring in any extra crates, since other crates already enabled them, but they were not active for Vaultwarden itself)
- Configured reqwest to have a max amount of idle pool connections per host
- Configured reqwest to time out idle connections after 10 seconds
2023-08-13 19:13:00 +02:00
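
For reference, the pool settings described above map onto `reqwest`'s builder like this (the values are a sketch; the exact numbers used by Vaultwarden may differ):

```rust
use std::time::Duration;

// Sketch of the connection-pool tuning for the favicon client.
fn build_icon_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        // Cap the number of idle pool connections kept per host.
        .pool_max_idle_per_host(5)
        // Time out idle connections after 10 seconds, which helped with
        // servers that close pooled connections early.
        .pool_idle_timeout(Duration::from_secs(10))
        .build()
}
```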
Mathijs van Veluw
d1af468700 Merge pull request #3769 from GeekCornerGH/feature/bump-web-vault-v2023.7.1
chore: Bump web vault to v2023.7.1 and bump Rust
2023-08-13 19:10:18 +02:00
GeekCornerGH
ae1c53f4e5 build (deps): Bump Rust version and sync lockfile 2023-08-13 18:52:23 +02:00
GeekCorner
bc57c4b193 feat (web vault): Bump web vault to v2023.7.1 2023-08-13 18:18:00 +02:00
Mathijs van Veluw
61ae4c9cf5 Merge pull request #3592 from quexten/feature/login-with-device
Implement "login with device"
2023-08-13 18:15:09 +02:00
Bernd Schoolmann
8d7b3db33d Implement login-with-device 2023-08-13 17:54:18 +02:00
Daniel García
e9ec3741ae Merge pull request #3573 from BlackDex/update-base-images-and-versions
Update images to Bookworm and PQ15 and Rust v1.71
2023-08-12 23:55:14 +02:00
Daniel García
dacd50f3f1 Merge pull request #3740 from BlackDex/fix-ldap-import-org-status
Fix UserOrg status during LDAP Import
2023-08-12 22:19:20 +02:00
Daniel García
9412112639 Merge pull request #3734 from BlackDex/fix-env-template
Fix .env.template file
2023-08-12 22:18:33 +02:00
BlackDex
aaeae16983 Update images to Bookworm and PQ15
This PR updates the base images to use Debian Bookworm as base image. Also the MUSL/Alpine builds now use OpenSSLv3 and PostgreSQL v15.

The GHA Workflows are updated to use Ubuntu 22.04 to better match the versions of Debian Bookworm.

Also:
- Enabled sparse crate registry
- Updated workflow actions
- Updated Rust to v1.71.0
- The rust-musl images now use musl v1.2.3 for the 32-bit archs if the Rust version is v1.71.0 or higher.
   The 64-bit archs already used musl v1.2.3.
- Updated crates.

Improves / Closes #3434
2023-08-12 12:29:33 +02:00
BlackDex
d892880dd2 Fix UserOrg status during LDAP Import
When a user did not have an account yet and SMTP was disabled, the
UserOrg status would still be set to Accepted, even though that made it
possible for the Org Admins to verify the user.
This would fail, since the user hadn't actually created their account,
and therefore no PublicKey existed.

This PR fixes this behaviour by checking if the password is empty and, if
so, putting the user in an `Invited` state instead of `Accepted`.

Fixes #3737
2023-07-31 20:40:48 +02:00
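
The core of the check can be sketched with simplified types (the real code operates on Vaultwarden's `User` and `UserOrganization` models):

```rust
// Simplified sketch of the import status decision.
enum UserOrgStatus {
    Invited,
    Accepted,
}

// An empty password hash means the user never completed registration,
// so no PublicKey exists yet and the membership must stay Invited.
fn import_status(password_hash: &str) -> UserOrgStatus {
    if password_hash.is_empty() {
        UserOrgStatus::Invited
    } else {
        UserOrgStatus::Accepted
    }
}
```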
BlackDex
4395e8e888 Fix .env.template file
There was one item missing and one item wrongly named.
This has been fixed, including a spellcheck.
2023-07-29 13:20:57 +02:00
Daniel García
3dbfc484a5 Merge pull request #3704 from BlackDex/remove-debug-code
Remove debug code during attachment download
2023-07-17 18:22:56 +02:00
BlackDex
4ec2507073 Remove debug code during attachment download
There was some debug code left in the attachment download path.
This produced extra logs that are neither needed nor wanted.
2023-07-17 15:36:54 +02:00
Daniel García
ab65d7989b Merge pull request #3690 from BlackDex/fix-issue-3685
Fix some external_id issues
2023-07-14 20:43:51 +02:00
Daniel García
8707728cdb Merge pull request #3686 from GeekCornerGH/feat/add-forwardemail-support
feat: Add support for forwardemail
2023-07-14 20:43:32 +02:00
BlackDex
631d022e17 Fix some external_id issues
- Do not update `externalId` on group updates
   Groups are only updated via the web-vault currently, and those do not
   send the `externalId` value, and thus we need to prevent updating it.
 - Refactored some other ExternalId functions
 - Prevent empty `externalId` on `Collections`
 - Return `externalId` for users

Fixes #3685
2023-07-12 22:04:18 +02:00
GeekCorner
211f4492fa feat: Add support for forwardemail 2023-07-12 10:50:41 +02:00
Daniel García
61f9081827 Merge pull request #3678 from BlackDex/fix-org-api-creation-postgres
Fix Org API Key generation on PostgreSQL
2023-07-10 17:59:53 +02:00
BlackDex
a8e5384c4a Fix Org API Key generation on PostgreSQL
When using PostgreSQL, creating or rotating the Org API Key failed because of
a query mismatch. This PR fixes that.

Fixes https://github.com/dani-garcia/vaultwarden/discussions/3671#discussioncomment-6400394
2023-07-10 15:29:06 +02:00
Mathijs van Veluw
1c7338c7c4 Merge pull request #3659 from BlackDex/fix-org-creation
Fix org creation regression
2023-07-06 10:39:59 +02:00
BlackDex
08f37b9935 Fix org creation regression
A previous PR added a field which isn't present on the initial creation of
an org. This PR fixes that.
2023-07-06 10:14:04 +02:00
Daniel García
4826ddca4c Merge pull request #3651 from tessus/fix/branch-on-HEAD
fix version when compiled at a specific commit
2023-07-05 18:45:08 +02:00
Helmut K. C. Tessarek
2b32b6f78c fix version when compiled at a specific commit
When a specific commit is checked out from the main branch, the vaultwarden
version is reported as `vaultwarden x.y.z-githash (HEAD)`.
This is a problem, because the admin interface reports this as a version from
a branch called HEAD, while in reality the commit was from the main branch.
2023-07-04 18:08:52 -04:00
Daniel García
a6cfdddfd8 Merge pull request #3649 from BlackDex/update-crates
Update crates and small clippy fix
2023-07-04 20:56:05 +02:00
Daniel García
814ce9a6ac Merge pull request #3632 from sirux88/fix-reset-password-check-issue
fix missing password check during manual reset password enrollment
2023-07-04 20:55:34 +02:00
Daniel García
1bee46f64b Merge pull request #3623 from fashberg/main
Added external_id for Collections
2023-07-04 20:54:36 +02:00
Daniel García
556d945396 Merge pull request #3620 from DenuxPlays/main
Updated docker run command
2023-07-04 20:54:05 +02:00
Daniel García
664b480c71 Merge pull request #3609 from farodin91/add-user-to-collection-during-creation
add user to collection during creation
2023-07-04 20:53:46 +02:00
Jan Jansen
84e901b7d2 add user to collection during creation
Signed-off-by: Jan Jansen <jan.jansen@gdata.de>
2023-07-04 20:27:37 +02:00
Folke Ashberg
839b2bc950 fix format error 2023-07-04 20:26:03 +02:00
Folke Ashberg
6050c8dac5 Added external_id for Collections 2023-07-04 20:26:03 +02:00
BlackDex
0a6b797e6e Update crates and small clippy fix
- Update all crates
- Remove async which is reported by clippy in v1.72.0
2023-07-04 20:12:50 +02:00
sirux88
fb6f441a4f fixed unnecessary variable usage 2023-07-04 18:57:49 +02:00
sirux88
9876aedd67 added password check for manual reset
password enrollment endpoint
2023-07-04 18:57:49 +02:00
Daniel García
19e671ff25 Fix dataurl parse panic when icon is malformed 2023-07-03 20:20:26 +02:00
Daniel García
60964c07e6 Add some extra access checks for attachments and groups 2023-07-03 19:58:14 +02:00
Timon Klinkert
e4894524e4 updated docker run command 2023-06-26 00:31:40 +02:00
Daniel García
e7f083dee9 Merge pull request #3593 from GeekCornerGH/feature/store-passkeys-in-the-vault
feat: Support for storing passkeys in the vault
2023-06-22 19:06:55 +02:00
GeekCornerGH
1074315a87 feat: Support for storing passkeys in the vault 2023-06-22 18:48:13 +02:00
Daniel García
c56bf38079 Merge pull request #3608 from BlackDex/fix-issue-3607
Fix send access regression
2023-06-22 17:58:15 +02:00
BlackDex
3c0cac623d Fix send access regression
In a previous commit push notifications for mobile were added, but this
introduced a header guard which caused issues with anonymous endpoints.

This PR fixes this by using a UUID consisting of only zeros.

Fixes #3607
2023-06-22 16:40:26 +02:00
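
With the `uuid` crate, such an all-zero placeholder is the nil UUID; a hedged sketch:

```rust
use uuid::Uuid;

// Anonymous Send access has no authenticated device, so a placeholder
// identifier of all zeros satisfies the header guard without implying
// any identity.
fn anonymous_device_id() -> String {
    Uuid::nil().to_string() // "00000000-0000-0000-0000-000000000000"
}
```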
Mathijs van Veluw
550794b127 Merge pull request #3606 from farodin91/add-group-import-on-invite
Add group import on invite
2023-06-22 11:57:49 +02:00
Jan Jansen
e818a0bf37 Add group import on invite
Fixes #3599

Signed-off-by: Jan Jansen <jan.jansen@gdata.de>
2023-06-22 11:10:43 +02:00
Daniel García
2aedff50e8 Merge pull request #3603 from BlackDex/update-crates-and-workflows
Update crates and workflow
2023-06-21 23:29:15 +02:00
BlackDex
84a23008f4 Update crates and workflow
- Updated all the crates
- Updated workflow actions
- Set cargo registry to sparse
2023-06-21 22:01:05 +02:00
Mathijs van Veluw
44e9e1a58e Merge pull request #3578 from quexten/fix/mobile-push-to-empty-uuid
Add mobile push device filter to non-null push uuid
2023-06-18 17:40:10 +02:00
Bernd Schoolmann
e4606431d1 Fix mobile push blocking requests and spamming push server 2023-06-16 23:34:16 +02:00
Mathijs van Veluw
5b7d7390b0 Merge pull request #3568 from BlackDex/org-api-key-refresh
Implement the Organization API Key support for the new Directory Connector v2022
2023-06-13 09:05:22 +02:00
BlackDex
a05187c0ff Some code changes and optimizations
Some cleanups and optimizations done on the code generated by @Kurnihil
2023-06-13 08:51:07 +02:00
BlackDex
8e34495e73 Merge and modify PR from @Kurnihil
Merging a PR from @Kurnihil into the already rebased branch.
Made some small changes to make it work with the newer code.

Some fine-tuning is probably still needed.

Co-authored-by: Daniele Andrei <daniele.andrei@geo-satis.com>
Co-authored-by: Kurnihil
2023-06-13 08:51:07 +02:00
BlackDex
4219249e11 Add support for Organization token
This is a WIP for adding organization token login support.
It has basic token login and verification support, but that's about it.

This branch is a refresh of the previous version, and will contain code
from a PR based upon my previous branch.
2023-06-13 08:48:18 +02:00
Mathijs van Veluw
bd883de70e Merge pull request #3304 from GeekCornerGH/feature/push-notifications
feat: Implement Push Notifications sync
2023-06-12 23:45:03 +02:00
GeekCornerGH
2d66292350 feat: Push Notifications
Co-authored-by: samb-devel <125741162+samb-devel@users.noreply.github.com>
Co-authored-by: Zoruk <Zoruk@users.noreply.github.com>
2023-06-11 13:28:18 +02:00
Mathijs van Veluw
adf67a8ee8 Merge pull request #3563 from tessus/update/rust-and-crates
Update Rust and Crates
2023-06-04 22:39:40 +02:00
Helmut K. C. Tessarek
f40f5b8399 update web-vault to v2023.5.0 2023-06-04 16:15:10 -04:00
Helmut K. C. Tessarek
2d6ca0ea95 Update a few more crates 2023-06-04 16:14:51 -04:00
Helmut K. C. Tessarek
06a10e2c5a Update Rust and Crates 2023-06-03 17:04:45 -04:00
Mathijs van Veluw
445680fb84 Merge pull request #3546 from BlackDex/GH-3534
Fix collection change ws notifications
2023-05-26 18:03:45 +02:00
BlackDex
83376544d8 Fix collection change ws notifications
When changing a collection, no WebSocket notification was sent.
This PR adds this feature and resolves #3534
2023-05-26 17:42:00 +02:00
Mathijs van Veluw
04a17dcdef Merge pull request #3548 from BlackDex/update-crates
Update crates and GH Workflow
2023-05-26 17:41:03 +02:00
BlackDex
0851561392 Update crates and GH Workflow
- Updated crates
- Updated GHA where needed
2023-05-26 17:26:09 +02:00
Mathijs van Veluw
95cd6deda6 Merge pull request #3547 from BlackDex/GH-3540
Prevent 401 on main admin page
2023-05-26 17:25:48 +02:00
BlackDex
636f16dc66 Prevent 401 on main admin page
When you are not logged in and have no cookie etc., we always returned a 401.
This was mainly to allow the login page on all the sub-pages and, after
login, redirect to the requested page; for those pages a 401 is a
valid response, since you do not have access.

But the main `/admin` page should just respond with a `200` and
show the login page.

This PR fixes this flow and response. It should prevent tools like
Fail2ban from being triggered by merely accessing the login page.

Resolves #3540
2023-05-25 23:40:36 +02:00
Mathijs van Veluw
9e5b049dca Merge pull request #3532 from jjlin/global-domains
Sync global_domains.json (Pinterest)
2023-05-17 21:20:48 +02:00
Jeremy Lin
23aa9088f3 Sync global_domains.json to bitwarden/server@8dda73a (Pinterest) 2023-05-17 12:04:31 -07:00
Mathijs van Veluw
4f0ed06b06 Merge pull request #3522 from stefan0xC/update-to-v2023.4.2
update web-vault to v2023.4.2
2023-05-12 09:48:56 +02:00
Stefan Melmuk
349c97efaf update crates 2023-05-12 09:31:29 +02:00
Stefan Melmuk
8b05a5d192 update web-vault to v2023.4.2 2023-05-12 08:05:35 +02:00
Mathijs van Veluw
83bf77d713 Merge pull request #3513 from stefan0xC/fix-empty-policy
policy data should be `null` not an empty object
2023-05-09 12:00:10 +02:00
Stefan Melmuk
4d5c047ddc policy data should be null not an empty object 2023-05-09 11:14:46 +02:00
Daniel García
147c9c7b50 Merge pull request #3505 from gitouche-sur-osm/dockerfile-fqin
Use fully qualified image names in Dockerfile
2023-05-08 21:00:30 +02:00
Daniel García
6515a2fcad Merge pull request #3502 from BlackDex/fix-trailing-slash
Use Rocket `v0.5` branch to fix endpoints
2023-05-08 21:00:19 +02:00
BlackDex
4a2ed553df Use Rocket v0.5 branch to fix endpoints
There now is a `v0.5` branch, which will become the final release version
when the time comes. Switched to this instead of the `master` branch,
which also contains other fixes and enhancements (for `v0.6`).

This should solve all the endpoint issues we were having.
2023-05-06 19:46:55 +02:00
Gitouche
ba492c0602 Use fully qualified image names in Dockerfile 2023-05-03 18:31:28 +02:00
Daniel García
1ec049e2b5 Update web vault to v2023.4.0 2023-05-01 19:49:48 +02:00
Daniel García
0fb8563b13 Merge pull request #3491 from BlackDex/rocket_changes
Change `String` to `&str` for all Rocket functions and some other fixes
2023-04-30 23:53:15 +02:00
BlackDex
f906f6230a Change String to &str for all Rocket functions
While setting the latest commit hash for Rocket and updating all the
other crates, there were some messages regarding the usage of `String`
in the Rocket endpoint functions. I acted upon these messages and
changed all `String` types to `&str` and modified the code where needed.

This resulted in fewer alloc calls, and probably also a bit less memory usage.

- Updated all the crates and commit hashes
- Modified all `String` to `&str` where applicable
2023-04-30 17:18:12 +02:00
BlackDex
951ba55123 Prevent some ::_ logs from outputting 2023-04-30 17:17:43 +02:00
BlackDex
18abf226be Fix admin post endpoints 2023-04-30 17:09:42 +02:00
Daniel García
393645617e Merge pull request #3469 from BlackDex/rust-and-crate-updates
Update Rust and Crates
2023-04-24 18:54:35 +02:00
Daniel García
5bf243b675 Merge pull request #3475 from vilgotf/inline-statics
inline static rsa keys
2023-04-24 18:54:19 +02:00
BlackDex
cfba8347a3 Update Rust and Crates
- Updated Rust to v1.69.0
- Updated MSRV to v1.67.1
- Updated crates
- Updated GitHub Actions
2023-04-24 14:10:58 +02:00
Tim Vilgot Mikael Fredenberg
55c1b6e8d5 inline static rsa keys 2023-04-23 21:34:26 +02:00
Daniel García
3d7e80a7aa Merge pull request #3440 from BlackDex/switch-ws-to-streams
Small update to Rocket WebSockets
2023-04-17 20:26:03 +02:00
Daniel García
5866338de4 Merge pull request #3439 from kennymc-c/main
Fixed missing footer_text and a few inconsistencies in email templates
2023-04-17 20:25:36 +02:00
kennymc-c
271e3ae757 Changed permissions back to 644 2023-04-12 18:06:46 +02:00
BlackDex
48cc31a59f Small update to Rocket WebSockets
Switched from channels to a stream. This makes it possible to use yield, and the
code looks a bit nicer this way.

Also updated all the crates.
2023-04-12 15:59:05 +02:00
kennymc-c
6a7cee4e7e Fixed footer to footer_text 2023-04-11 22:00:10 +02:00
kennymc-c
f850dbb310 Fixed some missing footer_text partials and a few inconsistencies between plain text and html email templates 2023-04-11 21:27:38 +02:00
Daniel García
07099df41a Merge pull request #3436 from BlackDex/fix-admin-base-url
Several config and admin interface fixes
2023-04-10 21:11:44 +02:00
Daniel García
0c0a80720e Merge pull request #3404 from BlackDex/websockets-via-rocket
WebSockets via Rocket's Upgrade connection
2023-04-10 21:10:29 +02:00
BlackDex
ae437f70a3 Several config and admin interface fixes
- Fixed issue with domains starting with `admin`
- Fixed issue with DUO not being enabled globally anymore (regression)
- Renamed `Ciphers` to `Entries` in overview
- Improved `ADMIN_TOKEN` description
- Updated jquery-slim and datatables

Resolves #3382
Resolves #3415
Resolves discussion on #3288
2023-04-10 20:39:51 +02:00
BlackDex
3d11f4cd16 WebSockets via Rocket's Upgrade connection
This PR implements a (not yet fully released) new feature of Rocket which allows WebSockets/Upgrade connections.
No more need for multiple ports to be opened for Vaultwarden.
No explicit need for a reverse proxy to get WebSockets to work (although I still suggest using a reverse proxy).

- Using a git revision for Rocket, since `rocket_ws` is not yet released.
- Updated other crates as well.
- Added a connection guard to clear the WS connection from the Users list.

Fixes #685
Fixes #2917
Fixes #1424
2023-04-10 16:58:58 +02:00
Daniel García
3bd4e42fb0 Merge pull request #3427 from stefan0xC/check-if-policies-enabled
check if reset password policy is enabled
2023-04-09 19:02:27 +02:00
Stefan Melmuk
89e94b1d91 check if reset policy is enabled 2023-04-06 22:34:05 +02:00
Daniel García
0b28ab3be1 Merge pull request #3403 from BlackDex/update-dockerfile-and-rust
Revert setcap, update rust and crates
2023-04-02 15:39:36 +02:00
Daniel García
c5bcc340fa Merge pull request #3405 from BlackDex/fix-multiple-websocket-messages
Fix sending out multiple websocket notifications
2023-04-02 15:24:00 +02:00
BlackDex
bff54fbfdb Fix sending out multiple websocket notifications
For some reason I encountered a strange bug which resulted in sending
out multiple websocket notifications for the exact same user.

Added a `distinct()` to the query to filter out duplicate uuids.
2023-04-02 15:23:36 +02:00
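
In Diesel that is a one-method change; a sketch with made-up table and column names, not Vaultwarden's actual schema:

```rust
use diesel::prelude::*;

table! {
    devices (uuid) {
        uuid -> Text,
        user_uuid -> Text,
    }
}

// Load the device uuids for a user, deduplicated so each user is
// notified only once.
fn device_uuids(conn: &mut SqliteConnection, user: &str) -> QueryResult<Vec<String>> {
    devices::table
        .filter(devices::user_uuid.eq(user))
        .select(devices::uuid)
        .distinct()
        .load(conn)
}
```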
Daniel García
867c6ba056 Merge pull request #3398 from stefan0xC/dont-expect-kdf-memory-or-parallelism
always return KdfMemory and KdfParallelism
2023-04-02 15:22:42 +02:00
Daniel García
d1ecf03f44 Merge pull request #3397 from nikolaevn/feature/add-admin-reinvite-endpoint
support `/users/<uuid>/invite/resend` admin api
2023-04-02 15:21:51 +02:00
BlackDex
fc43608eec Revert setcap, update rust and crates
- Revert #3170 as discussed in #3387
  In hindsight it's better to not have this feature
- Update Dockerfile.j2 for easy version changes.
  Just change it in one place instead of multiple
- Updated Rust to the latest patched version
- Updated crates to latest available
- Pinned mimalloc to an older version, as it breaks on musl builds
2023-04-02 15:19:59 +02:00
Daniel García
15dd05c78d Merge pull request #3390 from BlackDex/fix-abort-pw-reset-on-mail-error
Fix abort on pw reset mail error
2023-04-02 15:19:53 +02:00
Nikolay Nikolaev
aa6f774f65 add check user state 2023-03-31 14:03:37 +03:00
Nikolay Nikolaev
379f885354 add mail check 2023-03-31 13:00:57 +03:00
Stefan Melmuk
39a5f2dbe8 clear kdf memory and parallelism with pbkdf2
when changing back from argon2id to PBKDF2 the unused parameters
should be set to 0.

also fix small bug in _register
2023-03-31 07:31:40 +02:00
Stefan Melmuk
0daaa9b175 always return KdfMemory and KdfParallelism
the client will ignore the value of these fields in case of `PBKDF2`
(whether they are unset or left from trying out `Argon2id` as KDF).

with `Argon2id` those fields should never be `null` but always in a
valid state. if they are `null` (how would that even happen?) the
client still assumes default values for `Argon2id` (i.e. m=64 and p=4)
and if they are set to something else login will fail anyway.
2023-03-31 01:10:28 +02:00
Nikolay Nikolaev
0c085d21ce fmt 2023-03-30 16:04:35 +03:00
Nikolay Nikolaev
dcaaa430f0 support /users/<uuid>/invite/resend admin api 2023-03-30 15:23:16 +03:00
BlackDex
2cda54ceff Fix password reset issues
A wrong macro was used to produce an error message when mailing the user
that their password was reset failed. It used `error!()`, which
does not return an `Err` and aborts the rest of the code.

This resulted in the user's password still being reset, but the user not
being notified. This PR fixes this by using `err!()`. Also, do not set the
user object as mutable until it really is needed.

Second, when a user was using the new Argon2id KDF with custom values
for memory and parallelism, the password would have been rendered
incorrect, because the endpoint which should return all the data did not
return the new Argon2id values.

Fixes #3388

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2023-03-30 09:41:13 +02:00
Daniel García
525e6bb65a Merge pull request #3376 from jjlin/knowndevices-nopad
Decode knowndevice `X-Request-Email` as base64url with no padding
2023-03-27 09:32:25 +02:00
Jeremy Lin
62cebebd3d Decode knowndevice X-Request-Email as base64url with no padding
The clients end up removing the padding characters [1][2].

[1] https://github.com/bitwarden/clients/blob/web-v2023.3.0/libs/common/src/misc/utils.ts#L141-L143
[2] https://github.com/bitwarden/mobile/blob/v2023.3.1/src/Core/Utilities/CoreHelpers.cs#L227-L234
2023-03-27 00:03:54 -07:00
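
With the `base64` crate this corresponds to the URL-safe, no-padding engine; a minimal sketch:

```rust
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use base64::Engine;

// Decode an `X-Request-Email` value sent as base64url without padding.
fn decode_request_email(header: &str) -> Option<String> {
    let bytes = URL_SAFE_NO_PAD.decode(header).ok()?;
    String::from_utf8(bytes).ok()
}
```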
Daniel García
3646f14042 Update web vault to v2023.3.0b 2023-03-26 14:10:51 +02:00
Daniel García
813e889c97 Merge pull request #3366 from BlackDex/some-fixes
Some small fixes and updates
2023-03-25 13:31:59 +01:00
BlackDex
8bcd0ab0c6 Some small fixes and updates
- Updated workflows to use new checkout version
  This probably fixes the curl download for hadolint also.
- Updated crates including Rocket to the latest rc3 :party:
- Applied 2 nightly clippy lints to prevent future clippy issues.
2023-03-25 12:51:42 +01:00
Daniel García
5725d297b4 Merge pull request #3363 from BlackDex/gha-test
Add support for Quay.io and GHCR.io as registries
2023-03-24 17:11:58 +01:00
Daniel García
a428f05e77 Merge pull request #3354 from stefan0xC/bulk-delete-endpoints
add endpoints to bulk delete collections/groups
2023-03-24 17:09:56 +01:00
BlackDex
467ecfdc99 Add support for Quay.io and GHCR.io as registries
- Added support for Quay.io
- Added support for GHCR.io

To enable support for these container image registries the following needs to be added.

As `Actions secrets and variables` - `Secrets`
- `DOCKERHUB_TOKEN` and `DOCKERHUB_USERNAME`
- `QUAY_TOKEN` and `QUAY_USERNAME`

As `Actions secrets and variables` - `Variables` - `Repository Variables`
- `DOCKERHUB_REPO`
- `GHCR_REPO`
- `QUAY_REPO`

The `DOCKERHUB_REPO` currently configured in `Secrets` can be removed if wanted, probably best after this PR has been merged.

If one of the vars/secrets is not configured, that specific registry will be skipped!
2023-03-23 16:38:27 +01:00
Stefan Melmuk
ed8091a994 don't use assert() in production code
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2023-03-23 00:26:28 +01:00
Stefan Melmuk
56cad93e0f add endpoint to bulk delete collections 2023-03-23 00:26:28 +01:00
Stefan Melmuk
3cf67e0b8d add endpoint to bulk delete groups 2023-03-23 00:26:26 +01:00
Daniel García
5800aceb2d Update web vault to v2023.3.0 and dependencies 2023-03-22 21:30:30 +01:00
Daniel García
729b563160 Merge pull request #3332 from BlackDex/merge-clientip-with-headers
Merge ClientIp with Headers.
2023-03-15 22:28:03 +01:00
Daniel García
6b5618a5fc Merge pull request #3348 from BlackDex/update-rust-and-crates
Update Rust, MSRV and Crates
2023-03-15 22:08:06 +01:00
Daniel García
2aa72eb240 Merge pull request #3329 from jjlin/knowndevices-header
Add support for `/api/devices/knowndevice` with HTTP header params
2023-03-15 22:03:15 +01:00
BlackDex
c8655c4f89 Update Rust, MSRV and Crates
- Updated all the crates
- Updated Rust and MSRV
2023-03-15 20:41:12 +01:00
Jeremy Lin
daaa03d1b3 Add support for /api/devices/knowndevice with HTTP header params
Upstream PR: https://github.com/bitwarden/server/pull/2682
2023-03-11 12:03:05 -08:00
BlackDex
9e5b94924f Merge ClientIp with Headers.
Since we now use the `ClientIp` guard in a lot more places, it also
increases the size of the binary and of the macro-generated code.
By merging the `ClientIp` guard with the several `Header` guards we
have, the amount of generated code (including LLVM IR) is reduced, and
we also gain a small speedup in build time.

I also spotted some small `json!()` optimizations which further reduced
the amount of generated code.
2023-03-11 16:58:32 +01:00
Mathijs van Veluw
f21089900e Merge pull request #3310 from BlackDex/msrv-changes
Upd Crates, Rust, MSRV, GHA and remove Backtrace
2023-03-07 12:06:05 +01:00
BlackDex
0c0e632bc9 Upd Crates, Rust, MSRV, GHA and remove Backtrace
- Changed MSRV to v1.65.
  Discussed this with @dani-garcia, and we will support **N-2**.
  This is/will be the same as for the `time` crate we use.
  Also updated the wiki regarding this https://github.com/dani-garcia/vaultwarden/wiki/Building-binary
- Removed backtrace crate in favor of `std::backtrace` stable since v1.65
- Updated Rust to v1.67.1
- Updated all the crates
- Updated the GHA action versions
- Adjusted the GHA MSRV build to extract the MSRV from `Cargo.toml`
2023-03-07 09:17:42 +01:00
Daniel García
a13a5bd1d8 Merge pull request #3315 from BlackDex/issue-3311
Fix web-vault Member UI show/edit/save
2023-03-06 21:13:34 +01:00
Daniel García
3b34b429f3 Merge pull request #3307 from jjlin/head-routes
Add HEAD routes to avoid spurious error messages
2023-03-06 21:12:54 +01:00
Daniel García
97ffd17789 Merge pull request #3289 from BlackDex/admin-token-hash-support
Admin token Argon2 hashing support
2023-03-06 21:12:41 +01:00
BlackDex
10c5476d31 Fix web-vault Member UI show/edit/save
There was a small bug left regarding the web-vault v2023.2.0 fixes.
This PR fixes the remaining items; I think all should be addressed now.
When editing a user, you were not able to see or edit groups, or see
which collections a user belonged to.

Fixes #3311
2023-03-06 17:07:21 +01:00
Jeremy Lin
d3626eba2a Add HEAD routes to avoid spurious error messages
Rocket automatically implements a HEAD route when there's a matching GET
route, but relying on this behavior also means a spurious error gets
logged due to <https://github.com/SergioBenitez/Rocket/issues/1098>.

Add explicit HEAD routes for `/` and `/alive` to prevent uptime monitoring
services from generating error messages like `No matching routes for HEAD /`.
With these new routes, `HEAD /` only checks that the server can respond over
the network, while `HEAD /alive` also checks that the database connection is
alive, similar to `GET /alive`.
2023-03-05 09:51:42 -08:00
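
Explicit HEAD routes in Rocket look roughly like this sketch (the real `/alive` handler also checks the database, omitted here):

```rust
use rocket::http::Status;

// Answers `HEAD /`: confirms only that the server is reachable.
#[rocket::head("/")]
fn head_root() -> Status {
    Status::Ok
}

// Answers `HEAD /alive`: the real route also verifies the database
// connection, like `GET /alive`.
#[rocket::head("/alive")]
fn head_alive() -> Status {
    Status::Ok
}
```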
BlackDex
de157b2654 Admin token Argon2 hashing support
Added Argon2 hashing support for the `ADMIN_TOKEN` instead
of only supporting a plain text string.

The hash must be a PHC string, which can be generated via the `argon2`
CLI **or** via the built-in hash command in Vaultwarden.

You can simply run `vaultwarden hash` to generate a hash based upon a
password the user provides themselves.

A warning is shown during startup and within the admin settings panel if
the `ADMIN_TOKEN` is not an Argon2 hash.

Within the admin environment a user can ignore that warning, and it will
not be shown for at least 30 days. After that the warning will appear
again unless the `ADMIN_TOKEN` has been converted to an Argon2 hash.

I have also tested this on my Raspberry Pi 2B, and there the `Bitwarden`
preset takes almost 4.5 seconds to generate/verify the Argon2 hash.

Using the `OWASP` preset it is below 1 second, which I think should be
fine for low-grade hardware. If needed, people could use lower
memory settings, but in those cases I doubt Vaultwarden itself
would run. They can always use the `argon2` CLI to generate a faster hash.
2023-03-04 16:15:30 +01:00
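
Verifying a PHC string with the `argon2` crate looks roughly like this (a sketch, not Vaultwarden's exact code):

```rust
use argon2::{
    password_hash::{PasswordHash, PasswordVerifier},
    Argon2,
};

// Check a provided admin token against the PHC string configured in
// `ADMIN_TOKEN` (e.g. "$argon2id$v=19$m=65540,t=3,p=4$...").
fn admin_token_matches(provided: &str, phc_string: &str) -> bool {
    match PasswordHash::new(phc_string) {
        Ok(parsed) => Argon2::default()
            .verify_password(provided.as_bytes(), &parsed)
            .is_ok(),
        Err(_) => false, // not a valid PHC string
    }
}
```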
Mathijs van Veluw
337cbfaf22 Merge pull request #3290 from dpinse/test
Fix confirmation for removing 2FA and deauthing sessions in admin panel
2023-03-01 06:38:28 +01:00
Dylan Pinsonneault
f88b6d961e Fix confirmation for removing 2FA and deauthing sessions in admin panel 2023-02-28 20:38:33 -05:00
Daniel García
0426051541 Merge pull request #3281 from BlackDex/fix-web-vault-issues
Fix the web-vault v2023.2.0 API calls
2023-02-28 23:45:59 +01:00
Daniel García
4556f668de Merge pull request #3288 from BlackDex/admin-interface-updates
Some Admin Interface updates
2023-02-28 23:43:01 +01:00
Daniel García
da8225a3bd Merge pull request #3282 from JCBird1012/main
Add confirmation for removing 2FA and deauthing sessions in admin panel
2023-02-28 23:42:47 +01:00
BlackDex
f10e6b6ac2 Some Admin Interface updates
- Updated datatables
- Added NTP Time check
- Added Collections, Groups and Events count for orgs
- Renamed `Items` to `Ciphers`
- Some small style updates
2023-02-28 20:43:22 +01:00
BlackDex
7ec00d3850 Fix the web-vault v2023.2.0 API calls
- Supports the new Collection/Group/User editing UI's
- Support `/partial` endpoint for cipher updating to allow folder and favorite updates for read-only ciphers.
- Prevent `Favorite`, `Folder`, `read-only` and `hide-passwords` from being added to the organizational sync.
- Added and corrected some `Object` keys in the output JSON.

Fixes #3279
2023-02-27 16:37:58 +01:00
Jonathan Elias Caicedo
8f8d7418ed Add confirmation for removing 2FA and deauth sessions in admin panel 2023-02-24 16:24:48 -05:00
Mathijs van Veluw
af6d17b701 Merge pull request #3277 from jjlin/org-vault-display
Fix vault item display in org vault view
2023-02-23 13:45:47 +01:00
Jeremy Lin
61183d001c Fix vault item display in org vault view
In the org vault view, the Bitwarden web vault currently tries to fetch the
groups for an org regardless of whether it claims to have group support.
If this errors out, no vault items are displayed.
2023-02-22 12:17:13 -08:00
Daniel García
024d12db08 Update web vault to v2023.2.0 and dependencies 2023-02-21 22:48:20 +01:00
Daniel García
dc7951efaf Add missing collections/details endpoint, based on the existing one 2023-02-21 21:58:37 +01:00
Daniel García
06e14fea55 Merge branch 'mittler-works-search_user_by_email' 2023-02-21 21:37:28 +01:00
Nils Mittler
0f656b4889 Apply rewording 2023-02-21 21:37:24 +01:00
Nils Mittler
6fa1dc50be Apply Admin Session Lifetime to JWT 2023-02-21 21:37:24 +01:00
Nils Mittler
2bb41367bc Make the admin cookie lifetime adjustable 2023-02-21 21:37:24 +01:00
Misterbabou
20d8886bfa Fix Collection Read Only access for groups
I messed up with indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-21 21:37:23 +01:00
BlackDex
59ef82b740 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:37:23 +01:00
BlackDex
fc543154c0 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:37:23 +01:00
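
The shape of that validation, sketched with a plain `Result` in place of Vaultwarden's error macro (struct and field selection are illustrative):

```rust
// Illustrative struct; the real connect data has many more fields.
struct ConnectData {
    device_identifier: Option<String>,
    device_name: Option<String>,
    device_type: Option<String>,
}

// Reject the login with a clear message instead of panicking later.
fn validate_device_fields(data: &ConnectData) -> Result<(), String> {
    for (value, name) in [
        (&data.device_identifier, "device_identifier"),
        (&data.device_name, "device_name"),
        (&data.device_type, "device_type"),
    ] {
        match value.as_deref() {
            Some(v) if !v.is_empty() => {}
            _ => return Err(format!("{name} cannot be blank")),
        }
    }
    Ok(())
}
```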
r3drun3
569b464157 docs: add build status badge in readme 2023-02-21 21:37:23 +01:00
Daniel García
adf83c698d Merge branch 'mittler-works-adjustable_admin_cookie_lifetime' 2023-02-21 21:30:19 +01:00
Misterbabou
8fcbc58ee2 Fix Collection Read Only access for groups
I messed up with indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-21 21:30:15 +01:00
BlackDex
2dcbb2be59 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:30:14 +01:00
BlackDex
7026e004e1 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:30:14 +01:00
r3drun3
a3084feaee docs: add build status badge in readme 2023-02-21 21:30:14 +01:00
Daniel García
e7d36de784 Merge branch 'Misterbabou-issue-3249' 2023-02-21 21:29:13 +01:00
BlackDex
54cc47b14e Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-21 21:29:09 +01:00
BlackDex
fac44888cd Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:29:08 +01:00
r3drun3
9f056523c9 docs: add build status badge in readme 2023-02-21 21:29:08 +01:00
Daniel García
0af1ef387d Merge branch 'BlackDex-issue-3247' 2023-02-21 21:27:35 +01:00
BlackDex
f95f40be15 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-21 21:27:31 +01:00
r3drun3
5c859e2e6c docs: add build status badge in readme 2023-02-21 21:27:31 +01:00
Daniel García
03ff5e6ece Merge branch 'BlackDex-fix-client-api-login-checks' 2023-02-21 21:26:57 +01:00
r3drun3
52d696aa74 docs: add build status badge in readme 2023-02-21 21:26:53 +01:00
Daniel García
a4e80712dd Merge branch 'R3DRUN3-new_branch' 2023-02-21 21:26:26 +01:00
Nils Mittler
a947e434f0 Apply rewording 2023-02-20 17:02:14 +01:00
Nils Mittler
2eb4f290a5 Apply Admin Session Lifetime to JWT 2023-02-20 16:51:09 +01:00
Nils Mittler
8ae799a771 Add function to fetch user by email address 2023-02-20 16:39:56 +01:00
Nils Mittler
9a5f3a5015 Make the admin cookie lifetime adjustable 2023-02-20 16:10:30 +01:00
BlackDex
1ca0d6e245 Validate all needed fields for client API login
During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR adds checks for these fields and makes sure it returns a better error message instead of causing a panic.
2023-02-19 18:16:06 +01:00
Misterbabou
7f69eebeb1 Fix Collection Read Only access for groups
I messed up with indentation, sorry, it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification
2023-02-17 14:17:18 +01:00
BlackDex
32bd9b83a3 Fix Organization delete when groups are configured
With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes #3247
2023-02-16 17:29:12 +01:00
r3drun3
477d60de49 docs: add build status badge in readme 2023-02-15 10:15:42 +01:00
Mathijs van Veluw
1ba8275dcb Merge pull request #3234 from BlackDex/update-rust-and-crates
Updated Rust and crates
2023-02-13 12:39:26 +01:00
BlackDex
a0a4994250 Updated Rust and crates
- Updated Rust to v1.67.0
- Updated all crates except for `cookies` and `webauthn`
2023-02-13 08:32:01 +01:00
Daniel García
32dfa41970 Merge pull request #3147 from soruh/main
add support for system mta through sendmail
2023-02-12 19:40:33 +01:00
Daniel García
f92efda0f0 Merge branch 'main' into main 2023-02-12 19:40:04 +01:00
Daniel García
3b0f643e9d Merge pull request #3210 from tessus/feature/kdf-options
add argon2 kdf fields
2023-02-12 19:23:22 +01:00
Daniel García
5bcee24f88 Merge branch 'main' into feature/kdf-options 2023-02-12 19:23:14 +01:00
soruh
9e3d7ea44c add EXE_SUFFIX to sendmail executable when not specified 2023-02-12 18:55:15 +01:00
soruh
8cc6dac893 check if SENDMAIL_COMMAND is valid using 'which' crate 2023-02-12 18:55:15 +01:00
soruh
b7c4316c77 Add support for sendmail as a mail transport 2023-02-12 18:54:59 +01:00
Daniel García
0c295d5e6e Merge pull request #3167 from BlackDex/issue-3166
Fix Javascript issue on non sqlite databases
2023-02-12 18:48:03 +01:00
Daniel García
bc49d1f90d Merge branch 'main' into issue-3166 2023-02-12 18:47:55 +01:00
Daniel García
6f6d9dee83 Merge pull request #3108 from farodin91/allow-editing/unhiding-by-group
allow editing/unhiding by group
2023-02-12 18:47:02 +01:00
Daniel García
cef5dd4a46 Merge branch 'main' into allow-editing/unhiding-by-group 2023-02-12 18:46:53 +01:00
Daniel García
79061c0eb5 Merge pull request #3231 from kpfleming/icon-blacklist-improvements
Generate distinct log messages for regex vs. IP blacklisting.
2023-02-12 18:43:26 +01:00
Daniel García
6e2c3fc1cc Merge branch 'main' into icon-blacklist-improvements 2023-02-12 18:43:19 +01:00
Daniel García
e301fe137f Merge pull request #3228 from BlockListed/fix-domain-description
Fix trailing slash not getting removed from domain
2023-02-12 18:42:55 +01:00
Daniel García
af69c83db2 Merge branch 'main' into fix-domain-description 2023-02-12 18:42:49 +01:00
Daniel García
53fa8da5b1 Merge pull request #3215 from stefan0xC/fix-post-emergency-access
don't nullify key when editing emergency access
2023-02-12 18:42:30 +01:00
Daniel García
c58aac585b Merge branch 'main' into fix-post-emergency-access 2023-02-12 18:42:21 +01:00
Daniel García
8c1117fcbf Merge pull request #3170 from jjlin/cap_net_bind_service
Allow listening on privileged ports (below 1024) as non-root
2023-02-12 18:42:00 +01:00
Daniel García
a6dd4f1206 Merge branch 'main' into cap_net_bind_service 2023-02-12 18:41:45 +01:00
Daniel García
5af1799991 Merge pull request #3145 from dlehammer/spell-jack_mitigation
"Spell-Jacking" mitigation ~ prevent sensitive data leak …
2023-02-12 18:39:54 +01:00
Daniel García
a20a641de3 Merge branch 'main' into spell-jack_mitigation 2023-02-12 18:39:27 +01:00
Daniel García
8abd38573b Merge pull request #3116 from sirux88/admin-password-reset
Admin password reset
2023-02-12 18:38:50 +01:00
Daniel García
78abdf0e9d Merge branch 'main' into admin-password-reset 2023-02-12 18:38:36 +01:00
Daniel García
dc031d8d86 Merge pull request #2561 from BlackDex/re-license
Re-License Vaultwarden to AGPLv3
2023-02-12 18:35:35 +01:00
Daniel García
de6330b09d Merge branch 'main' into re-license 2023-02-12 18:35:09 +01:00
Helmut K. C. Tessarek
68bcc7a4b8 add argon2 kdf fields 2023-02-07 13:52:52 -05:00
BlockListed
c04a1352cb remove warn when sanitizing domain 2023-02-07 18:49:26 +01:00
BlockListed
5d1c11ceba fix trailing slash in configuration builder 2023-02-07 18:42:36 +01:00
BlockListed
a2aa7c9bc2 Revert "fix trailing slash not being removed from domain"
This reverts commit 679bc7a59b.
2023-02-07 18:41:24 +01:00
Jan Jansen
b3a351ccb2 allow editing/unhiding by group
Fixes #2989

Signed-off-by: Jan Jansen <jan.jansen@gdata.de>
2023-02-07 16:20:36 +01:00
BlockListed
679bc7a59b fix trailing slash not being removed from domain 2023-02-07 13:03:28 +01:00
BlockListed
a72d0b518f remove documentation of bug since I'm fixing it 2023-02-07 12:48:48 +01:00
Kevin P. Fleming
6741b25907 Ensure that all results from check_domain_blacklist_reason are cached. 2023-02-07 05:54:06 -05:00
Kevin P. Fleming
24b5784f02 Generate distinct log messages for regex vs. IP blacklisting.
When an icon will not be downloaded due to matching a configured
blacklist, ensure that the log message indicates the type of blacklist
that was matched.
2023-02-07 05:24:23 -05:00
BlockListed
eb9b481eba improve wording of domain description 2023-02-07 08:49:05 +01:00
BlockListed
64edc49392 change description of domain configuration
Vaultwarden Send won't work if the domain includes a trailing slash.
This should be documented, as it may lead to confusion among users.
2023-02-06 23:19:08 +01:00
sirux88
0d1753ac74 completly hide reset password policy
on email disabled instances
2023-02-05 16:47:23 +01:00
sirux88
a6558f5548 rust lang specific improvements 2023-02-05 16:34:48 +01:00
sirux88
62dfeb80f2 improved security, disabling policy usage on
email-disabled clients and some refactoring
2023-02-04 13:29:57 +01:00
sirux88
26cd5d9643 Replaced wrong mysql column type 2023-02-04 09:23:13 +01:00
Stefan Melmuk
e65fbbfc21 don't nullify key when editing emergency access
the client does not send the key on every update of an emergency access
contact so the field would be emptied on a change of the wait days or access level.
2023-02-01 23:10:09 +01:00
Jeremy Lin
a2162f4d69 Allow listening on privileged ports (below 1024) as non-root
This is done by running `setcap cap_net_bind_service=+ep` on the executable
in the build stage (doing it in the runtime stage creates an extra copy of
the executable that bloats the image). This only works when using the
BuildKit-based builder, since the `COPY` instruction doesn't copy
capabilities on the legacy builder.
2023-02-01 00:35:33 -08:00
BlackDex
c9ed9aa733 Fix Javascript issue on non sqlite databases
When a non-SQLite database is used, loading the admin interface fails
because the backup button is not generated.
This PR solves it by checking whether the elements are valid.

Also made some other changes and fixed some eslint errors.
Showing `_post` errors is better now.

Updated jquery to the latest version.

Fixes #3166
2023-01-26 20:34:25 +01:00
Daniel Hammer
9b20decdc1 "Spell-Jacking" mitigation ~ prevent sensitive data leak from spell checker.
@see https://www.otto-js.com/news/article/chrome-and-edge-enhanced-spellcheck-features-expose-pii-even-your-passwords
2023-01-25 22:35:18 +01:00
sirux88
adaefc8628 fixes for current upstream main 2023-01-25 08:09:26 +01:00
sirux88
c6c45c4c49 working implementation 2023-01-25 08:06:21 +01:00
sirux88
95494083f2 added database migration 2023-01-25 08:06:21 +01:00
Jeremy Lin
686474f815 Disable Hadolint check for consecutive RUN instructions (DL3059)
This check doesn't seem to add enough value to justify the difficulties it
tends to create when generating `RUN` instructions from a template.
2023-01-24 13:11:13 -08:00
Jeremy Lin
2c6bd8c9dc Rename .buildx Dockerfiles to .buildkit
This is a more accurate name, since these Dockerfiles require BuildKit, not Buildx.
2023-01-24 13:11:12 -08:00
Daniel García
9366e31452 Merge pull request #3164 from jjlin/remove-arm32v6-tag
Remove `arm32v6`-specific tag
2023-01-24 21:39:25 +01:00
Jeremy Lin
96ff32fb2f Remove arm32v6-specific tag
This section of code seems to be breaking the Docker release workflow as of a
few days ago, though it's unclear why. This tag only existed to work around
an issue with Docker pulling the wrong image for ARMv6 platforms; that issue
was resolved in Docker 20.10.0, which has been out for a few years now, so it
seems like a reasonable time to drop this tag.
2023-01-24 12:33:25 -08:00
BlackDex
9342fa5744 Re-License Vaultwarden to AGPLv3
This commit prepares Vaultwarden for the Re-Licensing to AGPLv3
Solves #2450
2023-01-24 20:49:11 +01:00
Daniel García
50fc22966c Updated web vault to 2023.1.1 and rust dependencies 2023-01-24 20:39:09 +01:00
Daniel García
4fab4c74ff Merge branch 'BlackDex-update-kdf-config' 2023-01-24 20:06:30 +01:00
BlackDex
e38e1a5d5f Validate note sizes on key-rotation.
We also need to validate the note sizes on key-rotation.
If we do not validate them before we store them, that could lead to a
partial or total loss of the password vault. Validating these
restrictions before actually processing them to store/replace the
existing ciphers should prevent this.

There was also a small bug when using web-sockets. The client which is
triggering the password/key-rotation change should not be forced to
logout via a web-socket request. That is something the client will
handle itself. Refactored the logout notification to either send the
device uuid or not on specific actions.

Fixes #3152
2023-01-24 20:05:09 +01:00
sirux88
cc91ac6cc0 include key into user.set_password 2023-01-24 20:04:05 +01:00
BlackDex
2d8c8e18f7 Update KDF Configuration and processing
- Change default Password Hash KDF Storage from 100_000 to 600_000 iterations
- Update Password Hash when the default iteration value is different
- Validate password_iterations
- Validate client-side KDF to prevent it from being set lower than 100_000
2023-01-24 19:49:12 +01:00
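
The iteration floor from the last two bullets, as a small sketch (constants and helper are named for illustration):

```rust
// Floor for client-side PBKDF2 iterations, per the change above.
const MIN_KDF_ITERATIONS: i32 = 100_000;
// New default for server-side password hash storage.
const DEFAULT_ITERATIONS: i32 = 600_000;

// Fall back to the new default when unset; reject values below the floor.
fn effective_iterations(requested: Option<i32>) -> Result<i32, &'static str> {
    match requested {
        None => Ok(DEFAULT_ITERATIONS),
        Some(n) if n < MIN_KDF_ITERATIONS => Err("KDF iterations must be at least 100000"),
        Some(n) => Ok(n),
    }
}
```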
Daniel García
b17e2da2cf Merge branch 'BlackDex-issue-3152' 2023-01-24 19:47:20 +01:00
sirux88
d121cce0d2 include key into user.set_password 2023-01-24 19:47:14 +01:00
Daniel García
0eba7a88fa Merge branch 'sirux88-refactoring-user-setpassword' 2023-01-24 19:46:35 +01:00
BlackDex
34ac16e9d7 Validate note sizes on key-rotation.
We also need to validate the note sizes on key-rotation.
If we do not validate them before we store them, that could lead to a
partial or total loss of the password vault. Validating these
restrictions before actually processing them to store/replace the
existing ciphers should prevent this.

There was also a small bug when using web-sockets. The client which is
triggering the password/key-rotation change should not be forced to
logout via a web-socket request. That is something the client will
handle itself. Refactored the logout notification to either send the
device uuid or not on specific actions.

Fixes #3152
2023-01-24 09:30:10 +01:00
sirux88
906d9e2f1a Merge branch 'refactoring-user-setpassword' of https://github.com/sirux88/vaultwarden into refactoring-user-setpassword 2023-01-14 10:16:56 +01:00
sirux88
623d84aeb5 include key into user.set_password 2023-01-14 10:16:03 +01:00
sirux88
f8122cd2ca include key into user.set_password 2023-01-13 12:10:33 +01:00
Daniel García
9b7e86efc2 Update web vault to 2023.1.0 2023-01-12 19:49:06 +01:00
Daniel García
e7ccfbdd0e Merge branch 'BlackDex-optimize-ciphersync' 2023-01-12 19:19:01 +01:00
BlackDex
acc1474394 Add avatar color support
The new web-vault v2023.1.0 supports a custom color for the avatar.
https://github.com/bitwarden/server/pull/2330

This PR adds this feature.
2023-01-12 19:18:57 +01:00
BlackDex
c90b3031a6 Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 19:18:57 +01:00
BlackDex
aaffb2e007 Add MFA icon to org member overview
The Organization member overview supports showing an icon indicating
whether the user has MFA enabled. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:18:57 +01:00
GeekCorner
e0e95e95e4 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:18:57 +01:00
BlackDex
fa70b440d0 Fix remaining inline format 2023-01-12 19:18:56 +01:00
Rychart Redwerkz
42acb2ebb6 Use more modern meta tag for charset encoding 2023-01-12 19:18:56 +01:00
Daniel García
174bea8d6e Merge branch 'BlackDex-add-avatar-color-feature' 2023-01-12 19:17:22 +01:00
BlackDex
f68a57950b Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 19:17:16 +01:00
BlackDex
f747bf126b Add MFA icon to org member overview
The Organization member overview supports showing an icon indicating
whether the user has MFA enabled. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:17:15 +01:00
GeekCorner
1ca197fd46 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:17:15 +01:00
BlackDex
63d05d929b Fix remaining inline format 2023-01-12 19:17:15 +01:00
Rychart Redwerkz
ef5bf5d326 Use more modern meta tag for charset encoding 2023-01-12 19:17:15 +01:00
Daniel García
9d6e35d803 Merge branch 'BlackDex-update-rust-fix-cve' 2023-01-12 19:16:32 +01:00
BlackDex
0cccdcab83 Add MFA icon to org member overview
The Organization member overview supports showing an icon indicating
whether the user has MFA enabled. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-12 19:16:28 +01:00
GeekCorner
6607faa390 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:16:28 +01:00
BlackDex
6fcf18ab51 Fix remaining inline format 2023-01-12 19:16:28 +01:00
Rychart Redwerkz
d122c10573 Use more modern meta tag for charset encoding 2023-01-12 19:16:28 +01:00
Daniel García
ae9553ca1c Merge branch 'BlackDex-add-mfa-icon-to-orgs' 2023-01-12 19:16:16 +01:00
GeekCorner
ff919039c9 fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-12 19:16:12 +01:00
BlackDex
80eb15d46a Fix remaining inline format 2023-01-12 19:16:11 +01:00
Rychart Redwerkz
c36b870c54 Use more modern meta tag for charset encoding 2023-01-12 19:16:11 +01:00
Daniel García
b7cbca590c Merge branch 'GeekCornerGH-fix/2fa_directory-csp' 2023-01-12 19:15:42 +01:00
BlackDex
606a1bbfcb Fix remaining inline format 2023-01-12 19:15:38 +01:00
Rychart Redwerkz
3e5369c8dd Use more modern meta tag for charset encoding 2023-01-12 19:15:38 +01:00
Daniel García
dd5e4cec73 Merge branch 'BlackDex-fix-remaining-inline' 2023-01-12 19:15:14 +01:00
Rychart Redwerkz
a31a040abd Use more modern meta tag for charset encoding 2023-01-12 19:15:06 +01:00
Daniel García
f0125b95c1 Merge branch 'redwerkz-patch-1' 2023-01-12 19:14:49 +01:00
BlackDex
072f2e24c2 Update Rust to v1.66.1 to patch CVE
This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.
2023-01-12 09:45:52 +01:00
BlackDex
36b5350f9b Add avatar color support
The new web-vault v2023.1.0 supports a custom color for the avatar.
https://github.com/bitwarden/server/pull/2330

This PR adds this feature.
2023-01-11 22:20:03 +01:00
BlackDex
c7489c9fdf Add MFA icon to org member overview
The Organization member overview supports showing an icon indicating
whether the user has MFA enabled. This PR adds this feature.

This is very useful if you want to enforce MFA, for example.
2023-01-11 22:13:20 +01:00
BlackDex
3181e4e96e Optimize CipherSyncData for very large vaults
As mentioned in #3111, using a very large vault causes some issues,
mainly because of a SQLite limit, but it could also cause issues on
MariaDB/MySQL or PostgreSQL. It also uses a lot of memory and memory
allocations.

This PR solves this by removing the need for all the cipher_uuids just
to gather the correct attachments.

It will use the user_uuid and org_uuids to get all attachments linked
to both, whether the user has access to them or not. This isn't an
issue, since the matching is done per cipher and the attachment data is
only returned if there is a matching cipher the user has access to.

I also modified some code to be able to use `::with_capacity(n)` where
possible. This prevents re-allocations if the `Vec` increases size,
which will happen a lot if there are a lot of ciphers.

According to my tests measuring the time it takes to sync, it seems to
have lowered the duration a bit more.

Fixes #3111
2023-01-11 20:23:53 +01:00
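
The `::with_capacity(n)` idiom mentioned above, in miniature (names are illustrative):

```rust
// Pre-sizing the Vec to the number of ciphers avoids repeated
// re-allocations while it grows.
fn cipher_uuids(ciphers: &[(String, String)]) -> Vec<String> {
    let mut uuids = Vec::with_capacity(ciphers.len());
    for (uuid, _data) in ciphers {
        uuids.push(uuid.clone());
    }
    uuids
}
```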
GeekCorner
2ee0d53c5f fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory 2023-01-10 09:41:35 +01:00
Rychart Redwerkz
dfa629ecc7 Use more modern meta tag for charset encoding 2023-01-10 00:24:37 +01:00
BlackDex
92dc48b882 Fix remaining inline format 2023-01-09 20:41:31 +01:00
Daniel García
367e1ce289 Merge pull request #3065 from BlackDex/future-clippy-fixes
Resolve uninlined_format_args clippy warnings
2023-01-09 20:17:06 +01:00
BlackDex
7390f34355 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 20:13:48 +01:00
Daniel García
c47d9f6593 Fix some lints: explicit Arc::clone, and unnecessary return after unreachable! 2023-01-09 19:54:25 +01:00
Daniel García
5399ee8208 Merge branch 'BlackDex-update-libraries' 2023-01-09 19:19:11 +01:00
pjsier
117045e6d3 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:59 +01:00
BlackDex
912ad64555 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:18:58 +01:00
BlackDex
00855ee31d Fix failing large note imports
When importing to Vaultwarden (or Bitwarden), notes larger than 10_000
encrypted characters are invalid. For one, this isn't compatible with
Bitwarden, and some clients tend to break on very large notes.

We already added a check for this limit when adding a single cipher, but
this caused issues during import and could result in a partially imported
vault. Bitwarden does some validation before actually running the data
through the import process and generates a special error message which
helps the user identify which items are invalid during the import.

This PR adds that validation check and returns the same kind of error.
Fixes #3048
2023-01-09 19:18:19 +01:00
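
The up-front validation amounts to scanning every item before touching the database; a sketch with illustrative names:

```rust
// Limit on encrypted note length, matching upstream Bitwarden.
const MAX_NOTE_SIZE: usize = 10_000;

// Validate the whole import first and report the offending items,
// instead of failing midway and leaving a partially imported vault.
fn validate_import_notes(notes: &[Option<String>]) -> Result<(), String> {
    let bad: Vec<usize> = notes
        .iter()
        .enumerate()
        .filter(|(_, n)| n.as_deref().map_or(false, |n| n.len() > MAX_NOTE_SIZE))
        .map(|(i, _)| i)
        .collect();
    if bad.is_empty() {
        Ok(())
    } else {
        Err(format!("notes exceed {MAX_NOTE_SIZE} encrypted characters (items {bad:?})"))
    }
}
```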
pjsier
c18a273b4a Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:18 +01:00
pjsier
ca24a4adf1 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:18:18 +01:00
BlackDex
a263aaa481 Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:18:18 +01:00
Rychart Redwerkz
0a20ba0020 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:18:09 +01:00
Jeremy Lin
6541600af6 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:17:46 +01:00
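
Illustrative only (a minimal Rocket 0.5 handler, not the actual Vaultwarden route): returning `Json<...>` makes Rocket emit a `Content-Type: application/json` header instead of `text/plain`.

```rust
use rocket::serde::json::Json;

// The revision date is a plain number, but wrapping it in Json gives
// the response the application/json content type the clients expect.
#[rocket::get("/accounts/revision-date")]
fn revision_date() -> Json<i64> {
    Json(1_673_288_207_000) // example epoch-milliseconds value
}

#[rocket::launch]
fn rocket() -> _ {
    rocket::build().mount("/api", rocket::routes![revision_date])
}
```
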
Daniel García
525979d5d9 Merge branch 'BlackDex-issue-3048' 2023-01-09 19:17:30 +01:00
pjsier
7dd1959eba Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:17:13 +01:00
pjsier
e266b39254 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:17:13 +01:00
BlackDex
e935989fee Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:17:13 +01:00
Rychart Redwerkz
25c401f64d Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:17:03 +01:00
Jeremy Lin
18b72da657 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:16:47 +01:00
Daniel García
e8e6c89927 Merge branch 'BlackDex-future-clippy-fixes' 2023-01-09 19:16:35 +01:00
pjsier
fd5f657334 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:16:30 +01:00
Rychart Redwerkz
da9605f2d2 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-09 19:16:30 +01:00
pjsier
7030de32d5 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:16:30 +01:00
Jeremy Lin
b67c5b77be Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:16:29 +01:00
BlackDex
d30878c4ea Resolve uninlined_format_args clippy warnings
The upcoming release of Rust 1.67.0 will warn on `uninlined_format_args`.
This PR resolves that by inlining all these items.
It also looks nicer.
2023-01-09 19:12:51 +01:00
BlackDex
6be26f0a38 Fix failing large note imports
When importing to Vaultwarden (or Bitwarden), notes larger than 10_000
encrypted characters are invalid. This is because, for one, it isn't
compatible with Bitwarden, and some clients tend to break on very large
notes.

We already added a check for this limit when adding a single cipher, but
this caused issues during import and could cause a partially imported
vault. Bitwarden does some validations before actually running it
through the import process and generates a special error message which
helps the user identify which items are invalid during the import.

This PR adds that validation check and returns the same kind of error.
Fixes #3048
2023-01-09 19:11:58 +01:00
Daniel García
34a6bfaefa Merge branch 'stapelkai-main' 2023-01-09 19:11:31 +01:00
Jeremy Lin
1c8749eb4d Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-09 19:11:27 +01:00
Andrés Maldonado
1198c36a2b Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-09 19:11:27 +01:00
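
A small sketch using the `percent-encoding` crate already in Vaultwarden's dependency list (the URL shape is made up for illustration):

```rust
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

fn main() {
    let org_name = "My Test Org";
    // Encode the name so spaces survive in strict email clients.
    let encoded = utf8_percent_encode(org_name, NON_ALPHANUMERIC).to_string();
    println!("https://vault.example.com/#/accept-organization?organizationName={encoded}");
    // prints ...organizationName=My%20Test%20Org
}
```
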
BlackDex
41e6c1a383 Optimize config loading messages
As discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

To make the whole process a bit nicer, this PR adjusts the way it
reports issues and makes some small changes to the messages so it is all
a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added an **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 19:11:27 +01:00
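
The "exit instead of panic" point, sketched under assumed message wording:

```rust
use std::process::exit;

fn load_config_or_exit(path: &str) -> String {
    match std::fs::read_to_string(path) {
        Ok(contents) => contents,
        Err(e) => {
            // A clear error plus exit code 255 instead of panic!():
            // no stacktrace, and a code distinct from the other exits.
            eprintln!("Error loading config file '{path}': {e}");
            exit(255);
        }
    }
}

fn main() {
    let _config = load_config_or_exit("data/config.json");
}
```
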
BlackDex
0042c3e4a7 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
from the client device executing the request, since the clients ignore
a websocket notification when the uuid matches their own. This also
fixes some issues with the Desktop client, which can modify attachments
within the same screen, causing an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused
an error when saving the vault cipher afterwards, complaining about a
missing attachment. Bitwarden ignores this and continues with the
remaining attachments (if any). This also fixes #2591.

Furthermore, some more websocket notifications have been added to other
functions, which enhances the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 19:11:27 +01:00
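
The client-side rule this relies on, sketched abstractly (field names assumed): a client drops notifications whose context id equals its own device uuid, so the server must stamp each notification with the uuid of the device that caused it.

```rust
struct Notification {
    context_id: String, // uuid of the device that triggered the change
}

/// A client only processes notifications it did not cause itself.
fn should_process(own_device_uuid: &str, n: &Notification) -> bool {
    n.context_id != own_device_uuid
}

fn main() {
    let own = Notification { context_id: "dev-123".into() };
    let other = Notification { context_id: "dev-456".into() };
    assert!(!should_process("dev-123", &own));   // own echo: ignored
    assert!(should_process("dev-123", &other));  // another device: processed
}
```
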
pjsier
724190f262 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:11:26 +01:00
BlackDex
6867d23ca2 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 19:11:26 +01:00
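
A minimal version of the described check (error wording assumed):

```rust
fn validate_yubico_server(url: &str) -> Result<(), String> {
    // An empty or non-HTTPS value silently breaks the whole yubikey
    // flow, so reject it when the config is validated.
    if !url.starts_with("https://") {
        return Err("`YUBICO_SERVER` must start with 'https://'".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_yubico_server("").is_err());
    assert!(validate_yubico_server("https://api.yubico.com/wsapi/2.0/verify").is_ok());
}
```
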
BlackDex
de26af0c2d Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 19:11:26 +01:00
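
To make the CSP part concrete, a hedged sketch of the kind of policy this moves toward (directive values illustrative, not Vaultwarden's exact header):

```rust
fn build_csp() -> String {
    // Scripts may only come from files served by the host itself;
    // `unsafe-inline` stays for styles only, since dropping it there
    // still breaks the web-vault.
    [
        "default-src 'self'",
        "script-src 'self'",
        "style-src 'self' 'unsafe-inline'",
    ]
    .join("; ")
}

fn main() {
    println!("Content-Security-Policy: {}", build_csp());
}
```
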
Alex Martel
3f223a7514 Remove patched multer-rs 2023-01-09 19:11:25 +01:00
Daniel García
23f5a62d61 Merge branch 'jjlin-json-response' 2023-01-09 19:11:00 +01:00
Andrés Maldonado
81e2054f59 Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-09 19:10:57 +01:00
BlackDex
f9337effa5 Optimize config loading messages
As discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

To make the whole process a bit nicer, this PR adjusts the way it
reports issues and makes some small changes to the messages so it is all
a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added an **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 19:10:56 +01:00
BlackDex
2972904eb8 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
from the client device executing the request, since the clients ignore
a websocket notification when the uuid matches their own. This also
fixes some issues with the Desktop client, which can modify attachments
within the same screen, causing an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused
an error when saving the vault cipher afterwards, complaining about a
missing attachment. Bitwarden ignores this and continues with the
remaining attachments (if any). This also fixes #2591.

Furthermore, some more websocket notifications have been added to other
functions, which enhances the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 19:10:56 +01:00
pjsier
bdd918b4d4 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 19:10:56 +01:00
BlackDex
88085fe17b Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 19:10:56 +01:00
BlackDex
2020a302d0 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 19:10:55 +01:00
Alex Martel
ab2dd0f300 Remove patched multer-rs 2023-01-09 19:10:55 +01:00
BlackDex
8e6fd4b4a1 Update dependencies and MSRV
- Updated dependencies.
  This includes a yanked openssl crate version we currently use.
- Updated MSRV to v1.61.0 because hashbrown/cached has this version restriction.
2023-01-09 19:10:11 +01:00
Daniel García
988d24927e Merge branch 'am97-main' 2023-01-09 18:25:41 +01:00
BlackDex
e945d16fcf Optimize config loading messages
As discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

To make the whole process a bit nicer, this PR adjusts the way it
reports issues and makes some small changes to the messages so it is all
a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added an **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-09 18:25:36 +01:00
BlackDex
f1c0aa4f83 Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
from the client device executing the request, since the clients ignore
a websocket notification when the uuid matches their own. This also
fixes some issues with the Desktop client, which can modify attachments
within the same screen, causing an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused
an error when saving the vault cipher afterwards, complaining about a
missing attachment. Bitwarden ignores this and continues with the
remaining attachments (if any). This also fixes #2591.

Furthermore, some more websocket notifications have been added to other
functions, which enhances the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 18:25:36 +01:00
pjsier
68362d06b3 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:25:36 +01:00
BlackDex
f65c0e2ac8 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:25:36 +01:00
BlackDex
0f588ced03 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:25:35 +01:00
Alex Martel
b0f03bb49c Remove patched multer-rs 2023-01-09 18:25:35 +01:00
Daniel García
5063661028 Merge branch 'BlackDex-optimize-config-loading-messages' 2023-01-09 18:25:25 +01:00
BlackDex
7e66ab78ff Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
from the client device executing the request, since the clients ignore
a websocket notification when the uuid matches their own. This also
fixes some issues with the Desktop client, which can modify attachments
within the same screen, causing an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused
an error when saving the vault cipher afterwards, complaining about a
missing attachment. Bitwarden ignores this and continues with the
remaining attachments (if any). This also fixes #2591.

Furthermore, some more websocket notifications have been added to other
functions, which enhances the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match Bitwarden naming.
2023-01-09 18:25:18 +01:00
pjsier
665e275dc5 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:25:18 +01:00
BlackDex
a6da728cca Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:25:17 +01:00
BlackDex
04e02d7f9f Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:25:17 +01:00
Alex Martel
7c739dd58e Remove patched multer-rs 2023-01-09 18:25:17 +01:00
Daniel García
05a552910c Merge branch 'BlackDex-update-notifications' 2023-01-09 18:24:00 +01:00
pjsier
c990837066 Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2023-01-09 18:23:56 +01:00
BlackDex
57aec37507 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:23:56 +01:00
BlackDex
0c5b4476ad Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:23:56 +01:00
Alex Martel
17141147a8 Remove patched multer-rs 2023-01-09 18:23:55 +01:00
Daniel García
193c2fa860 Merge branch 'pjsier-fix/log-file-permissions-3055' 2023-01-09 18:23:28 +01:00
BlackDex
6d01aaa80f Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2023-01-09 18:23:24 +01:00
BlackDex
ad60eaa0f3 Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:23:23 +01:00
Alex Martel
d878face07 Remove patched multer-rs 2023-01-09 18:23:23 +01:00
Daniel García
8bf8388cd6 Merge branch 'BlackDex-issue-3003' 2023-01-09 18:23:11 +01:00
BlackDex
b4db853bcb Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2023-01-09 18:23:07 +01:00
Alex Martel
5ee94c0ba9 Remove patched multer-rs 2023-01-09 18:23:07 +01:00
Daniel García
f108349547 Merge branch 'BlackDex-remove-inline-js' 2023-01-09 18:22:55 +01:00
Alex Martel
d25e1ab94b Remove patched multer-rs 2023-01-09 18:22:50 +01:00
Daniel García
79fee269ee Merge branch 'manofthepeace-removePatchedMulter' 2023-01-09 18:22:39 +01:00
Rychart Redwerkz
ffe362f856 Merge pull request #2 from stapelkai/update-viewport-meta-tag
Remove `shrink-to-fit=no`
2023-01-09 02:18:29 +01:00
Rychart Redwerkz
04bb15a802 Remove shrink-to-fit=no
This was a workaround needed for iOS versions before 9.3 and is not part of the recommended viewport meta tag anymore.
https://www.scottohara.me/blog/2018/12/11/shrink-to-fit.html
2023-01-08 23:18:55 +01:00
Jeremy Lin
4d9d649db9 Change text/plain API responses to application/json
Recent versions of the Bitwarden clients (see bitwarden/clients#3574)
won't parse non-JSON responses. The most noticeable consequence is that
`/api/accounts/revision-date` responses won't be parsed, leading to
`/api/sync` always being called, even when it's not necessary.
2023-01-07 10:41:28 -08:00
Andrés Maldonado
2897c24e83 Percent-encode org_name in links
If org_name contains spaces, the generated link will not work in some email clients unless it is percent-encoded
2023-01-03 12:51:44 +01:00
BlackDex
5964dc95f0 Optimize config loading messages
As discussed in #3090, the messages regarding loading the
configuration files are a bit strange or unclear. There have been some
other reports regarding this in the past, but it wasn't that big of a
deal.

To make the whole process a bit nicer, this PR adjusts the way it
reports issues and makes some small changes to the messages so it is all
a bit clearer.

- Do not report a missing `.env` file, but only send a message when using one.
- Exit instead of panic; a panic causes a stacktrace, which isn't needed
  here. I'm using exit code 255 here so it is different from the other
  exits we use.
- Exit on more issues, since if we continue, it could cause
  configuration issues if the user thinks all is fine.
- Use the actual env file used in the messages instead of `.env`.
- Added an **INFO** message when loading the `config.json`.
  This makes it consistent with the info message for loading the env file.

Resolves #3090
2023-01-02 18:18:28 +01:00
BlackDex
613b2519ed Removed unsafe-inline JS from CSP and other fixes
- Removed `unsafe-inline` for javascript from CSP.
  The admin interface now uses files instead of inline javascript.
- Modified the javascript to work without being inline.
- Ran eslint over the javascript and fixed some items.
- Added a `to_json` Handlebars helper.
  Used on the diagnostics page.
- Changed `AdminTemplateData` struct to be smaller.
  The `config` was always added, but only used on one page.
  Same goes for `can_backup` and `version`.
- Also inlined CSS.
  We can't remove the `unsafe-inline` from css, because that seems to
  break the web-vault currently. That might need some further checks.
  But for now the 404 page and all the admin pages are clear of inline scripts and styles.
2022-12-31 22:17:16 +01:00
BlackDex
996b60e43d Update WebSocket Notifications
Previously the websocket notifications were using `app_id` as the
`ContextId`. This was incorrect; it should have been the device_uuid
from the client device executing the request, since the clients ignore
a websocket notification when the uuid matches their own. This also
fixes some issues with the Desktop client, which can modify attachments
within the same screen, causing an issue when saving the attachment afterwards.

Also changed the way removed attachments are handled, since they caused
an error when saving the vault cipher afterwards, complaining about a
missing attachment. Bitwarden ignores this and continues with the
remaining attachments (if any). This also fixes #2591.

Furthermore, some more websocket notifications have been added to other
functions, which enhances the user experience.

- Logout users when deauthed, changed password, rotated keys
- Trigger OrgSyncKeys on user confirm and removal
- Added some extras to the Send feature

Also renamed UpdateTypes to match Bitwarden naming.
2022-12-31 20:39:53 +01:00
Alex Martel
a6d09407b9 Remove patched multer-rs 2022-12-29 12:09:53 -05:00
pjsier
f2e9ddef4e Log message to stderr if LOG_FILE is not writable
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
2022-12-29 07:21:04 -06:00
BlackDex
ca417d3257 Validate YUBICO_SERVER string (#3003)
If the `YUBICO_SERVER` is defined as an empty string, the whole yubikey
implementation doesn't work anymore.

This PR adds a check that this variable at least starts with `https://`.

Resolves #3003
2022-12-29 12:45:18 +01:00
150 changed files with 17362 additions and 11683 deletions

View File

@@ -30,6 +30,10 @@
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Database timeout
## Timeout when acquiring database connection
# DATABASE_TIMEOUT=30
## Database connection initialization
## Allows SQL statements to be run whenever a new database connection is created.
## This is mainly useful for connection-scoped pragmas.
@@ -72,6 +76,13 @@
# WEBSOCKET_ADDRESS=0.0.0.0
# WEBSOCKET_PORT=3012
## Enables push notifications (requires key and id from https://bitwarden.com/host)
# PUSH_ENABLED=true
# PUSH_INSTALLATION_ID=CHANGEME
# PUSH_INSTALLATION_KEY=CHANGEME
## Don't change this unless you know what you're doing.
# PUSH_RELAY_URI=https://push.bitwarden.com
## Controls whether users are allowed to create Bitwarden Sends.
## This setting applies globally to all users.
## To control this on a per-org basis instead, use the "Disable Send" org policy.
@@ -259,9 +270,15 @@
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Token for the admin interface, preferably use a long random string
## One option is to use 'openssl rand -base64 48'
## Token for the admin interface, preferably an Argon2 PHC string
## Vaultwarden has a built-in generator by calling `vaultwarden hash`
## For details see: https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page#secure-the-admin_token
## If not set, the admin panel is disabled
## New Argon2 PHC string
## Note that for some environments, like docker-compose you need to escape all the dollar signs `$` with an extra dollar sign like `$$`
## Also, use single quotes (') instead of double quotes (") to enclose the string when needed
# ADMIN_TOKEN='$argon2id$v=19$m=65540,t=3,p=4$MmeKRnGK5RW5mJS7h3TOL89GrpLPXJPAtTK8FTqj9HM$DqsstvoSAETl9YhnsXbf43WeaUwJC6JhViIvuPoig78'
## Old plain text string (Will generate warnings in favor of Argon2)
# ADMIN_TOKEN=Vy2VyYTTsKPv8W5aEOWUbB/Bt3DEKePbHmI4m9VcemUMS2rEviDowNAFqYi1xjmp
## Enable this to bypass the admin panel security. This option is only
@@ -298,9 +315,9 @@
## This setting applies globally to all users.
# INCOMPLETE_2FA_TIME_LIMIT=3
## Controls the PBKDF2 password iterations to apply on the server
## The change only applies when the password is changed
# PASSWORD_ITERATIONS=100000
## Number of server-side password hashing iterations for the password hash.
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=350000
## Controls whether users can set password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
@@ -335,6 +352,9 @@
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
## Set the lifetime of admin sessions to this value (in minutes).
# ADMIN_SESSION_LIFETIME=20
## Yubico (Yubikey) Settings
## Set your Client ID and Secret Key for Yubikey OTP
## You can generate it here: https://upgrade.yubico.com/getapikey/
@@ -373,7 +393,7 @@
# ROCKET_WORKERS=10
# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
## Mail specific settings, set SMTP_HOST and SMTP_FROM to enable the mail service.
## Mail specific settings, set SMTP_FROM and either SMTP_HOST or USE_SENDMAIL to enable the mail service.
## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory
# SMTP_HOST=smtp.domain.tld
@@ -385,6 +405,11 @@
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15
# Whether to send mail via the `sendmail` command
# USE_SENDMAIL=false
# Which sendmail command to use. The one found in the $PATH is used if not specified.
# SENDMAIL_COMMAND="/path/to/sendmail"
## Defaults for SSL are "Plain" and "Login", and nothing for non-SSL connections.
## Possible values: ["Plain", "Login", "Xoauth2"].
## Multiple options need to be separated by a comma ','.
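
For the new `ADMIN_TOKEN` format above, a hedged sketch of producing an Argon2 PHC string with the `argon2` crate from Cargo.toml (Vaultwarden provides `vaultwarden hash` for this; the token value below is a placeholder):

```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
    Argon2,
};

fn main() {
    let salt = SaltString::generate(&mut OsRng);
    // The result is a self-describing PHC string of the form
    // `$argon2id$v=19$m=...,t=...,p=...$<salt>$<hash>`.
    let phc = Argon2::default()
        .hash_password(b"my-admin-token", &salt)
        .expect("hashing failed")
        .to_string();
    println!("ADMIN_TOKEN='{phc}'");
}
```

Remember the note above: in docker-compose, every `$` in the PHC string needs to be escaped as `$$`.
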

View File

@@ -9,6 +9,8 @@ on:
- "Cargo.*"
- "build.rs"
- "rust-toolchain"
- "rustfmt.toml"
- "diesel.toml"
pull_request:
paths:
- ".github/workflows/build.yml"
@@ -17,51 +19,59 @@ on:
- "Cargo.*"
- "build.rs"
- "rust-toolchain"
- "rustfmt.toml"
- "diesel.toml"
jobs:
build:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 120
# Make warnings errors, this is to prevent warnings slipping through.
# This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes.
env:
RUSTFLAGS: "-D warnings"
CARGO_REGISTRIES_CRATES_IO_PROTOCOL: sparse
strategy:
fail-fast: false
matrix:
channel:
- "rust-toolchain" # The version defined in rust-toolchain
- "msrv" # The supported MSRV
include:
- channel: "msrv"
version: "1.60.0"
name: Build and Test ${{ matrix.channel }}
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
# End Checkout the repo
# Install dependencies
- name: "Install dependencies Ubuntu"
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkg-config
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl build-essential libmariadb-dev-compat libpq-dev libssl-dev pkg-config
# End Install dependencies
# Determine rust-toolchain version
- name: Init Variables
id: toolchain
shell: bash
if: ${{ matrix.channel == 'rust-toolchain' }}
run: |
if [[ "${{ matrix.channel }}" == 'rust-toolchain' ]]; then
RUST_TOOLCHAIN="$(cat rust-toolchain)"
elif [[ "${{ matrix.channel }}" == 'msrv' ]]; then
RUST_TOOLCHAIN="$(grep -oP 'rust-version.*"(\K.*?)(?=")' Cargo.toml)"
else
RUST_TOOLCHAIN="${{ matrix.channel }}"
fi
echo "RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" | tee -a "${GITHUB_OUTPUT}"
# End Determine rust-toolchain version
# Uses the rust-toolchain file to determine version
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@55c7845fad90d0ae8b2e83715cb900e5e861e8cb # master @ 2022-10-25 - 21:40 GMT+2
uses: dtolnay/rust-toolchain@b44cb146d03e8d870c57ab64b80f04586349ca5d # master @ 2023-03-28 - 06:32 GMT+2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -69,17 +79,22 @@ jobs:
# End Uses the rust-toolchain file to determine version
# Install the MSRV channel to be used
# Install any other channel to be used, for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@55c7845fad90d0ae8b2e83715cb900e5e861e8cb # master @ 2022-10-25 - 21:40 GMT+2
uses: dtolnay/rust-toolchain@b44cb146d03e8d870c57ab64b80f04586349ca5d # master @ 2023-03-28 - 06:32 GMT+2
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: ${{ matrix.version }}
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
# End Install the MSRV channel to be used
# Enable Rust Caching
- uses: Swatinem/rust-cache@359a70e43a0bb8a13953b04a90f76428b4959bb6 # v2.2.0
- uses: Swatinem/rust-cache@dd05243424bd5c0e585e4b55eb2d7615cdd32f1f # v2.5.1
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.
# Only update when really needed! Use a <year>.<month>[.<inc>] format.
prefix-key: "v2023.07-rust"
# End Enable Rust Caching
@@ -184,7 +199,7 @@ jobs:
# Upload artifact to Github Actions
- name: "Upload artifact"
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
name: vaultwarden

View File

@@ -8,12 +8,12 @@ on: [
jobs:
hadolint:
name: Validate Dockerfile syntax
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
# End Checkout the repo

View File

@@ -24,7 +24,7 @@ jobs:
# Some checks to determine if we need to continue with building a new docker.
# We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
skip_check:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
@@ -38,7 +38,7 @@ jobs:
if: ${{ startsWith(github.ref, 'refs/heads/') }}
docker-build:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 120
needs: skip_check
# Start a local docker registry to be used to generate multi-arch images.
@@ -48,11 +48,23 @@ jobs:
ports:
- 5000:5000
env:
DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speed up building!
# DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
# Use BuildKit (https://docs.docker.com/build/buildkit/) for better
# build performance and the ability to copy extended file attributes
# (e.g., for executable capabilities) across build phases.
DOCKER_BUILDKIT: 1
SOURCE_COMMIT: ${{ github.sha }}
SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
# The *_REPO variables need to be configured as repository variables
# Append `/settings/variables/actions` to your repo url
# DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
# Check for Docker hub credentials in secrets
HAVE_DOCKERHUB_LOGIN: ${{ vars.DOCKERHUB_REPO != '' && secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
# GHCR_REPO needs to be 'ghcr.io/<user>/<repo>'
# Check for Github credentials in secrets
HAVE_GHCR_LOGIN: ${{ vars.GHCR_REPO != '' && github.repository_owner != '' && secrets.GITHUB_TOKEN != '' }}
# QUAY_REPO needs to be 'quay.io/<user>/<repo>'
# Check for Quay.io credentials in secrets
HAVE_QUAY_LOGIN: ${{ vars.QUAY_REPO != '' && secrets.QUAY_USERNAME != '' && secrets.QUAY_TOKEN != '' }}
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
strategy:
matrix:
@@ -61,17 +73,10 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
with:
fetch-depth: 0
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Determine Docker Tag
- name: Init Variables
id: vars
@@ -85,34 +90,146 @@ jobs:
fi
# End Determine Docker Tag
- name: Build Debian based images
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc # v2.2.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc # v2.2.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc # v2.2.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_TOKEN }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
# Debian
# Docker Hub
- name: Build Debian based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' }}
if: ${{ matrix.base_image == 'debian' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Push Debian based images
- name: Push Debian based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' }}
if: ${{ matrix.base_image == 'debian' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Build Alpine based images
# GitHub Container Registry
- name: Build Debian based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' && env.HAVE_GHCR_LOGIN == 'true' }}
- name: Push Debian based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' && env.HAVE_GHCR_LOGIN == 'true' }}
# Quay.io
- name: Build Debian based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' && env.HAVE_QUAY_LOGIN == 'true' }}
- name: Push Debian based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' && env.HAVE_QUAY_LOGIN == 'true' }}
# Alpine
# Docker Hub
- name: Build Alpine based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' }}
if: ${{ matrix.base_image == 'alpine' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
- name: Push Alpine based images
- name: Push Alpine based images (docker.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.DOCKERHUB_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' }}
if: ${{ matrix.base_image == 'alpine' && env.HAVE_DOCKERHUB_LOGIN == 'true' }}
# GitHub Container Registry
- name: Build Alpine based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' && env.HAVE_GHCR_LOGIN == 'true' }}
- name: Push Alpine based images (ghcr.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.GHCR_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' && env.HAVE_GHCR_LOGIN == 'true' }}
# Quay.io
- name: Build Alpine based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' && env.HAVE_QUAY_LOGIN == 'true' }}
- name: Push Alpine based images (quay.io)
shell: bash
env:
DOCKER_REPO: "${{ vars.QUAY_REPO }}"
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' && env.HAVE_QUAY_LOGIN == 'true' }}

View File

@@ -3,5 +3,9 @@ ignored:
- DL3008
# disable explicit version for apk install
- DL3018
# disable check for consecutive `RUN` instructions
- DL3059
trustedRegistries:
- docker.io
- ghcr.io
- quay.io

View File

@@ -1,16 +1,20 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.4.0
hooks:
- id: check-yaml
- id: check-json
- id: check-toml
- id: mixed-line-ending
args: ["--fix=no"]
- id: end-of-file-fixer
exclude: "(.*js$|.*css$)"
- id: check-case-conflict
- id: check-merge-conflict
- id: detect-private-key
- id: check-symlinks
- id: forbid-submodules
- repo: local
hooks:
- id: fmt
@@ -36,5 +40,5 @@ repos:
language: system
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
files: (Cargo.toml|Cargo.lock|rust-toolchain|clippy.toml|.*\.rs$)
pass_filenames: false

Cargo.lock (generated, 1995 lines changed)

File diff suppressed because it is too large

View File

@@ -3,12 +3,12 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.60.0"
rust-version = "1.69.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
readme = "README.md"
license = "GPL-3.0-only"
license = "AGPL-3.0-only"
publish = false
build = "build.rs"
@@ -36,70 +36,72 @@ unstable = []
[target."cfg(not(windows))".dependencies]
# Logging
syslog = "6.0.1" # Needs to be v4 until fern is updated
syslog = "6.1.0"
[dependencies]
# Logging
log = "0.4.17"
fern = { version = "0.6.1", features = ["syslog-6"] }
log = "0.4.19"
fern = { version = "0.6.2", features = ["syslog-6"] }
tracing = { version = "0.1.37", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
backtrace = "0.3.67" # Logging panics to logfile instead stderr only
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.6", default-features = false }
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
once_cell = "1.16.0"
once_cell = "1.18.0"
# Numerical libraries
num-traits = "0.2.15"
num-derive = "0.3.3"
num-traits = "0.2.16"
num-derive = "0.4.0"
# Web framework
rocket = { version = "0.5.0-rc.2", features = ["tls", "json"], default-features = false }
rocket = { version = "0.5.0-rc.3", features = ["tls", "json"], default-features = false }
# rocket_ws = { version ="0.1.0-rc.3" }
rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = "ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa" } # v0.5 branch
# WebSockets libraries
tokio-tungstenite = "0.18.0"
rmpv = "1.0.0" # MessagePack library
dashmap = "5.4.0"
tokio-tungstenite = "0.19.0"
rmpv = "1.0.1" # MessagePack library
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.5.0"
# Async futures
futures = "0.3.25"
tokio = { version = "1.23.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
futures = "0.3.28"
tokio = { version = "1.30.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.150", features = ["derive"] }
serde_json = "1.0.89"
serde = { version = "1.0.183", features = ["derive"] }
serde_json = "1.0.104"
# A safe, extensible ORM and Query builder
diesel = { version = "2.0.2", features = ["chrono", "r2d2"] }
diesel_migrations = "2.0.0"
diesel_logger = { version = "0.2.0", optional = true }
diesel = { version = "2.1.0", features = ["chrono", "r2d2"] }
diesel_migrations = "2.1.0"
diesel_logger = { version = "0.3.0", optional = true }
# Bundled SQLite
libsqlite3-sys = { version = "0.25.2", features = ["bundled"], optional = true }
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.26.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.16.20"
# UUID generation
uuid = { version = "1.2.2", features = ["v4"] }
uuid = { version = "1.4.1", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.23", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.1"
time = "0.3.17"
chrono = { version = "0.4.26", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.3"
time = "0.3.25"
# Job scheduler
job_scheduler_ng = "2.0.3"
job_scheduler_ng = "2.0.4"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.3.3"
data-encoding = "2.4.0"
# JWT library
jsonwebtoken = "8.2.0"
jsonwebtoken = "8.3.0"
# TOTP library
totp-lite = "2.0.0"
@@ -110,56 +112,72 @@ yubico = { version = "0.11.0", features = ["online-tokio"], default-features = f
# WebAuthn libraries
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn
url = "2.3.1"
# Handling of URL's for WebAuthn and favicons
url = "2.4.0"
# Email libraries
lettre = { version = "0.10.1", features = ["smtp-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.2.0" # URL encoding library used for URL's in the emails
# Email libraries
lettre = { version = "0.10.4", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.0" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
# Template library
handlebars = { version = "4.3.5", features = ["dir_source"] }
# HTML Template library
handlebars = { version = "4.3.7", features = ["dir_source"] }
# HTTP client
reqwest = { version = "0.11.13", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.18", features = ["stream", "json", "deflate", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
# For favicon extraction from main website
html5gum = "0.5.2"
regex = { version = "1.7.0", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.2.0"
bytes = "1.3.0"
cached = "0.40.0"
# Favicon extraction libraries
html5gum = "0.5.7"
regex = { version = "1.9.3", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.0"
bytes = "1.4.0"
# Cache function results (Used for version check and favicon fetching)
cached = "0.44.0"
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.1"
cookie_store = "0.19.0"
cookie = "0.16.2"
cookie_store = "0.19.1"
# Used by U2F, JWT and Postgres
openssl = "0.10.44"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.56"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.10"
governor = "0.5.1"
paste = "1.0.14"
governor = "0.6.0"
# Check client versions for specific features.
semver = "1.0.14"
semver = "1.0.18"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.32", features = ["secure"], default-features = false, optional = true }
mimalloc = { version = "0.1.37", features = ["secure"], default-features = false, optional = true }
which = "4.4.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.1"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.2.0"
[patch.crates-io]
# Using a patched version of multer-rs (Used by Rocket) to fix attachment/send file uploads
# Issue: https://github.com/dani-garcia/vaultwarden/issues/2644
# Patch: https://github.com/BlackDex/multer-rs/commit/477d16b7fa0f361b5c2a5ba18a5b28bec6d26a8a
multer = { git = "https://github.com/BlackDex/multer-rs", rev = "477d16b7fa0f361b5c2a5ba18a5b28bec6d26a8a" }
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations
[profile.release]
strip = "debuginfo"
lto = "thin"
# Always build argon2 using opt-level 3
# This is a huge speed improvement during testing
[profile.dev.package.argon2]
opt-level = 3
# A little bit of a speedup
[profile.dev]
split-debuginfo = "unpacked"

View File

@@ -1,5 +1,5 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,17 +7,15 @@
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


@@ -3,11 +3,13 @@
📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
---
[![Build](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml/badge.svg)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml)
[![ghcr.io](https://img.shields.io/badge/ghcr.io-download-blue)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/Quay.io-download-blue)](https://quay.io/repository/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden).
@@ -23,23 +25,24 @@ Image is based on [Rust implementation of Bitwarden API](https://github.com/dani
Basically a full implementation of the Bitwarden API is provided, including:
* Organizations support
* Attachments
* Attachments and Send
* Vault API support
* Serving the static files for the Vault interface
* Website icons API
* Authenticator and U2F support
* YubiKey and Duo support
* Emergency Access
## Installation
Pull the docker image and mount a volume from the host for persistent storage:
```sh
docker pull vaultwarden/server:latest
docker run -d --name vaultwarden -v /vw-data/:/data/ -p 80:80 vaultwarden/server:latest
docker run -d --name vaultwarden -v /vw-data/:/data/ --restart unless-stopped -p 80:80 vaultwarden/server:latest
```
This will preserve any persistent data under /vw-data/; you can adapt the path to whatever suits you.
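Before relying on it, you can verify the bind mount and take a quick backup. This is a hedged sketch: the container name and `/vw-data/` path follow the example above, and the `db.sqlite3` filename assumes the default SQLite backend.
```sh
# Show the mounts the running container actually uses
docker inspect --format '{{ json .Mounts }}' vaultwarden

# Consistent SQLite backup plus a tarball of the whole data directory
sqlite3 /vw-data/db.sqlite3 ".backup '/tmp/db-backup.sqlite3'"
tar czf /tmp/vw-data-backup.tar.gz -C / vw-data
```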
**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS.
**IMPORTANT**: Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
This can be configured in [vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
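For example, enabling HTTPS in vaultwarden directly boils down to handing Rocket a certificate and key. A sketch, assuming the wiki's `ROCKET_TLS` approach; the `/path/to/ssl/` files are placeholders you must provide:
```sh
docker run -d --name vaultwarden \
  -v /vw-data/:/data/ \
  -v /path/to/ssl/:/ssl/ \
  -e ROCKET_TLS='{certs="/ssl/certs.pem",key="/ssl/key.pem"}' \
  --restart unless-stopped \
  -p 443:80 \
  vaultwarden/server:latest
```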
@@ -49,9 +52,9 @@ If you have an available domain name, you can get HTTPS certificates with [Let's
See the [vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki) for more information on how to configure and run the vaultwarden server.
## Get in touch
To ask a question, offer suggestions or new features or to get help configuring or installing the software, please [use the forum](https://vaultwarden.discourse.group/).
To ask a question, offer suggestions or request new features, or to get help configuring or installing the software, please use [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions) or [the forum](https://vaultwarden.discourse.group/).
If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure there aren't any similar issues open, though!
If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure you are on the latest version and there aren't any similar issues open, though!
If you prefer to chat, we're usually hanging around at [#vaultwarden:matrix.org](https://matrix.to/#/#vaultwarden:matrix.org) room on Matrix. Feel free to join us!


@@ -72,7 +72,7 @@ fn version_from_git_info() -> Result<String, std::io::Error> {
// Combined version
if let Some(exact) = exact_tag {
Ok(exact)
} else if &branch != "main" && &branch != "master" {
} else if &branch != "main" && &branch != "master" && &branch != "HEAD" {
Ok(format!("{last_tag}-{rev_short} ({branch})"))
} else {
Ok(format!("{last_tag}-{rev_short}"))


@@ -2,40 +2,42 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.66-bullseye" %}
{% set rust_version = "1.71.1" %}
{% set debian_version = "bookworm" %}
{% set alpine_version = "3.17" %}
{% set build_stage_base_image = "docker.io/library/rust:%s-%s" % (rust_version, debian_version) %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-stable-1.66.0" %}
{% set runtime_stage_base_image = "alpine:3.17" %}
{% set build_stage_base_image = "docker.io/blackdex/rust-musl:x86_64-musl-stable-%s-openssl3" % rust_version %}
{% set runtime_stage_base_image = "docker.io/library/alpine:%s" % alpine_version %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.17" %}
{% set build_stage_base_image = "docker.io/blackdex/rust-musl:armv7-musleabihf-stable-%s-openssl3" % rust_version %}
{% set runtime_stage_base_image = "docker.io/balenalib/armv7hf-alpine:%s" % alpine_version %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% elif "armv6" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:3.17" %}
{% set build_stage_base_image = "docker.io/blackdex/rust-musl:arm-musleabi-stable-%s-openssl3" % rust_version %}
{% set runtime_stage_base_image = "docker.io/balenalib/rpi-alpine:%s" % alpine_version %}
{% set package_arch_target = "arm-unknown-linux-musleabi" %}
{% elif "arm64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-stable-1.66.0" %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:3.17" %}
{% set build_stage_base_image = "docker.io/blackdex/rust-musl:aarch64-musl-stable-%s-openssl3" % rust_version %}
{% set runtime_stage_base_image = "docker.io/balenalib/aarch64-alpine:%s" % alpine_version %}
{% set package_arch_target = "aarch64-unknown-linux-musl" %}
{% endif %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:bullseye-slim" %}
{% set runtime_stage_base_image = "docker.io/library/debian:%s-slim" % debian_version %}
{% elif "arm64" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:bullseye" %}
{% set runtime_stage_base_image = "docker.io/balenalib/aarch64-debian:%s" % debian_version %}
{% set package_arch_name = "arm64" %}
{% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:bullseye" %}
{% set runtime_stage_base_image = "docker.io/balenalib/rpi-debian:%s" % debian_version %}
{% set package_arch_name = "armel" %}
{% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:bullseye" %}
{% set runtime_stage_base_image = "docker.io/balenalib/armv7hf-debian:%s" % debian_version %}
{% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
{% set package_cross_compiler = "arm-linux-gnueabihf" %}
@@ -50,7 +52,7 @@
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
{% if "buildx" in target_file %}
{% if "buildkit" in target_file %}
{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
{% else %}
{% set mount_rust_cache = "" %}
@@ -59,8 +61,8 @@
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_version = "v2022.12.0" %}
{% set vault_image_digest = "sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e" %}
{% set vault_version = "v2023.7.1" %}
{% set vault_image_digest = "sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
@@ -70,55 +72,54 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" vaultwarden/web-vault:{{ vault_version }}
# [vaultwarden/web-vault@{{ vault_image_digest }}]
# $ docker pull docker.io/vaultwarden/web-vault:{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" docker.io/vaultwarden/web-vault:{{ vault_version }}
# [docker.io/vaultwarden/web-vault@{{ vault_image_digest }}]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" vaultwarden/web-vault@{{ vault_image_digest }}
# [vaultwarden/web-vault:{{ vault_version }}]
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" docker.io/vaultwarden/web-vault@{{ vault_image_digest }}
# [docker.io/vaultwarden/web-vault:{{ vault_version }}]
#
FROM vaultwarden/web-vault@{{ vault_image_digest }} as vault
FROM docker.io/vaultwarden/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
{% if "alpine" in target_file %}
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
{% if "armv6" in target_file %}
# To be able to build the armv6 image with mimalloc we need to specifically specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/{{ package_arch_target }}/lib/libatomic.a'
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
ENV RUSTFLAGS='-Clink-arg=-latomic'
{% endif %}
{% elif "arm" in target_file %}
#
# Install required build libs for {{ package_arch_name }} architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture {{ package_arch_name }} \
# Install build dependencies for the {{ package_arch_name }} architecture
RUN {{ mount_rust_cache -}} dpkg --add-architecture {{ package_arch_name }} \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev{{ package_arch_prefix }} \
gcc-{{ package_cross_compiler }} \
libc6-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
libmariadb3{{ package_arch_prefix }} \
libmariadb-dev{{ package_arch_prefix }} \
libmariadb-dev-compat{{ package_arch_prefix }} \
gcc-{{ package_cross_compiler }} \
libmariadb3{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libssl-dev{{ package_arch_prefix }} \
#
# Make sure cargo has the right target config
&& echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
@@ -130,16 +131,13 @@ ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}" \
OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% elif "amd64" in target_file %}
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libmariadb-dev \
libpq-dev
{% endif %}
# Creates a dummy project used to grab dependencies
@@ -178,7 +176,6 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
######################## RUNTIME IMAGE ########################
@@ -195,7 +192,6 @@ ENV ROCKET_PROFILE="release" \
{% if "amd64" not in target_file %}
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
{% endif %}
@@ -203,32 +199,23 @@ RUN [ "cross-build-start" ]
RUN mkdir /data \
{% if "alpine" in runtime_stage_base_image %}
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
{% else %}
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
{% if "armv6" in target_file and "alpine" not in target_file %}
# In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink.
# This symlink was there in the buster images, and for some reason this is needed.
# hadolint ignore=DL3059
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
{% endif -%}
{% if "amd64" not in target_file %}
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
{% endif %}
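The `--mount=type=cache` directives above only work under BuildKit, which is why the template targets were renamed from `buildx` to `buildkit`. A hedged usage sketch; the Dockerfile path is an assumption based on the Makefile targets below:
```sh
# Build the amd64 image with BuildKit so the cargo cache mounts take effect
DOCKER_BUILDKIT=1 docker build -t vaultwarden-local -f amd64/Dockerfile.buildkit .
```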


@@ -8,8 +8,8 @@ all: $(OBJECTS)
%/Dockerfile.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx: Dockerfile.j2 render_template
%/Dockerfile.buildkit: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
%/Dockerfile.buildkit.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
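Regenerating a rendered Dockerfile can be done through `make`, or by invoking the renderer directly in the same way the rule does (a sketch; `render_template` is the helper script the Makefile already references):
```sh
# Re-render all Dockerfiles from Dockerfile.j2
make

# Or render a single target by hand, mirroring the rule above
./render_template Dockerfile.j2 '{"target_file":"amd64/Dockerfile.buildkit"}' > amd64/Dockerfile.buildkit
```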


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,21 +34,19 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libpq-dev
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -81,13 +76,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:bullseye-slim
FROM docker.io/library/debian:bookworm-slim
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -98,11 +92,11 @@ ENV ROCKET_PROFILE="release" \
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:x86_64-musl-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.17
FROM docker.io/library/alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -93,10 +92,10 @@ ENV ROCKET_PROFILE="release" \
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,21 +34,19 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
# Install build dependencies
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
libpq-dev
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -81,13 +76,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:bullseye-slim
FROM docker.io/library/debian:bookworm-slim
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -98,11 +92,11 @@ ENV ROCKET_PROFILE="release" \
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:x86_64-musl-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.17
FROM docker.io/library/alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -93,10 +92,10 @@ ENV ROCKET_PROFILE="release" \
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# hadolint ignore=DL3059
# Install build dependencies for the arm64 architecture
RUN dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
gcc-aarch64-linux-gnu \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
libmariadb3:arm64 \
libpq-dev:arm64 \
libpq5:arm64 \
libssl-dev:arm64 \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,34 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:bullseye
FROM docker.io/balenalib/aarch64-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:aarch64-musl-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.17
FROM docker.io/balenalib/aarch64-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -89,18 +88,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture arm64 \
# Install build dependencies for the arm64 architecture
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
gcc-aarch64-linux-gnu \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
libmariadb3:arm64 \
libpq-dev:arm64 \
libpq5:arm64 \
libssl-dev:arm64 \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,34 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:bullseye
FROM docker.io/balenalib/aarch64-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:aarch64-musl-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.17
FROM docker.io/balenalib/aarch64-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -89,18 +88,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armel architecture
RUN dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
gcc-arm-linux-gnueabi \
libc6-dev:armel \
libpq5:armel \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
libmariadb3:armel \
libpq-dev:armel \
libpq5:armel \
libssl-dev:armel \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,39 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:bullseye
FROM docker.io/balenalib/rpi-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink.
# This symlink was there in the buster images, and for some reason this is needed.
# hadolint ignore=DL3059
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:arm-musleabi-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,15 +34,18 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# To be able to build the armv6 image with mimalloc we need to specifically specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
ENV RUSTFLAGS='-Clink-arg=-latomic'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -77,13 +77,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.17
FROM docker.io/balenalib/rpi-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -91,18 +90,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
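The move from hard-coding the `libatomic.a` path to `-Clink-arg=-latomic` lets the linker find libatomic on its own search path rather than at one fixed location. The same flag can be reproduced outside the image (a sketch; assumes the musl cross toolchain is installed and uses `sqlite` as an example feature):
```sh
RUSTFLAGS='-Clink-arg=-latomic' \
  cargo build --features sqlite --release --target=arm-unknown-linux-musleabi
```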


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture armel \
# Install build dependencies for the armel architecture
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
gcc-arm-linux-gnueabi \
libc6-dev:armel \
libpq5:armel \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
libmariadb3:armel \
libpq-dev:armel \
libpq5:armel \
libssl-dev:armel \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,39 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:bullseye
FROM docker.io/balenalib/rpi-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink.
# This symlink was there in the buster images, and for some reason this is needed.
# hadolint ignore=DL3059
RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:arm-musleabi-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,15 +34,18 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# To be able to build the armv6 image with mimalloc we need to specifically specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
ENV RUSTFLAGS='-Clink-arg=-latomic'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -77,13 +77,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.17
FROM docker.io/balenalib/rpi-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -91,18 +90,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# hadolint ignore=DL3059
# Install build dependencies for the armhf architecture
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
gcc-arm-linux-gnueabihf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
libmariadb3:armhf \
libpq-dev:armhf \
libpq5:armhf \
libssl-dev:armhf \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,34 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:bullseye
FROM docker.io/balenalib/armv7hf-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:armv7-musleabihf-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.17
FROM docker.io/balenalib/armv7hf-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -89,18 +88,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM rust:1.66-bullseye as build
FROM docker.io/library/rust:1.71.1-bookworm as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,28 +34,26 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture armhf \
# Install build dependencies for the armhf architecture
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
gcc-arm-linux-gnueabihf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
libmariadb3:armhf \
libpq-dev:armhf \
libpq5:armhf \
libssl-dev:armhf \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
@@ -71,7 +66,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@@ -101,34 +95,31 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:bullseye
FROM docker.io/balenalib/armv7hf-debian:bookworm
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadb-dev-compat \
libpq5 \
openssl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -2,7 +2,6 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
@@ -16,20 +15,18 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2022.12.0
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2022.12.0
# [vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e]
# $ docker pull docker.io/vaultwarden/web-vault:v2023.7.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.7.1
# [docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e
# [vaultwarden/web-vault:v2022.12.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f
# [docker.io/vaultwarden/web-vault:v2023.7.1]
#
FROM vaultwarden/web-vault@sha256:068ac863d52a5626568ae3c7f93a509f87c76b1b15821b101f2707724df9da3e as vault
FROM docker.io/vaultwarden/web-vault@sha256:b306f38fe0d54fa3d79059a737f8e1803da44ddc5f273c2aecdd6a4886211b0f as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.0 as build
FROM docker.io/blackdex/rust-musl:armv7-musleabihf-stable-1.71.1-openssl3 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
@@ -37,13 +34,16 @@ ENV DEBIAN_FRONTEND=noninteractive \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
REGISTRIES_CRATES_IO_PROTOCOL=sparse \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Use PostgreSQL v15 during Alpine/MUSL builds instead of the default v11
# Debian Bookworm already contains libpq v15
ENV PQ_LIB_DIR="/usr/local/musl/pq15/lib"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -75,13 +75,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.17
FROM docker.io/balenalib/armv7hf-alpine:3.17
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -89,18 +88,16 @@ ENV ROCKET_PROFILE="release" \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
ca-certificates \
curl \
ca-certificates
openssl \
tzdata
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data


@@ -1,3 +1,5 @@
#!/usr/bin/env bash
# The default Debian-based images support these arches for all database backends.
arches=(
amd64
@@ -5,7 +7,9 @@ arches=(
armv7
arm64
)
export arches
if [[ "${DOCKER_TAG}" == *alpine ]]; then
distro_suffix=.alpine
fi
export distro_suffix


@@ -1,7 +1,8 @@
#!/bin/bash
#!/usr/bin/env bash
echo ">>> Building images..."
# shellcheck source=arches.sh
source ./hooks/arches.sh
if [[ -z "${SOURCE_COMMIT}" ]]; then
@@ -23,10 +24,10 @@ LABELS=(
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
org.opencontainers.image.created="$(date --utc --iso-8601=seconds)"
org.opencontainers.image.documentation="https://github.com/dani-garcia/vaultwarden/wiki"
org.opencontainers.image.licenses="GPL-3.0-only"
org.opencontainers.image.licenses="AGPL-3.0-only"
org.opencontainers.image.revision="${SOURCE_COMMIT}"
org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}"
org.opencontainers.image.url="https://hub.docker.com/r/${DOCKER_REPO#*/}"
org.opencontainers.image.url="https://github.com/dani-garcia/vaultwarden"
org.opencontainers.image.version="${SOURCE_VERSION}"
)
LABEL_ARGS=()
@@ -34,9 +35,9 @@ for label in "${LABELS[@]}"; do
LABEL_ARGS+=(--label "${label}")
done
# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template
# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildkit as template
if [[ -n "${DOCKER_BUILDKIT}" ]]; then
buildx_suffix=.buildx
buildkit_suffix=.buildkit
fi
set -ex
@@ -45,6 +46,6 @@ for arch in "${arches[@]}"; do
docker build \
"${LABEL_ARGS[@]}" \
-t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
-f "docker/${arch}/Dockerfile${buildkit_suffix}${distro_suffix}" \
.
done


@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -ex


@@ -1,5 +1,6 @@
#!/bin/bash
#!/usr/bin/env bash
# shellcheck source=arches.sh
source ./hooks/arches.sh
export DOCKER_CLI_EXPERIMENTAL=enabled
@@ -41,7 +42,7 @@ LOCAL_REPO="${LOCAL_REGISTRY}/${REPO}"
echo ">>> Pushing images to local registry..."
for arch in ${arches[@]}; do
for arch in "${arches[@]}"; do
docker_image="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
local_image="${LOCAL_REPO}:${DOCKER_TAG}-${arch}"
docker tag "${docker_image}" "${local_image}"
@@ -71,9 +72,9 @@ tags=("${DOCKER_REPO}:${DOCKER_TAG}")
# to make it easier for users to track the latest release.
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
if [[ "${DOCKER_TAG}" == *alpine ]]; then
tags+=(${DOCKER_REPO}:alpine)
tags+=("${DOCKER_REPO}:alpine")
else
tags+=(${DOCKER_REPO}:latest)
tags+=("${DOCKER_REPO}:latest")
fi
fi
@@ -91,10 +92,10 @@ declare -A arch_to_platform=(
[arm64]="linux/arm64"
)
platforms=()
for arch in ${arches[@]}; do
for arch in "${arches[@]}"; do
platforms+=("${arch_to_platform[$arch]}")
done
platforms="$(join "," "${platforms[@]}")"
platform="$(join "," "${platforms[@]}")"
# Run the build, pushing the resulting images and multi-arch manifest list to
# Docker Hub. The Dockerfile is read from stdin to avoid sending any build
@@ -104,46 +105,7 @@ docker buildx build \
--network host \
--build-arg LOCAL_REPO="${LOCAL_REPO}" \
--build-arg DOCKER_TAG="${DOCKER_TAG}" \
--platform "${platforms}" \
--platform "${platform}" \
"${tag_args[@]}" \
--push \
< ./docker/Dockerfile.buildx
# Add an extra arch-specific tag for `arm32v6`; Docker can't seem to properly
# auto-select that image on ARMv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
#
# Note that we use `arm32v6` instead of `armv6` to be consistent with the
# existing vaultwarden tags, which adhere to the naming conventions of the
# Docker per-architecture repos (e.g., https://hub.docker.com/u/arm32v6).
# Unfortunately, these per-arch repo names aren't always consistent with the
# corresponding platform (OS/arch/variant) IDs, particularly in the case of
# 32-bit ARM arches (e.g., `linux/arm/v6` is used, not `linux/arm32/v6`).
#
# TODO: It looks like this issue should be fixed starting in Docker 20.10.0,
# so this step can be removed once fixed versions are in wider distribution.
#
# Tags:
#
# testing => testing-arm32v6
# testing-alpine => <ignored>
# x.y.z => x.y.z-arm32v6, latest-arm32v6
# x.y.z-alpine => <ignored>
#
if [[ "${DOCKER_TAG}" != *alpine ]]; then
image="${DOCKER_REPO}":"${DOCKER_TAG}"
# Fetch the multi-arch manifest list and find the digest of the armv6 image.
filter='.manifests|.[]|select(.platform.architecture=="arm" and .platform.variant=="v6")|.digest'
digest="$(docker manifest inspect "${image}" | jq -r "${filter}")"
# Pull the armv6 image by digest, retag it, and repush it.
docker pull "${DOCKER_REPO}"@"${digest}"
docker tag "${DOCKER_REPO}"@"${digest}" "${image}"-arm32v6
docker push "${image}"-arm32v6
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
docker tag "${image}"-arm32v6 "${DOCKER_REPO}:latest"-arm32v6
docker push "${DOCKER_REPO}:latest"-arm32v6
fi
fi


@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;


@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color VARCHAR(7);


@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;


@@ -0,0 +1 @@
ALTER TABLE devices ADD COLUMN push_uuid TEXT;


@@ -0,0 +1,10 @@
CREATE TABLE organization_api_key (
uuid CHAR(36) NOT NULL,
org_uuid CHAR(36) NOT NULL REFERENCES organizations(uuid),
atype INTEGER NOT NULL,
api_key VARCHAR(255) NOT NULL,
revision_date DATETIME NOT NULL,
PRIMARY KEY(uuid, org_uuid)
);
ALTER TABLE users ADD COLUMN external_id TEXT;


@@ -0,0 +1,19 @@
CREATE TABLE auth_requests (
uuid CHAR(36) NOT NULL PRIMARY KEY,
user_uuid CHAR(36) NOT NULL,
organization_uuid CHAR(36),
request_device_identifier CHAR(36) NOT NULL,
device_type INTEGER NOT NULL,
request_ip TEXT NOT NULL,
response_device_id CHAR(36),
access_code TEXT NOT NULL,
public_key TEXT NOT NULL,
enc_key TEXT NOT NULL,
master_password_hash TEXT NOT NULL,
approved BOOLEAN,
creation_date DATETIME NOT NULL,
response_date DATETIME,
authentication_date DATETIME,
FOREIGN KEY(user_uuid) REFERENCES users(uuid),
FOREIGN KEY(organization_uuid) REFERENCES organizations(uuid)
);
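These columns back the new auth-request ("login with device") flow: a requesting device publishes a public key and an access code, and an approving device answers with the encrypted key material. As a rough sketch, each row maps onto a Rust model shaped like the following; the field names are inferred from the columns above, and the actual model in the source tree is not part of this diff:
use chrono::NaiveDateTime;

// Hypothetical mirror of an auth_requests row; the real Diesel model is not shown in this diff.
pub struct AuthRequest {
    pub uuid: String,
    pub user_uuid: String,
    pub organization_uuid: Option<String>,
    pub request_device_identifier: String,
    pub device_type: i32,
    pub request_ip: String,
    pub response_device_id: Option<String>,
    pub access_code: String,
    pub public_key: String,
    pub enc_key: String,
    pub master_password_hash: String,
    pub approved: Option<bool>,
    pub creation_date: NaiveDateTime,
    pub response_date: Option<NaiveDateTime>,
    pub authentication_date: Option<NaiveDateTime>,
}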


@@ -0,0 +1 @@
ALTER TABLE collections ADD COLUMN external_id TEXT;


@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;


@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color TEXT;


@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;


@@ -0,0 +1 @@
ALTER TABLE devices ADD COLUMN push_uuid TEXT;


@@ -0,0 +1,10 @@
CREATE TABLE organization_api_key (
uuid CHAR(36) NOT NULL,
org_uuid CHAR(36) NOT NULL REFERENCES organizations(uuid),
atype INTEGER NOT NULL,
api_key VARCHAR(255),
revision_date TIMESTAMP NOT NULL,
PRIMARY KEY(uuid, org_uuid)
);
ALTER TABLE users ADD COLUMN external_id TEXT;


@@ -0,0 +1,19 @@
CREATE TABLE auth_requests (
uuid CHAR(36) NOT NULL PRIMARY KEY,
user_uuid CHAR(36) NOT NULL,
organization_uuid CHAR(36),
request_device_identifier CHAR(36) NOT NULL,
device_type INTEGER NOT NULL,
request_ip TEXT NOT NULL,
response_device_id CHAR(36),
access_code TEXT NOT NULL,
public_key TEXT NOT NULL,
enc_key TEXT NOT NULL,
master_password_hash TEXT NOT NULL,
approved BOOLEAN,
creation_date TIMESTAMP NOT NULL,
response_date TIMESTAMP,
authentication_date TIMESTAMP,
FOREIGN KEY(user_uuid) REFERENCES users(uuid),
FOREIGN KEY(organization_uuid) REFERENCES organizations(uuid)
);


@@ -0,0 +1 @@
ALTER TABLE collections ADD COLUMN external_id TEXT;


@@ -0,0 +1,2 @@
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;


@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN avatar_color TEXT;


@@ -0,0 +1,7 @@
ALTER TABLE users
ADD COLUMN
client_kdf_memory INTEGER DEFAULT NULL;
ALTER TABLE users
ADD COLUMN
client_kdf_parallelism INTEGER DEFAULT NULL;


@@ -0,0 +1 @@
ALTER TABLE devices ADD COLUMN push_uuid TEXT;


@@ -0,0 +1,11 @@
CREATE TABLE organization_api_key (
uuid TEXT NOT NULL,
org_uuid TEXT NOT NULL,
atype INTEGER NOT NULL,
api_key TEXT NOT NULL,
revision_date DATETIME NOT NULL,
PRIMARY KEY(uuid, org_uuid),
FOREIGN KEY(org_uuid) REFERENCES organizations(uuid)
);
ALTER TABLE users ADD COLUMN external_id TEXT;


@@ -0,0 +1,19 @@
CREATE TABLE auth_requests (
uuid TEXT NOT NULL PRIMARY KEY,
user_uuid TEXT NOT NULL,
organization_uuid TEXT,
request_device_identifier TEXT NOT NULL,
device_type INTEGER NOT NULL,
request_ip TEXT NOT NULL,
response_device_id TEXT,
access_code TEXT NOT NULL,
public_key TEXT NOT NULL,
enc_key TEXT NOT NULL,
master_password_hash TEXT NOT NULL,
approved BOOLEAN,
creation_date DATETIME NOT NULL,
response_date DATETIME,
authentication_date DATETIME,
FOREIGN KEY(user_uuid) REFERENCES users(uuid),
FOREIGN KEY(organization_uuid) REFERENCES organizations(uuid)
);


@@ -0,0 +1 @@
ALTER TABLE collections ADD COLUMN external_id TEXT;


@@ -1 +1 @@
1.66.0
1.71.1


@@ -1,7 +1,4 @@
# version = "Two"
edition = "2021"
max_width = 120
newline_style = "Unix"
use_small_heuristics = "Off"
# struct_lit_single_line = false
# overflow_delimited_expr = true


@@ -13,7 +13,7 @@ use rocket::{
};
use crate::{
api::{core::log_event, ApiResult, EmptyResult, JsonResult, NumberOrString},
api::{core::log_event, unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
@@ -33,8 +33,10 @@ pub fn routes() -> Vec<Route> {
routes![
get_users_json,
get_user_json,
get_user_by_mail_json,
post_admin_login,
admin_page,
admin_page_login,
invite_user,
logout,
delete_user,
@@ -52,7 +54,8 @@ pub fn routes() -> Vec<Route> {
organizations_overview,
delete_organization,
diagnostics,
get_diagnostics_config
get_diagnostics_config,
resend_user_invite,
]
}
@@ -144,7 +147,6 @@ fn render_admin_login(msg: Option<&str>, redirect: Option<String>) -> ApiResult<
let msg = msg.map(|msg| format!("Error: {msg}"));
let json = json!({
"page_content": "admin/login",
"version": VERSION,
"error": msg,
"redirect": redirect,
"urlpath": CONFIG.domain_path()
@@ -184,7 +186,7 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
let cookie = Cookie::build(COOKIE_NAME, jwt)
.path(admin_path())
.max_age(rocket::time::Duration::minutes(20))
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
@@ -201,6 +203,19 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
fn _validate_token(token: &str) -> bool {
match CONFIG.admin_token().as_ref() {
None => false,
Some(t) if t.starts_with("$argon2") => {
use argon2::password_hash::PasswordVerifier;
match argon2::password_hash::PasswordHash::new(t) {
Ok(h) => {
// NOTE: hash params from `ADMIN_TOKEN` are used instead of what is configured in the `Argon2` instance.
argon2::Argon2::default().verify_password(token.trim().as_ref(), &h).is_ok()
}
Err(e) => {
error!("The configured Argon2 PHC in `ADMIN_TOKEN` is invalid: {e}");
false
}
}
}
Some(t) => crate::crypto::ct_eq(t.trim(), token.trim()),
}
}
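With this change, `ADMIN_TOKEN` may also hold an Argon2 PHC string (any value starting with `$argon2`); verification then uses the parameters embedded in that string rather than a plain constant-time compare. A minimal sketch of producing such a PHC string with the `argon2` crate follows; whether Vaultwarden ships its own helper command for this is not shown in this diff, so the function below is an assumption:
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
    Argon2,
};

// Hypothetical helper: produce an Argon2id PHC string suitable for `ADMIN_TOKEN`.
// `_validate_token` above verifies against the parameters embedded in this string.
fn hash_admin_token(token: &str) -> Result<String, argon2::password_hash::Error> {
    let salt = SaltString::generate(&mut OsRng);
    Ok(Argon2::default().hash_password(token.trim().as_bytes(), &salt)?.to_string())
}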
@@ -208,34 +223,16 @@ fn _validate_token(token: &str) -> bool {
#[derive(Serialize)]
struct AdminTemplateData {
page_content: String,
version: Option<&'static str>,
page_data: Option<Value>,
config: Value,
can_backup: bool,
logged_in: bool,
urlpath: String,
}
impl AdminTemplateData {
fn new() -> Self {
Self {
page_content: String::from("admin/settings"),
version: VERSION,
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
page_data: None,
}
}
fn with_data(page_content: &str, page_data: Value) -> Self {
fn new(page_content: &str, page_data: Value) -> Self {
Self {
page_content: String::from(page_content),
version: VERSION,
page_data: Some(page_data),
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
}
@@ -247,7 +244,11 @@ impl AdminTemplateData {
}
fn render_admin_page() -> ApiResult<Html<String>> {
let text = AdminTemplateData::new().render()?;
let settings_json = json!({
"config": CONFIG.prepare_json(),
"can_backup": *CAN_BACKUP,
});
let text = AdminTemplateData::new("admin/settings", settings_json).render()?;
Ok(Html(text))
}
@@ -256,6 +257,11 @@ fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
render_admin_page()
}
#[get("/", rank = 2)]
fn admin_page_login() -> ApiResult<Html<String>> {
render_admin_login(None, None)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct InviteData {
@@ -314,8 +320,9 @@ fn logout(cookies: &CookieJar<'_>) -> Redirect {
#[get("/users")]
async fn get_users_json(_token: AdminToken, mut conn: DbConn) -> Json<Value> {
let mut users_json = Vec::new();
for u in User::get_all(&mut conn).await {
let users = User::get_all(&mut conn).await;
let mut users_json = Vec::with_capacity(users.len());
for u in users {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
@@ -327,8 +334,9 @@ async fn get_users_json(_token: AdminToken, mut conn: DbConn) -> Json<Value> {
#[get("/users/overview")]
async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let mut users_json = Vec::new();
for u in User::get_all(&mut conn).await {
let users = User::get_all(&mut conn).await;
let mut users_json = Vec::with_capacity(users.len());
for u in users {
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
@@ -342,13 +350,25 @@ async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<
users_json.push(usr);
}
let text = AdminTemplateData::with_data("admin/users", json!(users_json)).render()?;
let text = AdminTemplateData::new("admin/users", json!(users_json)).render()?;
Ok(Html(text))
}
#[get("/users/by-mail/<mail>")]
async fn get_user_by_mail_json(mail: &str, _token: AdminToken, mut conn: DbConn) -> JsonResult {
if let Some(u) = User::find_by_mail(mail, &mut conn).await {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[get("/users/<uuid>")]
async fn get_user_json(uuid: String, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(&uuid, &mut conn).await?;
async fn get_user_json(uuid: &str, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(uuid, &mut conn).await?;
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
@@ -356,21 +376,21 @@ async fn get_user_json(uuid: String, _token: AdminToken, mut conn: DbConn) -> Js
}
#[post("/users/<uuid>/delete")]
async fn delete_user(uuid: String, _token: AdminToken, mut conn: DbConn, ip: ClientIp) -> EmptyResult {
let user = get_user_or_404(&uuid, &mut conn).await?;
async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let user = get_user_or_404(uuid, &mut conn).await?;
// Get the user_org records before deleting the actual user
let user_orgs = UserOrganization::find_any_state_by_user(&uuid, &mut conn).await;
let user_orgs = UserOrganization::find_any_state_by_user(uuid, &mut conn).await;
let res = user.delete(&mut conn).await;
for user_org in user_orgs {
log_event(
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
user_org.org_uuid,
&user_org.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&ip.ip,
&token.ip.ip,
&mut conn,
)
.await;
@@ -380,8 +400,20 @@ async fn delete_user(uuid: String, _token: AdminToken, mut conn: DbConn, ip: Cli
}
#[post("/users/<uuid>/deauth")]
async fn deauth_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
nt.send_logout(&user, None).await;
if CONFIG.push_enabled() {
for device in Device::find_push_devices_by_user(&user.uuid, &mut conn).await {
match unregister_push_device(device.uuid).await {
Ok(r) => r,
Err(e) => error!("Unable to unregister devices from Bitwarden server: {}", e),
};
}
}
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
@@ -389,31 +421,53 @@ async fn deauth_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> Empt
}
#[post("/users/<uuid>/disable")]
async fn disable_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
async fn disable_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.enabled = false;
user.save(&mut conn).await
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[post("/users/<uuid>/enable")]
async fn enable_user(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
user.enabled = true;
user.save(&mut conn).await
}
#[post("/users/<uuid>/remove-2fa")]
async fn remove_2fa(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &mut conn).await?;
async fn remove_2fa(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
user.totp_recover = None;
user.save(&mut conn).await
}
#[post("/users/<uuid>/invite/resend")]
async fn resend_user_invite(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(uuid, &mut conn).await {
// TODO: replace this with a user.status check once available (PR#3397)
if !user.password_hash.is_empty() {
err_code!("User already accepted invitation", Status::BadRequest.code);
}
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
} else {
Ok(())
}
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[derive(Deserialize, Debug)]
struct UserOrgTypeData {
user_type: NumberOrString,
@@ -422,12 +476,7 @@ struct UserOrgTypeData {
}
#[post("/users/org_type", data = "<data>")]
async fn update_user_org_type(
data: Json<UserOrgTypeData>,
_token: AdminToken,
mut conn: DbConn,
ip: ClientIp,
) -> EmptyResult {
async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit =
@@ -442,7 +491,7 @@ async fn update_user_org_type(
};
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
// Removing owner permmission, check that there is at least one other confirmed owner
// Removing owner permission, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&data.org_uuid, UserOrgType::Owner, &mut conn).await <= 1 {
err!("Can't change the type of the last owner")
}
@@ -465,10 +514,10 @@ async fn update_user_org_type(
log_event(
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
data.org_uuid,
&data.org_uuid,
String::from(ACTING_ADMIN_USER),
14, // Use UnknownBrowser type
&ip.ip,
&token.ip.ip,
&mut conn,
)
.await;
@@ -484,23 +533,27 @@ async fn update_revision_users(_token: AdminToken, mut conn: DbConn) -> EmptyRes
#[get("/organizations/overview")]
async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<String>> {
let mut organizations_json = Vec::new();
for o in Organization::get_all(&mut conn).await {
let organizations = Organization::get_all(&mut conn).await;
let mut organizations_json = Vec::with_capacity(organizations.len());
for o in organizations {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &mut conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &mut conn).await);
org["collection_count"] = json!(Collection::count_by_org(&o.uuid, &mut conn).await);
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32));
organizations_json.push(org);
}
let text = AdminTemplateData::with_data("admin/organizations", json!(organizations_json)).render()?;
let text = AdminTemplateData::new("admin/organizations", json!(organizations_json)).render()?;
Ok(Html(text))
}
#[post("/organizations/<uuid>/delete")]
async fn delete_organization(uuid: String, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &mut conn).await.map_res("Organization doesn't exist")?;
async fn delete_organization(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(uuid, &mut conn).await.map_res("Organization doesn't exist")?;
org.delete(&mut conn).await
}
@@ -519,10 +572,20 @@ struct GitCommit {
sha: String,
}
async fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let github_api = get_reqwest_client();
#[derive(Deserialize)]
struct TimeApi {
year: u16,
month: u8,
day: u8,
hour: u8,
minute: u8,
seconds: u8,
}
Ok(github_api.get(url).send().await?.error_for_status()?.json::<T>().await?)
async fn get_json_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let json_api = get_reqwest_client();
Ok(json_api.get(url).send().await?.error_for_status()?.json::<T>().await?)
}
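Since `get_json_api` is now a generic JSON fetcher rather than a GitHub-specific helper, any `DeserializeOwned` type works as the target. An illustrative call, mirroring the release lookup performed further down with the `GitRelease` struct used in this file:
// Illustrative only; duplicates the release check in get_release_info below.
async fn latest_release_tag() -> String {
    match get_json_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest").await {
        Ok(r) => r.tag_name,
        Err(_) => "-".to_string(),
    }
}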
async fn has_http_access() -> bool {
@@ -542,14 +605,13 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access {
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest")
match get_json_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest")
.await
{
Ok(r) => r.tag_name,
_ => "-".to_string(),
},
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main").await
{
match get_json_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main").await {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
@@ -561,7 +623,7 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
if running_within_docker {
"-".to_string()
} else {
match get_github_api::<GitRelease>(
match get_json_api::<GitRelease>(
"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
)
.await
@@ -576,6 +638,24 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
}
}
async fn get_ntp_time(has_http_access: bool) -> String {
if has_http_access {
if let Ok(ntp_time) = get_json_api::<TimeApi>("https://www.timeapi.io/api/Time/current/zone?timeZone=UTC").await
{
return format!(
"{year}-{month:02}-{day:02} {hour:02}:{minute:02}:{seconds:02} UTC",
year = ntp_time.year,
month = ntp_time.month,
day = ntp_time.day,
hour = ntp_time.hour,
minute = ntp_time.minute,
seconds = ntp_time.seconds
);
}
}
String::from("Unable to fetch NTP time.")
}
#[get("/diagnostics")]
async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn) -> ApiResult<Html<String>> {
use chrono::prelude::*;
@@ -604,7 +684,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
// Check if we are able to resolve DNS entries
let dns_resolved = match ("github.com", 0).to_socket_addrs().map(|mut i| i.next()) {
Ok(Some(a)) => a.ip().to_string(),
_ => "Could not resolve domain name.".to_string(),
_ => "Unable to resolve domain name.".to_string(),
};
let (latest_release, latest_commit, latest_web_build) =
@@ -617,13 +697,14 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
"current_release": VERSION,
"latest_release": latest_release,
"latest_commit": latest_commit,
"web_vault_enabled": &CONFIG.web_vault_enabled(),
"web_vault_version": web_vault_version.version,
"web_vault_version": web_vault_version.version.trim_start_matches('v'),
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"docker_base_image": docker_base_image(),
"docker_base_image": if running_within_docker { docker_base_image() } else { "Not applicable" },
"has_http_access": has_http_access,
"ip_header_exists": &ip_header.0.is_some(),
"ip_header_match": ip_header_name == CONFIG.ip_header(),
@@ -634,11 +715,14 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
"db_version": get_sql_server_version(&mut conn).await,
"admin_url": format!("{}/diagnostics", admin_url()),
"overrides": &CONFIG.get_overrides().join(", "),
"host_arch": std::env::consts::ARCH,
"host_os": std::env::consts::OS,
"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the server date/time check as late as possible to minimize the time difference
"ntp_time": get_ntp_time(has_http_access).await, // Run the ntp check as late as possible to minimize the time difference
});
let text = AdminTemplateData::with_data("admin/diagnostics", diagnostics_json).render()?;
let text = AdminTemplateData::new("admin/diagnostics", diagnostics_json).render()?;
Ok(Html(text))
}
@@ -668,36 +752,52 @@ async fn backup_db(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
}
}
pub struct AdminToken {}
pub struct AdminToken {
ip: ClientIp,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for AdminToken {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip,
_ => err_handler!("Error getting Client IP"),
};
if CONFIG.disable_admin_token() {
Outcome::Success(Self {})
Outcome::Success(Self {
ip,
})
} else {
let cookies = request.cookies();
let access_token = match cookies.get(COOKIE_NAME) {
Some(cookie) => cookie.value(),
None => return Outcome::Failure((Status::Unauthorized, "Unauthorized")),
};
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip.ip,
_ => err_handler!("Error getting Client IP"),
None => {
let requested_page =
request.segments::<std::path::PathBuf>(0..).unwrap_or_default().display().to_string();
// When the requested page is empty, it is `/admin`; in that case, Forward so the login page is rendered.
// Otherwise, return a 401 failure, which will be caught.
if requested_page.is_empty() {
return Outcome::Forward(Status::Unauthorized);
} else {
return Outcome::Failure((Status::Unauthorized, "Unauthorized"));
}
}
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
error!("Invalid or expired admin JWT. IP: {}.", ip);
error!("Invalid or expired admin JWT. IP: {}.", &ip.ip);
return Outcome::Failure((Status::Unauthorized, "Session expired"));
}
Outcome::Success(Self {})
Outcome::Success(Self {
ip,
})
}
}
}
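Since `AdminToken` now carries the resolved `ClientIp`, handlers no longer need a separate `ip: ClientIp` argument and read it from the guard instead, as `delete_user` and `update_user_org_type` above now do. A minimal illustrative handler; the route itself is hypothetical and not part of this PR:
// Hypothetical route, for illustration only: the guard supplies both auth and client IP.
#[post("/users/<uuid>/log-access")]
async fn log_access(uuid: &str, token: AdminToken) -> EmptyResult {
    info!("Admin touched user {uuid} from IP {}", token.ip.ip);
    Ok(())
}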


@@ -1,17 +1,24 @@
use crate::db::DbPool;
use chrono::Utc;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{
core::log_user_event, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
core::log_user_event, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientIp, Headers},
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto,
db::{models::*, DbConn},
mail, CONFIG,
};
use rocket::{
http::Status,
request::{FromRequest, Outcome, Request},
};
pub fn routes() -> Vec<rocket::Route> {
routes![
register,
@@ -30,6 +37,7 @@ pub fn routes() -> Vec<rocket::Route> {
post_verify_email_token,
post_delete_recover,
post_delete_recover_token,
post_device_token,
delete_account,
post_delete_account,
revision_date,
@@ -39,6 +47,16 @@ pub fn routes() -> Vec<rocket::Route> {
api_key,
rotate_api_key,
get_known_device,
get_known_device_from_path,
put_avatar,
put_device_token,
put_clear_device_token,
post_clear_device_token,
post_auth_request,
get_auth_request,
put_auth_request,
get_auth_request_response,
get_auth_requests,
]
}
@@ -48,6 +66,8 @@ pub struct RegisterData {
Email: String,
Kdf: Option<i32>,
KdfIterations: Option<i32>,
KdfMemory: Option<i32>,
KdfParallelism: Option<i32>,
Key: String,
Keys: Option<KeysData>,
MasterPasswordHash: String,
@@ -124,7 +144,7 @@ pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> Json
err!("Registration email does not match invite email")
}
} else if Invitation::take(&email, &mut conn).await {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
for user_org in UserOrganization::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
user_org.status = UserOrgStatus::Accepted as i32;
user_org.save(&mut conn).await?;
}
@@ -152,16 +172,18 @@ pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> Json
// Make sure we don't leave a lingering invitation.
Invitation::take(&email, &mut conn).await;
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
}
if let Some(client_kdf_type) = data.Kdf {
user.client_kdf_type = client_kdf_type;
}
user.set_password(&data.MasterPasswordHash, None);
user.akey = data.Key;
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
}
user.client_kdf_memory = data.KdfMemory;
user.client_kdf_parallelism = data.KdfParallelism;
user.set_password(&data.MasterPasswordHash, Some(data.Key), true, None);
user.password_hint = password_hint;
// Add extra fields if present
@@ -228,9 +250,35 @@ async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, mut conn:
Ok(Json(user.to_json(&mut conn).await))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AvatarData {
AvatarColor: Option<String>,
}
#[put("/accounts/avatar", data = "<data>")]
async fn put_avatar(data: JsonUpcase<AvatarData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: AvatarData = data.into_inner().data;
// It looks like only the 6-digit hex color format is supported; the short form does not render.
// Check and force 7 chars, including the #.
if let Some(color) = &data.AvatarColor {
if color.len() != 7 {
err!("The field AvatarColor must be an HTML/hex color code with a length of 7 characters")
}
}
let mut user = headers.user;
user.avatar_color = data.AvatarColor;
user.save(&mut conn).await?;
Ok(Json(user.to_json(&mut conn).await))
}
#[get("/users/<uuid>/public-key")]
async fn get_public_keys(uuid: String, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &mut conn).await {
async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(uuid, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -274,7 +322,7 @@ async fn post_password(
data: JsonUpcase<ChangePassData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> EmptyResult {
let data: ChangePassData = data.into_inner().data;
let mut user = headers.user;
@@ -286,14 +334,24 @@ async fn post_password(
user.password_hint = clean_password_hint(&data.MasterPasswordHint);
enforce_password_hint_setting(&user.password_hint)?;
log_user_event(EventType::UserChangedPassword as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserChangedPassword as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn)
.await;
user.set_password(
&data.NewMasterPasswordHash,
Some(data.Key),
true,
Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
);
user.akey = data.Key;
user.save(&mut conn).await
let save_result = user.save(&mut conn).await;
// Prevent logging out the client from which the user requested this endpoint.
// Logging that client out would cause issues on the client side.
// Adding the device uuid prevents this.
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
#[derive(Deserialize)]
@@ -301,6 +359,8 @@ async fn post_password(
struct ChangeKdfData {
Kdf: i32,
KdfIterations: i32,
KdfMemory: Option<i32>,
KdfParallelism: Option<i32>,
MasterPasswordHash: String,
NewMasterPasswordHash: String,
@@ -308,7 +368,7 @@ struct ChangeKdfData {
}
#[post("/accounts/kdf", data = "<data>")]
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let data: ChangeKdfData = data.into_inner().data;
let mut user = headers.user;
@@ -316,11 +376,42 @@ async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: D
err!("Invalid password")
}
if data.Kdf == UserKdfType::Pbkdf2 as i32 && data.KdfIterations < 100_000 {
err!("PBKDF2 KDF iterations must be at least 100000.")
}
if data.Kdf == UserKdfType::Argon2id as i32 {
if data.KdfIterations < 1 {
err!("Argon2 KDF iterations must be at least 1.")
}
if let Some(m) = data.KdfMemory {
if !(15..=1024).contains(&m) {
err!("Argon2 memory must be between 15 MB and 1024 MB.")
}
user.client_kdf_memory = data.KdfMemory;
} else {
err!("Argon2 memory parameter is required.")
}
if let Some(p) = data.KdfParallelism {
if !(1..=16).contains(&p) {
err!("Argon2 parallelism must be between 1 and 16.")
}
user.client_kdf_parallelism = data.KdfParallelism;
} else {
err!("Argon2 parallelism parameter is required.")
}
} else {
user.client_kdf_memory = None;
user.client_kdf_parallelism = None;
}
user.client_kdf_iter = data.KdfIterations;
user.client_kdf_type = data.Kdf;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&mut conn).await
user.set_password(&data.NewMasterPasswordHash, Some(data.Key), true, None);
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
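The accepted parameter ranges enforced above, collected into a single predicate; an editorial sketch assuming Pbkdf2 = 0 and Argon2id = 1 as in UserKdfType:

fn kdf_params_ok(kdf: i32, iterations: i32, memory: Option<i32>, parallelism: Option<i32>) -> bool {
    match kdf {
        0 => iterations >= 100_000, // PBKDF2
        1 => {
            // Argon2id: memory in MB, parallelism in threads
            iterations >= 1
                && memory.map_or(false, |m| (15..=1024).contains(&m))
                && parallelism.map_or(false, |p| (1..=16).contains(&p))
        }
        _ => false,
    }
}
// kdf_params_ok(1, 3, Some(64), Some(4)) == true
// kdf_params_ok(0, 50_000, None, None)   == false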
#[derive(Deserialize)]
@@ -343,19 +434,19 @@ struct KeyData {
}
#[post("/accounts/key", data = "<data>")]
async fn post_rotatekey(
data: JsonUpcase<KeyData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> EmptyResult {
async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let data: KeyData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
// Validate the import before continuing
// Bitwarden does not process the import if even a single item is invalid.
// Since we check the length of the encrypted notes, we need to pre-validate them here.
// TODO: See if we can optimize the whole cipher adding/importing and prevent duplicate code and checks.
Cipher::validate_notes(&data.Ciphers)?;
let user_uuid = &headers.user.uuid;
// Update folder data
@@ -388,7 +479,8 @@ async fn post_rotatekey(
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted, and triggering an update in that state could cause issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &mut conn, &ip, &nt, UpdateType::None)
// We force the user's clients to log out after the user has been saved, to try and prevent these issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &mut conn, &nt, UpdateType::None)
.await?
}
@@ -399,11 +491,23 @@ async fn post_rotatekey(
user.private_key = Some(data.PrivateKey);
user.reset_security_stamp();
user.save(&mut conn).await
let save_result = user.save(&mut conn).await;
// Prevent logging out the client from which the user called this endpoint.
// If you do log that client out, it will cause issues on the client side.
// Adding the device uuid will prevent this.
nt.send_logout(&user, Some(headers.device.uuid)).await;
save_result
}
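The validate-first approach used above (reject the whole rotation before any write happens) can be expressed generically; a sketch in which everything except the idea is invented:

// All-or-nothing: validate every item before applying any mutation,
// so one invalid item leaves no partial state behind.
fn apply_all<T, E>(
    items: Vec<T>,
    validate: impl Fn(&T) -> Result<(), E>,
    mut apply: impl FnMut(T),
) -> Result<(), E> {
    for item in &items {
        validate(item)?;
    }
    for item in items {
        apply(item);
    }
    Ok(())
}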
#[post("/accounts/security-stamp", data = "<data>")]
async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn post_sstamp(
data: JsonUpcase<PasswordData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let mut user = headers.user;
@@ -413,7 +517,11 @@ async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, mut conn:
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.save(&mut conn).await
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[derive(Deserialize)]
@@ -465,7 +573,12 @@ struct ChangeEmailData {
}
#[post("/accounts/email", data = "<data>")]
async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn post_email(
data: JsonUpcase<ChangeEmailData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: ChangeEmailData = data.into_inner().data;
let mut user = headers.user;
@@ -505,10 +618,13 @@ async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, mut con
user.email_new = None;
user.email_new_token = None;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.set_password(&data.NewMasterPasswordHash, Some(data.Key), true, None);
user.save(&mut conn).await
let save_result = user.save(&mut conn).await;
nt.send_logout(&user, None).await;
save_result
}
#[post("/accounts/verify-email")]
@@ -629,9 +745,9 @@ async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut co
}
#[get("/accounts/revision-date")]
fn revision_date(headers: Headers) -> String {
fn revision_date(headers: Headers) -> JsonResult {
let revision_date = headers.user.updated_at.timestamp_millis();
revision_date.to_string()
Ok(Json(json!(revision_date)))
}
#[derive(Deserialize)]
@@ -674,7 +790,7 @@ async fn password_hint(data: JsonUpcase<PasswordHintData>, mut conn: DbConn) ->
mail::send_password_hint(email, hint).await?;
Ok(())
} else if let Some(hint) = hint {
err!(format!("Your password hint is: {}", hint));
err!(format!("Your password hint is: {hint}"));
} else {
err!(NO_HINT);
}
@@ -696,15 +812,19 @@ async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
pub async fn _prelogin(data: JsonUpcase<PreloginData>, mut conn: DbConn) -> Json<Value> {
let data: PreloginData = data.into_inner().data;
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => (user.client_kdf_type, user.client_kdf_iter),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT),
let (kdf_type, kdf_iter, kdf_mem, kdf_para) = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => (user.client_kdf_type, user.client_kdf_iter, user.client_kdf_memory, user.client_kdf_parallelism),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT, None, None),
};
Json(json!({
let result = json!({
"Kdf": kdf_type,
"KdfIterations": kdf_iter
}))
"KdfIterations": kdf_iter,
"KdfMemory": kdf_mem,
"KdfParallelism": kdf_para,
});
Json(result)
}
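For illustration, the extended prelogin response for an Argon2id account might look like the value below; all numbers are invented, and unknown emails receive the defaults with null memory/parallelism per the None arm above:

fn example_prelogin_response() -> serde_json::Value {
    serde_json::json!({
        "Kdf": 1,             // Argon2id
        "KdfIterations": 3,
        "KdfMemory": 64,      // MB
        "KdfParallelism": 4
    })
}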
// https://github.com/bitwarden/server/blob/master/src/Api/Models/Request/Accounts/SecretVerificationRequestModel.cs
@@ -732,6 +852,8 @@ async fn _api_key(
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
use crate::util::format_date;
let data: SecretVerificationRequest = data.into_inner().data;
let mut user = headers.user;
@@ -746,6 +868,7 @@ async fn _api_key(
Ok(Json(json!({
"ApiKey": user.api_key,
"RevisionDate": format_date(&user.updated_at),
"Object": "apiKey",
})))
}
@@ -760,15 +883,330 @@ async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: He
_api_key(data, true, headers, conn).await
}
// This variant is deprecated: https://github.com/bitwarden/server/pull/2682
#[get("/devices/knowndevice/<email>/<uuid>")]
async fn get_known_device(email: String, uuid: String, mut conn: DbConn) -> String {
async fn get_known_device_from_path(email: &str, uuid: &str, mut conn: DbConn) -> JsonResult {
// This endpoint doesn't have an auth header
if let Some(user) = User::find_by_mail(&email, &mut conn).await {
match Device::find_by_uuid_and_user(&uuid, &user.uuid, &mut conn).await {
Some(_) => String::from("true"),
_ => String::from("false"),
let mut result = false;
if let Some(user) = User::find_by_mail(email, &mut conn).await {
result = Device::find_by_uuid_and_user(uuid, &user.uuid, &mut conn).await.is_some();
}
Ok(Json(json!(result)))
}
#[get("/devices/knowndevice")]
async fn get_known_device(device: KnownDevice, conn: DbConn) -> JsonResult {
get_known_device_from_path(&device.email, &device.uuid, conn).await
}
struct KnownDevice {
email: String,
uuid: String,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for KnownDevice {
type Error = &'static str;
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let email = if let Some(email_b64) = req.headers().get_one("X-Request-Email") {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
return Outcome::Failure((
Status::BadRequest,
"X-Request-Email value failed to decode as base64url",
));
}
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
Err(_) => {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
}
}
} else {
String::from("false")
return Outcome::Failure((Status::BadRequest, "X-Request-Email value is required"));
};
let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
uuid.to_string()
} else {
return Outcome::Failure((Status::BadRequest, "X-Device-Identifier value is required"));
};
Outcome::Success(KnownDevice {
email,
uuid,
})
}
}
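Seen from the client side, the two headers the extractor above expects could be built like this; the function name is invented, but BASE64URL_NOPAD is the same codec the server decodes with:

use data_encoding::BASE64URL_NOPAD;

// Headers for GET /devices/knowndevice: base64url-encoded email plus device id.
fn known_device_headers(email: &str, device_uuid: &str) -> [(String, String); 2] {
    [
        ("X-Request-Email".to_string(), BASE64URL_NOPAD.encode(email.as_bytes())),
        ("X-Device-Identifier".to_string(), device_uuid.to_string()),
    ]
}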
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct PushToken {
PushToken: String,
}
#[post("/devices/identifier/<uuid>/token", data = "<data>")]
async fn post_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, conn: DbConn) -> EmptyResult {
put_device_token(uuid, data, headers, conn).await
}
#[put("/devices/identifier/<uuid>/token", data = "<data>")]
async fn put_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let data = data.into_inner().data;
let token = data.PushToken;
let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await {
Some(device) => device,
None => err!(format!("Error: device {uuid} should be present before a token can be assigned")),
};
device.push_token = Some(token);
if device.push_uuid.is_none() {
device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
}
if let Err(e) = device.save(&mut conn).await {
err!(format!("An error occured while trying to save the device push token: {e}"));
}
if let Err(e) = register_push_device(headers.user.uuid, device).await {
err!(format!("An error occured while proceeding registration of a device: {e}"));
}
Ok(())
}
#[put("/devices/identifier/<uuid>/clear-token")]
async fn put_clear_device_token(uuid: &str, mut conn: DbConn) -> EmptyResult {
// This only clears the push token
// https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109
// https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37
// This does not seem to be implemented in any official app; added in case it is required
if !CONFIG.push_enabled() {
return Ok(());
}
if let Some(device) = Device::find_by_uuid(uuid, &mut conn).await {
Device::clear_push_token_by_uuid(uuid, &mut conn).await?;
unregister_push_device(device.uuid).await?;
}
Ok(())
}
// On the upstream server, both PUT and POST are declared. The POST method is implemented here in case it turns out to be useful somewhere
#[post("/devices/identifier/<uuid>/clear-token")]
async fn post_clear_device_token(uuid: &str, conn: DbConn) -> EmptyResult {
put_clear_device_token(uuid, conn).await
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
struct AuthRequestRequest {
accessCode: String,
deviceIdentifier: String,
email: String,
publicKey: String,
#[serde(alias = "type")]
_type: i32,
}
#[post("/auth-requests", data = "<data>")]
async fn post_auth_request(
data: Json<AuthRequestRequest>,
headers: ClientHeaders,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let user = match User::find_by_mail(&data.email, &mut conn).await {
Some(user) => user,
None => {
err!("AuthRequest doesn't exist")
}
};
let mut auth_request = AuthRequest::new(
user.uuid.clone(),
data.deviceIdentifier.clone(),
headers.device_type,
headers.ip.ip.to_string(),
data.accessCode,
data.publicKey,
);
auth_request.save(&mut conn).await?;
nt.send_auth_request(&user.uuid, &auth_request.uuid, &data.deviceIdentifier, &mut conn).await;
Ok(Json(json!({
"id": auth_request.uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": null,
"masterPasswordHash": null,
"creationDate": auth_request.creation_date.and_utc(),
"responseDate": null,
"requestApproved": false,
"origin": CONFIG.domain_origin(),
"object": "auth-request"
})))
}
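To make the handshake concrete, a request body matching the AuthRequestRequest shape above might look like this; every value is an invented placeholder:

fn example_auth_request_body() -> serde_json::Value {
    serde_json::json!({
        "accessCode": "example-access-code", // random secret generated by the requesting device
        "deviceIdentifier": "11111111-2222-3333-4444-555555555555",
        "email": "user@example.com",
        "publicKey": "base64-encoded-public-key",
        "type": 0
    })
}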
#[get("/auth-requests/<uuid>")]
async fn get_auth_request(uuid: &str, mut conn: DbConn) -> JsonResult {
let auth_request = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
};
let response_date_utc = auth_request.response_date.map(|response_date| response_date.and_utc());
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc(),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
struct AuthResponseRequest {
deviceIdentifier: String,
key: String,
masterPasswordHash: String,
requestApproved: bool,
}
#[put("/auth-requests/<uuid>", data = "<data>")]
async fn put_auth_request(
uuid: &str,
data: Json<AuthResponseRequest>,
mut conn: DbConn,
ant: AnonymousNotify<'_>,
nt: Notify<'_>,
) -> JsonResult {
let data = data.into_inner();
let mut auth_request: AuthRequest = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
};
auth_request.approved = Some(data.requestApproved);
auth_request.enc_key = data.key;
auth_request.master_password_hash = data.masterPasswordHash;
auth_request.response_device_id = Some(data.deviceIdentifier.clone());
auth_request.save(&mut conn).await?;
if auth_request.approved.unwrap_or(false) {
ant.send_auth_response(&auth_request.user_uuid, &auth_request.uuid).await;
nt.send_auth_response(&auth_request.user_uuid, &auth_request.uuid, data.deviceIdentifier, &mut conn).await;
}
let response_date_utc = auth_request.response_date.map(|response_date| response_date.and_utc());
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc(),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
}
#[get("/auth-requests/<uuid>/response?<code>")]
async fn get_auth_request_response(uuid: &str, code: &str, mut conn: DbConn) -> JsonResult {
let auth_request = match AuthRequest::find_by_uuid(uuid, &mut conn).await {
Some(auth_request) => auth_request,
None => {
err!("AuthRequest doesn't exist")
}
};
if !auth_request.check_access_code(code) {
err!("Access code invalid doesn't exist")
}
let response_date_utc = auth_request.response_date.map(|response_date| response_date.and_utc());
Ok(Json(json!(
{
"id": uuid,
"publicKey": auth_request.public_key,
"requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
"requestIpAddress": auth_request.request_ip,
"key": auth_request.enc_key,
"masterPasswordHash": auth_request.master_password_hash,
"creationDate": auth_request.creation_date.and_utc(),
"responseDate": response_date_utc,
"requestApproved": auth_request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
}
)))
}
#[get("/auth-requests")]
async fn get_auth_requests(headers: Headers, mut conn: DbConn) -> JsonResult {
let auth_requests = AuthRequest::find_by_user(&headers.user.uuid, &mut conn).await;
Ok(Json(json!({
"data": auth_requests
.iter()
.filter(|request| request.approved.is_none())
.map(|request| {
let response_date_utc = request.response_date.map(|response_date| response_date.and_utc());
json!({
"id": request.uuid,
"publicKey": request.public_key,
"requestDeviceType": DeviceType::from_i32(request.device_type).to_string(),
"requestIpAddress": request.request_ip,
"key": request.enc_key,
"masterPasswordHash": request.master_password_hash,
"creationDate": request.creation_date.and_utc(),
"responseDate": response_date_utc,
"requestApproved": request.approved,
"origin": CONFIG.domain_origin(),
"object":"auth-request"
})
}).collect::<Vec<Value>>(),
"continuationToken": null,
"object": "list"
})))
}
pub async fn purge_auth_requests(pool: DbPool) {
debug!("Purging auth requests");
if let Ok(mut conn) = pool.get().await {
AuthRequest::purge_expired_auth_requests(&mut conn).await;
} else {
error!("Failed to get DB connection while purging trashed ciphers")
}
}

File diff suppressed because it is too large


@@ -71,10 +71,10 @@ async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult {
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: String, mut conn: DbConn) -> JsonResult {
async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
None => err!("Emergency access not valid."),
}
@@ -93,17 +93,13 @@ struct EmergencyAccessUpdateData {
}
#[put("/emergency-access/<emer_id>", data = "<data>")]
async fn put_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
conn: DbConn,
) -> JsonResult {
async fn put_emergency_access(emer_id: &str, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
post_emergency_access(emer_id, data, conn).await
}
#[post("/emergency-access/<emer_id>", data = "<data>")]
async fn post_emergency_access(
emer_id: String,
emer_id: &str,
data: JsonUpcase<EmergencyAccessUpdateData>,
mut conn: DbConn,
) -> JsonResult {
@@ -111,7 +107,7 @@ async fn post_emergency_access(
let data: EmergencyAccessUpdateData = data.into_inner().data;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
@@ -123,7 +119,9 @@ async fn post_emergency_access(
emergency_access.atype = new_type;
emergency_access.wait_time_days = data.WaitTimeDays;
if data.KeyEncrypted.is_some() {
emergency_access.key_encrypted = data.KeyEncrypted;
}
emergency_access.save(&mut conn).await?;
Ok(Json(emergency_access.to_json()))
@@ -134,12 +132,12 @@ async fn post_emergency_access(
// region delete
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
@@ -153,7 +151,7 @@ async fn delete_emergency_access(emer_id: String, headers: Headers, mut conn: Db
}
#[post("/emergency-access/<emer_id>/delete")]
async fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_delete_emergency_access(emer_id: &str, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn).await
}
@@ -241,7 +239,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
} else {
// Automatically mark user as accepted if no email invites
match User::find_by_mail(&email, &mut conn).await {
Some(user) => match accept_invite_process(user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Some(user) => match accept_invite_process(&user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
},
@@ -253,10 +251,10 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
}
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: String, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -297,7 +295,7 @@ async fn resend_invite(emer_id: String, headers: Headers, mut conn: DbConn) -> E
}
// Automatically mark user as accepted if no email invites
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
@@ -313,12 +311,7 @@ struct AcceptData {
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(
emer_id: String,
data: JsonUpcase<AcceptData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: AcceptData = data.into_inner().data;
@@ -339,7 +332,7 @@ async fn accept_invite(
None => err!("Invited user not found"),
};
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -354,7 +347,7 @@ async fn accept_invite(
&& grantor_user.name == claims.grantor_name
&& grantor_user.email == claims.grantor_email
{
match accept_invite_process(grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
@@ -370,7 +363,7 @@ async fn accept_invite(
}
async fn accept_invite_process(
grantee_uuid: String,
grantee_uuid: &str,
emergency_access: &mut EmergencyAccess,
grantee_email: &str,
conn: &mut DbConn,
@@ -384,7 +377,7 @@ async fn accept_invite_process(
}
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(grantee_uuid);
emergency_access.grantee_uuid = Some(String::from(grantee_uuid));
emergency_access.email = None;
emergency_access.save(conn).await
}
@@ -397,7 +390,7 @@ struct ConfirmData {
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
async fn confirm_emergency_access(
emer_id: String,
emer_id: &str,
data: JsonUpcase<ConfirmData>,
headers: Headers,
mut conn: DbConn,
@@ -408,7 +401,7 @@ async fn confirm_emergency_access(
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -450,11 +443,11 @@ async fn confirm_emergency_access(
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -490,10 +483,10 @@ async fn initiate_emergency_access(emer_id: String, headers: Headers, mut conn:
}
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -528,10 +521,10 @@ async fn approve_emergency_access(emer_id: String, headers: Headers, mut conn: D
}
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
@@ -571,26 +564,33 @@ async fn reject_emergency_access(emer_id: String, headers: Headers, mut conn: Db
// region action
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, headers.user.uuid, EmergencyAccessType::View) {
if !is_valid_request(&emergency_access, &headers.user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
}
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &mut conn).await;
let cipher_sync_data =
CipherSyncData::new(&emergency_access.grantor_uuid, &ciphers, CipherSyncType::User, &mut conn).await;
let cipher_sync_data = CipherSyncData::new(&emergency_access.grantor_uuid, CipherSyncType::User, &mut conn).await;
let mut ciphers_json = Vec::new();
let mut ciphers_json = Vec::with_capacity(ciphers.len());
for c in ciphers {
ciphers_json
.push(c.to_json(&headers.host, &emergency_access.grantor_uuid, Some(&cipher_sync_data), &mut conn).await);
ciphers_json.push(
c.to_json(
&headers.host,
&emergency_access.grantor_uuid,
Some(&cipher_sync_data),
CipherSyncType::User,
&mut conn,
)
.await,
);
}
Ok(Json(json!({
@@ -601,16 +601,16 @@ async fn view_emergency_access(emer_id: String, headers: Headers, mut conn: DbCo
}
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
@@ -619,12 +619,16 @@ async fn takeover_emergency_access(emer_id: String, headers: Headers, mut conn:
None => err!("Grantor user not found."),
};
Ok(Json(json!({
let result = json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KdfMemory": grantor_user.client_kdf_memory,
"KdfParallelism": grantor_user.client_kdf_parallelism,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
})))
});
Ok(Json(result))
}
#[derive(Deserialize)]
@@ -636,7 +640,7 @@ struct EmergencyAccessPasswordData {
#[post("/emergency-access/<emer_id>/password", data = "<data>")]
async fn password_emergency_access(
emer_id: String,
emer_id: &str,
data: JsonUpcase<EmergencyAccessPasswordData>,
headers: Headers,
mut conn: DbConn,
@@ -645,15 +649,15 @@ async fn password_emergency_access(
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
let key = data.Key;
//let key = &data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
@@ -663,8 +667,7 @@ async fn password_emergency_access(
};
// change grantor_user password
grantor_user.set_password(new_master_password_hash, None);
grantor_user.akey = key;
grantor_user.set_password(new_master_password_hash, Some(data.Key), true, None);
grantor_user.save(&mut conn).await?;
// Disable TwoFactor providers since they will otherwise block logins
@@ -682,14 +685,14 @@ async fn password_emergency_access(
// endregion
#[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: String, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn policies_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &mut conn).await {
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
@@ -710,10 +713,11 @@ async fn policies_emergency_access(emer_id: String, headers: Headers, mut conn:
fn is_valid_request(
emergency_access: &EmergencyAccess,
requesting_user_uuid: String,
requesting_user_uuid: &str,
requested_access_type: EmergencyAccessType,
) -> bool {
emergency_access.grantee_uuid == Some(requesting_user_uuid)
emergency_access.grantee_uuid.is_some()
&& emergency_access.grantee_uuid.as_ref().unwrap() == requesting_user_uuid
&& emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
&& emergency_access.atype == requested_access_type as i32
}
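The grantee comparison above can also be written without the is_some()/unwrap() pair; an equivalent formulation, offered as an editorial alternative:

fn grantee_matches(grantee_uuid: Option<&String>, requesting_user_uuid: &str) -> bool {
    grantee_uuid.map_or(false, |uuid| uuid.as_str() == requesting_user_uuid)
}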


@@ -6,7 +6,7 @@ use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcaseVec},
auth::{AdminHeaders, ClientIp, Headers},
auth::{AdminHeaders, Headers},
db::{
models::{Cipher, Event, UserOrganization},
DbConn, DbPool,
@@ -32,7 +32,7 @@ struct EventRange {
// Upstream: https://github.com/bitwarden/server/blob/9ecf69d9cabce732cf2c57976dd9afa5728578fb/src/Api/Controllers/EventsController.cs#LL84C35-L84C41
#[get("/organizations/<org_id>/events?<data..>")]
async fn get_org_events(org_id: String, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
@@ -45,7 +45,7 @@ async fn get_org_events(org_id: String, data: EventRange, _headers: AdminHeaders
parse_date(&data.end)
};
Event::find_by_organization_uuid(&org_id, &start_date, &end_date, &mut conn)
Event::find_by_organization_uuid(org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -60,14 +60,14 @@ async fn get_org_events(org_id: String, data: EventRange, _headers: AdminHeaders
}
#[get("/ciphers/<cipher_id>/events?<data..>")]
async fn get_cipher_events(cipher_id: String, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let mut events_json = Vec::with_capacity(0);
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, &cipher_id, &mut conn).await {
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, cipher_id, &mut conn).await {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
@@ -75,7 +75,7 @@ async fn get_cipher_events(cipher_id: String, data: EventRange, headers: Headers
parse_date(&data.end)
};
events_json = Event::find_by_cipher_uuid(&cipher_id, &start_date, &end_date, &mut conn)
events_json = Event::find_by_cipher_uuid(cipher_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -93,8 +93,8 @@ async fn get_cipher_events(cipher_id: String, data: EventRange, headers: Headers
#[get("/organizations/<org_id>/users/<user_org_id>/events?<data..>")]
async fn get_user_events(
org_id: String,
user_org_id: String,
org_id: &str,
user_org_id: &str,
data: EventRange,
_headers: AdminHeaders,
mut conn: DbConn,
@@ -111,7 +111,7 @@ async fn get_user_events(
parse_date(&data.end)
};
Event::find_by_org_and_user_org(&org_id, &user_org_id, &start_date, &end_date, &mut conn)
Event::find_by_org_and_user_org(org_id, user_org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -161,12 +161,7 @@ struct EventCollection {
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Events/Controllers/CollectController.cs
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
#[post("/collect", format = "application/json", data = "<data>")]
async fn post_events_collect(
data: JsonUpcaseVec<EventCollection>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> EmptyResult {
async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.org_events_enabled() {
return Ok(());
}
@@ -180,7 +175,7 @@ async fn post_events_collect(
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&headers.ip.ip,
&mut conn,
)
.await;
@@ -190,11 +185,11 @@ async fn post_events_collect(
_log_event(
event.Type,
org_uuid,
String::from(org_uuid),
org_uuid,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&headers.ip.ip,
&mut conn,
)
.await;
@@ -207,11 +202,11 @@ async fn post_events_collect(
_log_event(
event.Type,
cipher_uuid,
org_uuid,
&org_uuid,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
&ip.ip,
&headers.ip.ip,
&mut conn,
)
.await;
@@ -267,7 +262,7 @@ async fn _log_user_event(
pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
org_uuid: &str,
act_user_uuid: String,
device_type: i32,
ip: &IpAddr,
@@ -283,7 +278,7 @@ pub async fn log_event(
async fn _log_event(
event_type: i32,
source_uuid: &str,
org_uuid: String,
org_uuid: &str,
act_user_uuid: &str,
device_type: i32,
event_date: Option<NaiveDateTime>,
@@ -319,7 +314,7 @@ async fn _log_event(
_ => {}
}
event.org_uuid = Some(org_uuid);
event.org_uuid = Some(String::from(org_uuid));
event.act_user_uuid = Some(String::from(act_user_uuid));
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());


@@ -24,8 +24,8 @@ async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
}
#[get("/folders/<uuid>")]
async fn get_folder(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
async fn get_folder(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -50,14 +50,14 @@ async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, mut conn:
let mut folder = Folder::new(headers.user.uuid, data.Name);
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderCreate, &folder).await;
nt.send_folder_update(UpdateType::SyncFolderCreate, &folder, &headers.device.uuid, &mut conn).await;
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>", data = "<data>")]
async fn post_folder(
uuid: String,
uuid: &str,
data: JsonUpcase<FolderData>,
headers: Headers,
conn: DbConn,
@@ -68,7 +68,7 @@ async fn post_folder(
#[put("/folders/<uuid>", data = "<data>")]
async fn put_folder(
uuid: String,
uuid: &str,
data: JsonUpcase<FolderData>,
headers: Headers,
mut conn: DbConn,
@@ -76,7 +76,7 @@ async fn put_folder(
) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
let mut folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -88,19 +88,19 @@ async fn put_folder(
folder.name = data.Name;
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderUpdate, &folder).await;
nt.send_folder_update(UpdateType::SyncFolderUpdate, &folder, &headers.device.uuid, &mut conn).await;
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>/delete")]
async fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
async fn delete_folder_post(uuid: &str, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
delete_folder(uuid, headers, conn, nt).await
}
#[delete("/folders/<uuid>")]
async fn delete_folder(uuid: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &mut conn).await {
async fn delete_folder(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -112,6 +112,6 @@ async fn delete_folder(uuid: String, headers: Headers, mut conn: DbConn, nt: Not
// Delete the actual folder entry
folder.delete(&mut conn).await?;
nt.send_folder_update(UpdateType::FolderDelete, &folder).await;
nt.send_folder_update(UpdateType::SyncFolderDelete, &folder, &headers.device.uuid, &mut conn).await;
Ok(())
}


@@ -4,18 +4,18 @@ mod emergency_access;
mod events;
mod folders;
mod organizations;
mod public;
mod sends;
pub mod two_factor;
pub use ciphers::purge_trashed_ciphers;
pub use ciphers::{CipherSyncData, CipherSyncType};
pub use accounts::purge_auth_requests;
pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncType};
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
pub fn routes() -> Vec<Route> {
let mut device_token_routes = routes![clear_device_token, put_device_token];
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
let mut hibp_routes = routes![hibp_breach];
let mut meta_routes = routes![alive, now, version, config];
@@ -29,7 +29,7 @@ pub fn routes() -> Vec<Route> {
routes.append(&mut organizations::routes());
routes.append(&mut two_factor::routes());
routes.append(&mut sends::routes());
routes.append(&mut device_token_routes);
routes.append(&mut public::routes());
routes.append(&mut eq_domains_routes);
routes.append(&mut hibp_routes);
routes.append(&mut meta_routes);
@@ -47,50 +47,17 @@ pub fn events_routes() -> Vec<Route> {
//
// Move this somewhere else
//
use rocket::serde::json::Json;
use rocket::Catcher;
use rocket::Route;
use rocket::{serde::json::Json, Catcher, Route};
use serde_json::Value;
use crate::{
api::{JsonResult, JsonUpcase},
api::{JsonResult, JsonUpcase, Notify, UpdateType},
auth::Headers,
db::DbConn,
error::Error,
util::get_reqwest_client,
};
#[put("/devices/identifier/<uuid>/clear-token")]
fn clear_device_token(uuid: String) -> &'static str {
// This endpoint doesn't have auth header
let _ = uuid;
// uuid is not related to deviceId
// This only clears push token
// https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109
// https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37
""
}
#[put("/devices/identifier/<uuid>/token", data = "<data>")]
fn put_device_token(uuid: String, data: JsonUpcase<Value>, headers: Headers) -> Json<Value> {
let _data: Value = data.into_inner().data;
// Data has a single string value "PushToken"
let _ = uuid;
// uuid is not related to deviceId
// TODO: This should save the push token, but we don't have push functionality
Json(json!({
"Id": headers.device.uuid,
"Name": headers.device.name,
"Type": headers.device.atype,
"Identifier": headers.device.uuid,
"CreationDate": crate::util::format_date(&headers.device.created_at),
}))
}
#[derive(Serialize, Deserialize, Debug)]
#[allow(non_snake_case)]
struct GlobalDomain {
@@ -138,7 +105,12 @@ struct EquivDomainData {
}
#[post("/settings/domains", data = "<data>")]
async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn post_eq_domains(
data: JsonUpcase<EquivDomainData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: EquivDomainData = data.into_inner().data;
let excluded_globals = data.ExcludedGlobalEquivalentDomains.unwrap_or_default();
@@ -152,19 +124,25 @@ async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, mu
user.save(&mut conn).await?;
nt.send_user_update(UpdateType::SyncSettings, &user).await;
Ok(Json(json!({})))
}
#[put("/settings/domains", data = "<data>")]
async fn put_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
post_eq_domains(data, headers, conn).await
async fn put_eq_domains(
data: JsonUpcase<EquivDomainData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
post_eq_domains(data, headers, conn, nt).await
}
#[get("/hibp/breach?<username>")]
async fn hibp_breach(username: String) -> JsonResult {
async fn hibp_breach(username: &str) -> JsonResult {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false&includeUnverified=false",
username
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
@@ -186,7 +164,7 @@ async fn hibp_breach(username: String) -> JsonResult {
"Domain": "haveibeenpwned.com",
"BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{username}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{username}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>"),
"LogoPath": "vw_static/hibp.png",
"PwnCount": 0,
"DataClasses": [
@@ -229,6 +207,7 @@ fn config() -> Json<Value> {
"notifications": format!("{domain}/notifications"),
"sso": "",
},
"object": "config",
}))
}

File diff suppressed because it is too large

src/api/core/public.rs (new file, 238 additions)

@@ -0,0 +1,238 @@
use chrono::Utc;
use rocket::{
request::{self, FromRequest, Outcome},
Request, Route,
};
use std::collections::HashSet;
use crate::{
api::{EmptyResult, JsonUpcase},
auth,
db::{models::*, DbConn},
mail, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![ldap_import]
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct OrgImportGroupData {
Name: String,
ExternalId: String,
MemberExternalIds: Vec<String>,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct OrgImportUserData {
Email: String,
ExternalId: String,
Deleted: bool,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct OrgImportData {
Groups: Vec<OrgImportGroupData>,
Members: Vec<OrgImportUserData>,
OverwriteExisting: bool,
// LargeImport: bool, // For now this will not be used; upstream uses this to prevent syncs of more than 2000 users or groups without the flag set.
}
#[post("/public/organization/import", data = "<data>")]
async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut conn: DbConn) -> EmptyResult {
// Most of the logic for this function can be found here
// https://github.com/bitwarden/server/blob/fd892b2ff4547648a276734fb2b14a8abae2c6f5/src/Core/Services/Implementations/OrganizationService.cs#L1797
let org_id = token.0;
let data = data.into_inner().data;
for user_data in &data.Members {
if user_data.Deleted {
// If user is marked for deletion and it exists, revoke it
if let Some(mut user_org) =
UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &mut conn).await
{
user_org.revoke();
user_org.save(&mut conn).await?;
}
// If user is part of the organization, restore it
} else if let Some(mut user_org) =
UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &mut conn).await
{
if user_org.status < UserOrgStatus::Revoked as i32 {
user_org.restore();
user_org.save(&mut conn).await?;
}
} else {
// If user is not part of the organization
let user = match User::find_by_mail(&user_data.Email, &mut conn).await {
Some(user) => user, // exists in vaultwarden
None => {
// doesn't exist in vaultwarden
let mut new_user = User::new(user_data.Email.clone());
new_user.set_external_id(Some(user_data.ExternalId.clone()));
new_user.save(&mut conn).await?;
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(&new_user.email);
invitation.save(&mut conn).await?;
}
new_user
}
};
let user_org_status = if CONFIG.mail_enabled() || user.password_hash.is_empty() {
UserOrgStatus::Invited as i32
} else {
UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
};
let mut new_org_user = UserOrganization::new(user.uuid.clone(), org_id.clone());
new_org_user.access_all = false;
new_org_user.atype = UserOrgType::User as i32;
new_org_user.status = user_org_status;
new_org_user.save(&mut conn).await?;
if CONFIG.mail_enabled() {
let (org_name, org_email) = match Organization::find_by_uuid(&org_id, &mut conn).await {
Some(org) => (org.name, org.billing_email),
None => err!("Error looking up organization"),
};
mail::send_invite(
&user_data.Email,
&user.uuid,
Some(org_id.clone()),
Some(new_org_user.uuid),
&org_name,
Some(org_email),
)
.await?;
}
}
}
if CONFIG.org_groups_enabled() {
for group_data in &data.Groups {
let group_uuid = match Group::find_by_external_id(&group_data.ExternalId, &mut conn).await {
Some(group) => group.uuid,
None => {
let mut group =
Group::new(org_id.clone(), group_data.Name.clone(), false, Some(group_data.ExternalId.clone()));
group.save(&mut conn).await?;
group.uuid
}
};
GroupUser::delete_all_by_group(&group_uuid, &mut conn).await?;
for ext_id in &group_data.MemberExternalIds {
if let Some(user) = User::find_by_external_id(ext_id, &mut conn).await {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &mut conn).await
{
let mut group_user = GroupUser::new(group_uuid.clone(), user_org.uuid.clone());
group_user.save(&mut conn).await?;
}
}
}
}
} else {
warn!("Group support is disabled, groups will not be imported!");
}
// If this flag is enabled, any user not present in the Members list will be removed (by default they are kept unless they have Deleted == true)
if data.OverwriteExisting {
// Generate a HashSet to quickly verify if a member is listed or not.
let sync_members: HashSet<String> = data.Members.into_iter().map(|m| m.ExternalId).collect();
for user_org in UserOrganization::find_by_org(&org_id, &mut conn).await {
if let Some(user_external_id) =
User::find_by_uuid(&user_org.user_uuid, &mut conn).await.map(|u| u.external_id)
{
if user_external_id.is_some() && !sync_members.contains(&user_external_id.unwrap()) {
if user_org.atype == UserOrgType::Owner && user_org.status == UserOrgStatus::Confirmed as i32 {
// Removing owner, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&org_id, UserOrgType::Owner, &mut conn)
.await
<= 1
{
warn!("Can't delete the last owner");
continue;
}
}
user_org.delete(&mut conn).await?;
}
}
}
}
Ok(())
}
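The OverwriteExisting pass above amounts to a set-difference between the synced external ids and the organization's current members; a condensed sketch with invented helper names:

use std::collections::HashSet;

// A member is removed when its external id is absent from the synced set.
fn members_to_remove<'a>(
    current: &'a [(String, Option<String>)], // (user_org uuid, external id)
    synced: &HashSet<String>,
) -> Vec<&'a String> {
    current
        .iter()
        .filter_map(|(uuid, ext)| match ext {
            Some(ext) if !synced.contains(ext) => Some(uuid),
            _ => None,
        })
        .collect()
}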
pub struct PublicToken(String);
#[rocket::async_trait]
impl<'r> FromRequest<'r> for PublicToken {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
let headers = request.headers();
// Get access_token
let access_token: &str = match headers.get_one("Authorization") {
Some(a) => match a.rsplit("Bearer ").next() {
Some(split) => split,
None => err_handler!("No access token provided"),
},
None => err_handler!("No access token provided"),
};
// Check JWT token is valid and get device and user from it
let claims = match auth::decode_api_org(access_token) {
Ok(claims) => claims,
Err(_) => err_handler!("Invalid claim"),
};
// Check if time is between claims.nbf and claims.exp
let time_now = Utc::now().naive_utc().timestamp();
if time_now < claims.nbf {
err_handler!("Token issued in the future");
}
if time_now > claims.exp {
err_handler!("Token expired");
}
// Check if claims.iss is host|claims.scope[0]
let host = match auth::Host::from_request(request).await {
Outcome::Success(host) => host,
_ => err_handler!("Error getting Host"),
};
let complete_host = format!("{}|{}", host.host, claims.scope[0]);
if complete_host != claims.iss {
err_handler!("Token not issued by this server");
}
// Check if claims.sub is org_api_key.uuid
// Check if claims.client_sub is org_api_key.org_uuid
let conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let org_uuid = match claims.client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err_handler!("Malformed client_id"),
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, &conn).await {
Some(org_api_key) => org_api_key,
None => err_handler!("Invalid client_id"),
};
if org_api_key.org_uuid != claims.client_sub {
err_handler!("Token not issued for this org");
}
if org_api_key.uuid != claims.sub {
err_handler!("Token not issued for this client");
}
Outcome::Success(PublicToken(claims.client_sub))
}
}
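The nbf/exp guards above reduce to a plain interval test on Unix timestamps; a trivial sketch of what the two checks enforce together:

// A token is usable only while nbf <= now <= exp.
fn token_time_valid(now: i64, nbf: i64, exp: i64) -> bool {
    now >= nbf && now <= exp
}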


@@ -154,8 +154,8 @@ async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
}
#[get("/sends/<uuid>")]
async fn get_send(uuid: String, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &mut conn).await {
async fn get_send(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(uuid, &mut conn).await {
Some(send) => send,
None => err!("Send not found"),
};
@@ -180,7 +180,14 @@ async fn post_send(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbCon
let mut send = create_send(data, headers.user.uuid)?;
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
nt.send_send_update(
UpdateType::SyncSendCreate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
}
@@ -228,19 +235,6 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
err!("Send content is not a file");
}
// There is a bug regarding uploading attachments/sends using the Mobile clients
// See: https://github.com/dani-garcia/vaultwarden/issues/2644 && https://github.com/bitwarden/mobile/issues/2018
// This has been fixed via a PR: https://github.com/bitwarden/mobile/pull/2031, but hasn't landed in a new release yet.
// On the vaultwarden side this is temporarily fixed by using a custom multer library
// See: https://github.com/dani-garcia/vaultwarden/pull/2675
// In any case we will match TempFile::File and not TempFile::Buffered, since Buffered will alter the contents.
if let TempFile::Buffered {
content: _,
} = &data
{
err!("Error reading send file data. Please try an other client.");
}
let size = data.len();
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
@@ -265,7 +259,14 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
// Save the changes in the database
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
nt.send_send_update(
UpdateType::SyncSendCreate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
}
@@ -328,8 +329,8 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
// https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L243
#[post("/sends/<send_uuid>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
async fn post_send_file_v2_data(
send_uuid: String,
file_id: String,
send_uuid: &str,
file_id: &str,
data: Form<UploadDataV2<'_>>,
headers: Headers,
mut conn: DbConn,
@@ -339,32 +340,29 @@ async fn post_send_file_v2_data(
let mut data = data.into_inner();
// There is a bug regarding uploading attachments/sends using the Mobile clients
// See: https://github.com/dani-garcia/vaultwarden/issues/2644 && https://github.com/bitwarden/mobile/issues/2018
// This has been fixed via a PR: https://github.com/bitwarden/mobile/pull/2031, but hasn't landed in a new release yet.
// On the vaultwarden side this is temporarily fixed by using a custom multer library
// See: https://github.com/dani-garcia/vaultwarden/pull/2675
// In any case we will match TempFile::File and not TempFile::Buffered, since Buffered will alter the contents.
if let TempFile::Buffered {
content: _,
} = &data.data
{
err!("Error reading attachment data. Please try an other client.");
let Some(send) = Send::find_by_uuid(send_uuid, &mut conn).await else { err!("Send not found. Unable to save the file.") };
let Some(send_user_id) = &send.user_uuid else {err!("Sends are only supported for users at the moment")};
if send_user_id != &headers.user.uuid {
err!("Send doesn't belong to user");
}
if let Some(send) = Send::find_by_uuid(&send_uuid, &mut conn).await {
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send_uuid);
let file_path = folder_path.join(&file_id);
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_uuid);
let file_path = folder_path.join(file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&mut conn).await).await;
} else {
err!("Send not found. Unable to save the file.");
}
nt.send_send_update(
UpdateType::SyncSendCreate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(())
}
@@ -377,12 +375,13 @@ pub struct SendAccessData {
#[post("/sends/access/<access_id>", data = "<data>")]
async fn post_access(
access_id: String,
access_id: &str,
data: JsonUpcase<SendAccessData>,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &mut conn).await {
let mut send = match Send::find_by_access_id(access_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -422,18 +421,28 @@ async fn post_access(
send.save(&mut conn).await?;
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&String::from("00000000-0000-0000-0000-000000000000"),
&mut conn,
)
.await;
Ok(Json(send.to_json_access(&mut conn).await))
}
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
async fn post_access_file(
send_id: String,
file_id: String,
send_id: &str,
file_id: &str,
data: JsonUpcase<SendAccessData>,
host: Host,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_uuid(&send_id, &mut conn).await {
let mut send = match Send::find_by_uuid(send_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -470,7 +479,16 @@ async fn post_access_file(
send.save(&mut conn).await?;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&String::from("00000000-0000-0000-0000-000000000000"),
&mut conn,
)
.await;
let token_claims = crate::auth::generate_send_claims(send_id, file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(Json(json!({
"Object": "send-fileDownload",
@@ -480,9 +498,9 @@ async fn post_access_file(
}
#[get("/sends/<send_id>/<file_id>?<t>")]
async fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(&t) {
if claims.sub == format!("{}/{}", send_id, file_id) {
async fn download_send(send_id: SafeString, file_id: SafeString, t: &str) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(t) {
if claims.sub == format!("{send_id}/{file_id}") {
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).await.ok();
}
}
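The token issued by post_access_file is scoped to exactly one file: generate_send_claims puts `send_id/file_id` into the JWT subject, and download_send only serves the path whose ids reproduce that subject. A small sketch of the equality check (the claims struct is illustrative, not vaultwarden's exact type; expiry is enforced by the JWT decode step):

```rust
#[derive(serde::Deserialize)]
struct SendFileClaims {
    sub: String, // "<send_id>/<file_id>", fixed when the token was issued
    exp: i64,    // expiry, checked by the JWT library on decode
}

// A token minted for one file cannot be replayed against another
// send or file id, because the subject must match both path segments.
fn token_matches(claims: &SendFileClaims, send_id: &str, file_id: &str) -> bool {
    claims.sub == format!("{send_id}/{file_id}")
}
```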
@@ -491,7 +509,7 @@ async fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> O
#[put("/sends/<id>", data = "<data>")]
async fn put_send(
id: String,
id: &str,
data: JsonUpcase<SendData>,
headers: Headers,
mut conn: DbConn,
@@ -502,7 +520,7 @@ async fn put_send(
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -550,14 +568,21 @@ async fn put_send(
}
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
}
#[delete("/sends/<id>")]
async fn delete_send(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &mut conn).await {
async fn delete_send(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -567,16 +592,23 @@ async fn delete_send(id: String, headers: Headers, mut conn: DbConn, nt: Notify<
}
send.delete(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&mut conn).await).await;
nt.send_send_update(
UpdateType::SyncSendDelete,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(())
}
#[put("/sends/<id>/remove-password")]
async fn put_remove_password(id: String, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
async fn put_remove_password(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(&id, &mut conn).await {
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -587,7 +619,14 @@ async fn put_remove_password(id: String, headers: Headers, mut conn: DbConn, nt:
send.set_password(None);
send.save(&mut conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&mut conn).await).await;
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
}

View File

@@ -57,7 +57,6 @@ struct EnableAuthenticatorData {
async fn activate_authenticator(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
mut conn: DbConn,
) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
@@ -82,11 +81,11 @@ async fn activate_authenticator(
}
// Validate the token provided with the key, and save new twofactor
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &mut conn).await?;
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &headers.ip, &mut conn).await?;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -99,10 +98,9 @@ async fn activate_authenticator(
async fn activate_authenticator_put(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
conn: DbConn,
) -> JsonResult {
activate_authenticator(data, headers, ip, conn).await
activate_authenticator(data, headers, conn).await
}
pub async fn validate_totp_code_str(

View File

@@ -8,7 +8,7 @@ use crate::{
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
PasswordData,
},
auth::{ClientIp, Headers},
auth::Headers,
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType, User},
@@ -155,7 +155,7 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
}
#[post("/two-factor/duo", data = "<data>")]
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut conn: DbConn, ip: ClientIp) -> JsonResult {
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableDuoData = data.into_inner().data;
let mut user = headers.user;
@@ -178,7 +178,7 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut con
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -190,8 +190,8 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut con
}
#[put("/two-factor/duo", data = "<data>")]
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn, ip: ClientIp) -> JsonResult {
activate_duo(data, headers, conn, ip).await
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn).await
}
async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
@@ -270,11 +270,11 @@ pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult
let duo_sign = sign_duo_values(&sk, email, &ik, DUO_PREFIX, now + DUO_EXPIRE);
let app_sign = sign_duo_values(&ak, email, &ik, APP_PREFIX, now + APP_EXPIRE);
Ok((format!("{}:{}", duo_sign, app_sign), host))
Ok((format!("{duo_sign}:{app_sign}"), host))
}
fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64) -> String {
let val = format!("{}|{}|{}", email, ikey, expire);
let val = format!("{email}|{ikey}|{expire}");
let cookie = format!("{}|{}", prefix, BASE64.encode(val.as_bytes()));
format!("{}|{}", cookie, crypto::hmac_sign(key, &cookie))
@@ -327,7 +327,7 @@ fn parse_duo_values(key: &str, val: &str, ikey: &str, prefix: &str, time: i64) -
let u_b64 = split[1];
let u_sig = split[2];
let sig = crypto::hmac_sign(key, &format!("{}|{}", u_prefix, u_b64));
let sig = crypto::hmac_sign(key, &format!("{u_prefix}|{u_b64}"));
if !crypto::ct_eq(crypto::hmac_sign(key, &sig), crypto::hmac_sign(key, u_sig)) {
err!("Duo signatures don't match")

View File

@@ -7,7 +7,7 @@ use crate::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::{ClientIp, Headers},
auth::Headers,
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType},
@@ -90,7 +90,7 @@ async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: D
let twofactor_data = EmailTokenData::from_json(&x.data)?;
(true, json!(twofactor_data.email))
}
_ => (false, json!(null)),
_ => (false, serde_json::value::Value::Null),
};
Ok(Json(json!({
@@ -150,7 +150,7 @@ struct EmailData {
/// Verify email belongs to user and can be used for 2FA email codes.
#[put("/two-factor/email", data = "<data>")]
async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn, ip: ClientIp) -> JsonResult {
async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EmailData = data.into_inner().data;
let mut user = headers.user;
@@ -180,7 +180,7 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn,
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Email": email_data.email,
@@ -304,7 +304,7 @@ pub fn obscure_email(email: &str) -> String {
_ => {
let stars = "*".repeat(name_size - 2);
name.truncate(2);
format!("{}{}", name, stars)
format!("{name}{stars}")
}
};
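This branch keeps the first two characters of the local part and stars the rest, so `johndoe@example.com` reads back as `jo*****@example.com`. A self-contained sketch of just that branch (the full obscure_email also special-cases very short local parts):

```rust
// Toy version of the masking branch above; character counting is
// simplified compared to the real function.
fn mask_local_part(email: &str) -> String {
    let (name, domain) = email.split_once('@').unwrap_or((email, ""));
    let visible: String = name.chars().take(2).collect();
    let stars = "*".repeat(name.chars().count().saturating_sub(2));
    format!("{visible}{stars}@{domain}")
}

// mask_local_part("johndoe@example.com") == "jo*****@example.com"
```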

View File

@@ -6,7 +6,7 @@ use serde_json::Value;
use crate::{
api::{core::log_user_event, JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::{ClientHeaders, ClientIp, Headers},
auth::{ClientHeaders, Headers},
crypto,
db::{models::*, DbConn, DbPool},
mail, CONFIG,
@@ -73,12 +73,7 @@ struct RecoverTwoFactor {
}
#[post("/two-factor/recover", data = "<data>")]
async fn recover(
data: JsonUpcase<RecoverTwoFactor>,
client_headers: ClientHeaders,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
async fn recover(data: JsonUpcase<RecoverTwoFactor>, client_headers: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner().data;
use crate::db::models::User;
@@ -102,12 +97,19 @@ async fn recover(
// Remove all twofactors from the user
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
log_user_event(EventType::UserRecovered2fa as i32, &user.uuid, client_headers.device_type, &ip.ip, &mut conn).await;
log_user_event(
EventType::UserRecovered2fa as i32,
&user.uuid,
client_headers.device_type,
&client_headers.ip.ip,
&mut conn,
)
.await;
// Remove the recovery code, not needed without twofactors
user.totp_recover = None;
user.save(&mut conn).await?;
Ok(Json(json!({})))
Ok(Json(Value::Object(serde_json::Map::new())))
}
async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
@@ -126,12 +128,7 @@ struct DisableTwoFactorData {
}
#[post("/two-factor/disable", data = "<data>")]
async fn disable_twofactor(
data: JsonUpcase<DisableTwoFactorData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let user = headers.user;
@@ -144,7 +141,8 @@ async fn disable_twofactor(
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
twofactor.delete(&mut conn).await?;
log_user_event(EventType::UserDisabled2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserDisabled2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn)
.await;
}
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty();
@@ -173,13 +171,8 @@ async fn disable_twofactor(
}
#[put("/two-factor/disable", data = "<data>")]
async fn disable_twofactor_put(
data: JsonUpcase<DisableTwoFactorData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
disable_twofactor(data, headers, conn, ip).await
async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn).await
}
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {

View File

@@ -9,7 +9,7 @@ use crate::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
},
auth::{ClientIp, Headers},
auth::Headers,
db::{
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
@@ -242,12 +242,7 @@ impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
}
#[post("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn(
data: JsonUpcase<EnableWebauthnData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
@@ -286,7 +281,7 @@ async fn activate_webauthn(
.await?;
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -297,13 +292,8 @@ async fn activate_webauthn(
}
#[put("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn_put(
data: JsonUpcase<EnableWebauthnData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
activate_webauthn(data, headers, conn, ip).await
async fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn).await
}
#[derive(Deserialize, Debug)]

View File

@@ -8,7 +8,7 @@ use crate::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
},
auth::{ClientIp, Headers},
auth::Headers,
db::{
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
@@ -47,7 +47,7 @@ fn parse_yubikeys(data: &EnableYubikeyData) -> Vec<String> {
}
fn jsonify_yubikeys(yubikeys: Vec<String>) -> serde_json::Value {
let mut result = json!({});
let mut result = Value::Object(serde_json::Map::new());
for (i, key) in yubikeys.into_iter().enumerate() {
result[format!("Key{}", i + 1)] = Value::String(key);
@@ -118,12 +118,7 @@ async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, mut
}
#[post("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey(
data: JsonUpcase<EnableYubikeyData>,
headers: Headers,
mut conn: DbConn,
ip: ClientIp,
) -> JsonResult {
async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableYubikeyData = data.into_inner().data;
let mut user = headers.user;
@@ -169,7 +164,7 @@ async fn activate_yubikey(
_generate_recover_code(&mut user, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &ip.ip, &mut conn).await;
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
@@ -181,13 +176,8 @@ async fn activate_yubikey(
}
#[put("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey_put(
data: JsonUpcase<EnableYubikeyData>,
headers: Headers,
conn: DbConn,
ip: ClientIp,
) -> JsonResult {
activate_yubikey(data, headers, conn, ip).await
async fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn).await
}
pub async fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {

View File

@@ -19,7 +19,7 @@ use tokio::{
net::lookup_host,
};
use html5gum::{Emitter, EndTag, HtmlString, InfallibleTokenizer, Readable, StartTag, StringReader, Tokenizer};
use html5gum::{Emitter, HtmlString, InfallibleTokenizer, Readable, StringReader, Tokenizer};
use crate::{
error::Error,
@@ -46,10 +46,15 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the cookie store
let cookie_store = Arc::new(Jar::default());
let icon_download_timeout = Duration::from_secs(CONFIG.icon_download_timeout());
let pool_idle_timeout = Duration::from_secs(10);
// Reuse the client between requests
let client = get_reqwest_client_builder()
.cookie_provider(Arc::clone(&cookie_store))
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(true)
.default_headers(default_headers.clone());
match client.build() {
@@ -58,9 +63,11 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder()
.cookie_provider(cookie_store)
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(false)
.default_headers(default_headers)
.build()
.expect("Failed to build client")
}
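Isolated from the trust-dns fallback, the pool tuning introduced here looks as follows, a sketch assuming reqwest 0.11 (the function name and the fixed 10-second request timeout are illustrative; the real code reads the timeout from CONFIG):

```rust
use std::time::Duration;

// Keeping few idle connections and expiring them quickly works around
// servers that silently drop kept-alive connections (hyperium/hyper#2136).
fn build_icon_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(Duration::from_secs(10))           // per-request timeout
        .pool_max_idle_per_host(5)                  // at most 5 idle connections per host
        .pool_idle_timeout(Duration::from_secs(10)) // drop idle connections after 10 seconds
        .build()
}
```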
@@ -79,7 +86,7 @@ async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
return None;
}
if is_domain_blacklisted(domain).await {
if check_domain_blacklist_reason(domain).await.is_some() {
return None;
}
@@ -97,15 +104,15 @@ async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
}
#[get("/<domain>/icon.png")]
async fn icon_external(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG._icon_service_url()).await
async fn icon_external(domain: &str) -> Option<Redirect> {
icon_redirect(domain, &CONFIG._icon_service_url()).await
}
#[get("/<domain>/icon.png")]
async fn icon_internal(domain: String) -> Cached<(ContentType, Vec<u8>)> {
async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
if !is_valid_domain(&domain) {
if !is_valid_domain(domain) {
warn!("Invalid domain: {}", domain);
return Cached::ttl(
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
@@ -114,7 +121,7 @@ async fn icon_internal(domain: String) -> Cached<(ContentType, Vec<u8>)> {
);
}
match get_icon(&domain).await {
match get_icon(domain).await {
Some((icon, icon_type)) => {
Cached::ttl((ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
}
@@ -130,7 +137,7 @@ fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "_-.";
// If parsing the domain fails using Url, it will not work with reqwest.
if let Err(parse_error) = url::Url::parse(format!("https://{}", domain).as_str()) {
if let Err(parse_error) = url::Url::parse(format!("https://{domain}").as_str()) {
debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
return false;
} else if domain.is_empty()
@@ -258,9 +265,15 @@ mod tests {
}
}
#[derive(Clone)]
enum DomainBlacklistReason {
Regex,
IP,
}
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn is_domain_blacklisted(domain: &str) -> bool {
async fn check_domain_blacklist_reason(domain: &str) -> Option<DomainBlacklistReason> {
// First check the blacklist regex if there is a match.
// This prevents the blocked domain(s) from being leaked via a DNS lookup.
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
@@ -284,7 +297,7 @@ async fn is_domain_blacklisted(domain: &str) -> bool {
if is_match {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
return true;
return Some(DomainBlacklistReason::Regex);
}
}
@@ -293,13 +306,13 @@ async fn is_domain_blacklisted(domain: &str) -> bool {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return true;
return Some(DomainBlacklistReason::IP);
}
}
}
}
false
None
}
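The #[cached] attribute memoizes the whole async check: results are keyed by the owned domain string, the cache holds 16 entries, and each entry expires after 60 seconds, so the regex and DNS work run at most once per domain per minute. A standalone sketch of the same pattern (assuming the cached crate with its async support enabled; the function body here is a stand-in):

```rust
use cached::proc_macro::cached;

// Keyed by String, capped at 16 entries, each entry valid for 60 seconds.
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn is_probably_blocked(domain: &str) -> bool {
    // stand-in for the regex and DNS checks in check_domain_blacklist_reason
    domain.ends_with(".invalid")
}
```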
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
@@ -409,38 +422,34 @@ fn get_favicons_node(
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
const ATTR_REL: &[u8] = b"rel";
const ATTR_HREF: &[u8] = b"href";
const ATTR_SIZES: &[u8] = b"sizes";
let mut base_url = url.clone();
let mut icon_tags: Vec<StartTag> = Vec::new();
let mut icon_tags: Vec<Tag> = Vec::new();
for token in dom {
match token {
FaviconToken::StartTag(tag) => {
if *tag.name == TAG_LINK
&& tag.attributes.contains_key(ATTR_REL)
&& tag.attributes.contains_key(ATTR_HREF)
{
let rel_value = std::str::from_utf8(tag.attributes.get(ATTR_REL).unwrap())
.unwrap_or_default()
.to_ascii_lowercase();
if rel_value.contains("icon") && !rel_value.contains("mask-icon") {
icon_tags.push(tag);
let tag_name: &[u8] = &token.tag.name;
match tag_name {
TAG_LINK => {
icon_tags.push(token.tag);
}
} else if *tag.name == TAG_BASE && tag.attributes.contains_key(ATTR_HREF) {
let href = std::str::from_utf8(tag.attributes.get(ATTR_HREF).unwrap()).unwrap_or_default();
TAG_BASE => {
base_url = if let Some(href) = token.tag.attributes.get(ATTR_HREF) {
let href = std::str::from_utf8(href).unwrap_or_default();
debug!("Found base href: {href}");
base_url = match base_url.join(href) {
match base_url.join(href) {
Ok(inner_url) => inner_url,
_ => url.clone(),
_ => continue,
}
} else {
continue;
};
}
}
FaviconToken::EndTag(tag) => {
if *tag.name == TAG_HEAD {
TAG_HEAD if token.closing => {
break;
}
_ => {
continue;
}
}
}
@@ -564,8 +573,10 @@ async fn get_page(url: &str) -> Result<Response, Error> {
}
async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
warn!("Favicon '{}' resolves to a blacklisted domain or IP!", url);
match check_domain_blacklist_reason(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
Some(DomainBlacklistReason::Regex) => warn!("Favicon '{}' is from a blacklisted domain!", url),
Some(DomainBlacklistReason::IP) => warn!("Favicon '{}' is hosted on a non-global IP!", url),
None => (),
}
let mut client = CLIENT.get(url);
@@ -575,7 +586,7 @@ async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Err
match client.send().await {
Ok(c) => c.error_for_status().map_err(Into::into),
Err(e) => err_silent!(format!("{}", e)),
Err(e) => err_silent!(format!("{e}")),
}
}
@@ -659,8 +670,10 @@ fn parse_sizes(sizes: &str) -> (u16, u16) {
}
async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
if is_domain_blacklisted(domain).await {
err_silent!("Domain is blacklisted", domain)
match check_domain_blacklist_reason(domain).await {
Some(DomainBlacklistReason::Regex) => err_silent!("Domain is blacklisted", domain),
Some(DomainBlacklistReason::IP) => err_silent!("Host resolves to a non-global IP", domain),
None => (),
}
let icon_result = get_icon_url(domain).await?;
@@ -672,7 +685,7 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
for icon in icon_result.iconlist.iter().take(5) {
if icon.href.starts_with("data:image") {
let datauri = DataUrl::process(&icon.href).unwrap();
let Ok(datauri) = DataUrl::process(&icon.href) else {continue};
// Check if we are able to decode the data uri
let mut body = BytesMut::new();
match datauri.decode::<_, ()>(|bytes| {
@@ -797,7 +810,7 @@ impl reqwest::cookie::CookieStore for Jar {
let cookie_store = self.0.read().unwrap();
let s = cookie_store
.get_request_values(url)
.map(|(name, value)| format!("{}={}", name, value))
.map(|(name, value)| format!("{name}={value}"))
.collect::<Vec<_>>()
.join("; ");
@@ -810,43 +823,64 @@ impl reqwest::cookie::CookieStore for Jar {
}
/// Custom FaviconEmitter for the html5gum parser.
/// The FaviconEmitter is using an almost 1:1 copy of the DefaultEmitter with some small changes.
/// The FaviconEmitter is using an optimized version of the DefaultEmitter.
/// This prevents emitting tags like comments, doctypes and strings between the tags.
/// It also only emits the tags we need, and only when they have the correct attributes.
/// Therefore, parsing the HTML content is faster.
use std::collections::{BTreeSet, VecDeque};
use std::collections::BTreeMap;
#[derive(Debug)]
enum FaviconToken {
StartTag(StartTag),
EndTag(EndTag),
#[derive(Default)]
pub struct Tag {
/// The tag's name, such as `"link"` or `"base"`.
pub name: HtmlString,
/// A mapping for any HTML attributes this start tag may have.
///
/// Duplicate attributes are ignored after the first one as per WHATWG spec.
pub attributes: BTreeMap<HtmlString, HtmlString>,
}
#[derive(Default, Debug)]
struct FaviconToken {
tag: Tag,
closing: bool,
}
#[derive(Default)]
struct FaviconEmitter {
current_token: Option<FaviconToken>,
last_start_tag: HtmlString,
current_attribute: Option<(HtmlString, HtmlString)>,
seen_attributes: BTreeSet<HtmlString>,
emitted_tokens: VecDeque<FaviconToken>,
emit_token: bool,
}
impl FaviconEmitter {
fn emit_token(&mut self, token: FaviconToken) {
self.emitted_tokens.push_front(token);
fn flush_current_attribute(&mut self, emit_current_tag: bool) {
const ATTR_HREF: &[u8] = b"href";
const ATTR_REL: &[u8] = b"rel";
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
if let Some(ref mut token) = self.current_token {
let tag_name: &[u8] = &token.tag.name;
if self.current_attribute.is_some() && (tag_name == TAG_BASE || tag_name == TAG_LINK) {
let (k, v) = self.current_attribute.take().unwrap();
token.tag.attributes.entry(k).and_modify(|_| {}).or_insert(v);
}
fn flush_current_attribute(&mut self) {
if let Some((k, v)) = self.current_attribute.take() {
match self.current_token {
Some(FaviconToken::StartTag(ref mut tag)) => {
tag.attributes.entry(k).and_modify(|_| {}).or_insert(v);
let tag_attr = &token.tag.attributes;
match tag_name {
TAG_HEAD if token.closing => self.emit_token = true,
TAG_BASE if tag_attr.contains_key(ATTR_HREF) => self.emit_token = true,
TAG_LINK if emit_current_tag && tag_attr.contains_key(ATTR_REL) && tag_attr.contains_key(ATTR_HREF) => {
let rel_value =
std::str::from_utf8(token.tag.attributes.get(ATTR_REL).unwrap()).unwrap_or_default();
if rel_value.contains("icon") && !rel_value.contains("mask-icon") {
self.emit_token = true
}
Some(FaviconToken::EndTag(_)) => {
self.seen_attributes.insert(k);
}
_ => {
debug_assert!(false);
}
_ => (),
}
}
}
@@ -861,86 +895,71 @@ impl Emitter for FaviconEmitter {
}
fn pop_token(&mut self) -> Option<Self::Token> {
self.emitted_tokens.pop_back()
}
fn init_start_tag(&mut self) {
self.current_token = Some(FaviconToken::StartTag(StartTag::default()));
}
fn init_end_tag(&mut self) {
self.current_token = Some(FaviconToken::EndTag(EndTag::default()));
self.seen_attributes.clear();
}
fn emit_current_tag(&mut self) -> Option<html5gum::State> {
self.flush_current_attribute();
let mut token = self.current_token.take().unwrap();
let mut emit = false;
match token {
FaviconToken::EndTag(ref mut tag) => {
// Always clean seen attributes
self.seen_attributes.clear();
// Only trigger an emit for the </head> tag.
// This is matched, and will break the for-loop.
if *tag.name == b"head" {
emit = true;
}
}
FaviconToken::StartTag(ref mut tag) => {
// Only trigger an emit for <link> and <base> tags.
// These are the only tags we want to parse.
if *tag.name == b"link" || *tag.name == b"base" {
self.set_last_start_tag(Some(&tag.name));
emit = true;
} else {
self.set_last_start_tag(None);
}
}
}
// Only emit the tags we want to parse.
if emit {
self.emit_token(token);
if self.emit_token {
self.emit_token = false;
return self.current_token.take();
}
None
}
fn push_tag_name(&mut self, s: &[u8]) {
match self.current_token {
Some(
FaviconToken::StartTag(StartTag {
ref mut name,
..
})
| FaviconToken::EndTag(EndTag {
ref mut name,
..
}),
) => {
name.extend(s);
fn init_start_tag(&mut self) {
self.current_token = Some(FaviconToken {
tag: Tag::default(),
closing: false,
});
}
_ => debug_assert!(false),
fn init_end_tag(&mut self) {
self.current_token = Some(FaviconToken {
tag: Tag::default(),
closing: true,
});
}
fn emit_current_tag(&mut self) -> Option<html5gum::State> {
self.flush_current_attribute(true);
self.last_start_tag.clear();
if self.current_token.is_some() && !self.current_token.as_ref().unwrap().closing {
self.last_start_tag.extend(&*self.current_token.as_ref().unwrap().tag.name);
}
html5gum::naive_next_state(&self.last_start_tag)
}
fn push_tag_name(&mut self, s: &[u8]) {
if let Some(ref mut token) = self.current_token {
token.tag.name.extend(s);
}
}
fn init_attribute(&mut self) {
self.flush_current_attribute();
self.current_attribute = Some(Default::default());
self.flush_current_attribute(false);
self.current_attribute = match &self.current_token {
Some(token) => {
let tag_name: &[u8] = &token.tag.name;
match tag_name {
b"link" | b"head" | b"base" => Some(Default::default()),
_ => None,
}
}
_ => None,
};
}
fn push_attribute_name(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().0.extend(s);
if let Some(attr) = &mut self.current_attribute {
attr.0.extend(s)
}
}
fn push_attribute_value(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().1.extend(s);
if let Some(attr) = &mut self.current_attribute {
attr.1.extend(s)
}
}
fn current_is_appropriate_end_tag_token(&mut self) -> bool {
match self.current_token {
Some(FaviconToken::EndTag(ref tag)) => !self.last_start_tag.is_empty() && self.last_start_tag == tag.name,
match &self.current_token {
Some(token) if token.closing => !self.last_start_tag.is_empty() && self.last_start_tag == token.tag.name,
_ => false,
}
}

View File

@@ -14,7 +14,7 @@ use crate::{
core::two_factor::{duo, email, email::EmailTokenData, yubikey},
ApiResult, EmptyResult, JsonResult, JsonUpcase,
},
auth::{ClientHeaders, ClientIp},
auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp},
db::{models::*, DbConn},
error::MapResult,
mail, util, CONFIG,
@@ -25,7 +25,7 @@ pub fn routes() -> Vec<Route> {
}
#[post("/connect/token", data = "<data>")]
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn, ip: ClientIp) -> JsonResult {
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: ConnectData = data.into_inner();
let mut user_uuid: Option<String> = None;
@@ -45,14 +45,18 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, &mut user_uuid, &mut conn, &ip).await
_password_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.client_secret, "client_secret cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_api_key_login(data, &mut user_uuid, &mut conn, &ip).await
_check_is_some(&data.device_identifier, "device_identifier cannot be blank")?;
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_api_key_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
}
t => err!("Invalid type", t),
};
@@ -64,14 +68,21 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
EventType::UserLoggedIn as i32,
&user_uuid,
client_header.device_type,
&ip.ip,
&client_header.ip.ip,
&mut conn,
)
.await;
}
Err(e) => {
if let Some(ev) = e.get_event() {
log_user_event(ev.event as i32, &user_uuid, client_header.device_type, &ip.ip, &mut conn).await
log_user_event(
ev.event as i32,
&user_uuid,
client_header.device_type,
&client_header.ip.ip,
&mut conn,
)
.await
}
}
}
@@ -96,7 +107,7 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(conn).await?;
Ok(Json(json!({
let result = json!({
"access_token": access_token,
"expires_in": expires_in,
"token_type": "Bearer",
@@ -106,10 +117,14 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": scope,
"unofficialServer": true,
})))
});
Ok(Json(result))
}
async fn _password_login(
@@ -130,7 +145,7 @@ async fn _password_login(
// Get the user
let username = data.username.as_ref().unwrap().trim();
let user = match User::find_by_mail(username, conn).await {
let mut user = match User::find_by_mail(username, conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
};
@@ -140,7 +155,27 @@ async fn _password_login(
// Check password
let password = data.password.as_ref().unwrap();
if !user.check_valid_password(password) {
if let Some(auth_request_uuid) = data.auth_request.clone() {
if let Some(auth_request) = AuthRequest::find_by_uuid(auth_request_uuid.as_str(), conn).await {
if !auth_request.check_access_code(password) {
err!(
"Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
} else {
err!(
"Auth request not found. Try again.",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
} else if !user.check_valid_password(password) {
err!(
"Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
@@ -150,6 +185,16 @@ async fn _password_login(
)
}
// Change the KDF Iterations
if user.password_iterations != CONFIG.password_iterations() {
user.password_iterations = CONFIG.password_iterations();
user.set_password(password, None, false, None);
if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e);
}
}
// Check if the user is disabled
if !user.enabled {
err!(
@@ -172,7 +217,6 @@ async fn _password_login(
if resend_limit == 0 || user.login_verify_count < resend_limit {
// We want to send another email verification if we require signups to verify
// their email address, and we haven't sent them a reminder in a while...
let mut user = user;
user.last_verifying_at = Some(now);
user.login_verify_count += 1;
@@ -231,9 +275,15 @@ async fn _password_login(
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false,// TODO: Same as above
"scope": scope,
"unofficialServer": true,
"UserDecryptionOptions": {
"HasMasterPassword": !user.password_hash.is_empty(),
"Object": "userDecryptionOptions"
},
});
if let Some(token) = twofactor_token {
@@ -250,16 +300,23 @@ async fn _api_key_login(
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api" {
err!("Scope not supported")
}
let scope_vec = vec!["api".into()];
// Ratelimit the login
crate::ratelimit::check_limit_login(&ip.ip)?;
// Validate scope
match data.scope.as_ref().unwrap().as_ref() {
"api" => _user_api_key_login(data, user_uuid, conn, ip).await,
"api.organization" => _organization_api_key_login(data, conn, ip).await,
_ => err!("Scope not supported"),
}
}
async fn _user_api_key_login(
data: ConnectData,
user_uuid: &mut Option<String>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let client_user_uuid = match client_id.strip_prefix("user.") {
@@ -316,6 +373,7 @@ async fn _api_key_login(
}
// Common
let scope_vec = vec!["api".into()];
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(conn).await?;
@@ -324,7 +382,7 @@ async fn _api_key_login(
// Note: No refresh_token is returned. The CLI just repeats the
// client_credentials login flow when the existing token expires.
Ok(Json(json!({
let result = json!({
"access_token": access_token,
"expires_in": expires_in,
"token_type": "Bearer",
@@ -333,8 +391,42 @@ async fn _api_key_login(
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: Same as above
"scope": scope,
"scope": "api",
"unofficialServer": true,
});
Ok(Json(result))
}
async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &ClientIp) -> JsonResult {
// Get the org via the client_id
let client_id = data.client_id.as_ref().unwrap();
let org_uuid = match client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, conn).await {
Some(org_api_key) => org_api_key,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
};
// Check API key.
let client_secret = data.client_secret.as_ref().unwrap();
if !org_api_key.check_valid_api_key(client_secret) {
err!("Incorrect client_secret", format!("IP: {}. Organization: {}.", ip.ip, org_api_key.org_uuid))
}
let claim = generate_organization_api_key_login_claims(org_api_key.uuid, org_api_key.org_uuid);
let access_token = crate::auth::encode_jwt(&claim);
Ok(Json(json!({
"access_token": access_token,
"expires_in": 3600,
"token_type": "Bearer",
"scope": "api.organization",
"unofficialServer": true,
})))
}
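For illustration, here is how a client might drive this flow end to end; the field names are taken from ConnectData below, while the base path (/identity, as the routes are commonly mounted) and reqwest's json feature are assumptions:

```rust
use std::collections::HashMap;

// Hedged sketch of the client_credentials exchange served by
// _organization_api_key_login; "organization.<uuid>" selects the org.
async fn fetch_org_token(base: &str, org_uuid: &str, secret: &str) -> reqwest::Result<String> {
    let mut form: HashMap<&str, String> = HashMap::new();
    form.insert("grant_type", "client_credentials".to_string());
    form.insert("scope", "api.organization".to_string());
    form.insert("client_id", format!("organization.{org_uuid}"));
    form.insert("client_secret", secret.to_string());

    let resp: serde_json::Value = reqwest::Client::new()
        .post(format!("{base}/identity/connect/token"))
        .form(&form)
        .send()
        .await?
        .json()
        .await?;
    Ok(resp["access_token"].as_str().unwrap_or_default().to_string())
}
```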
@@ -578,6 +670,8 @@ struct ConnectData {
#[field(name = uncased("two_factor_remember"))]
#[field(name = uncased("twofactorremember"))]
two_factor_remember: Option<i32>,
#[field(name = uncased("authrequest"))]
auth_request: Option<String>,
}
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {

View File

@@ -3,6 +3,7 @@ pub mod core;
mod icons;
mod identity;
mod notifications;
mod push;
mod web;
use rocket::serde::json::Json;
@@ -12,6 +13,7 @@ pub use crate::api::{
admin::catchers as admin_catchers,
admin::routes as admin_routes,
core::catchers as core_catchers,
core::purge_auth_requests,
core::purge_sends,
core::purge_trashed_ciphers,
core::routes as core_routes,
@@ -21,7 +23,11 @@ pub use crate::api::{
icons::routes as icons_routes,
identity::routes as identity_routes,
notifications::routes as notifications_routes,
notifications::{start_notification_server, Notify, UpdateType},
notifications::{start_notification_server, AnonymousNotify, Notify, UpdateType, WS_ANONYMOUS_SUBSCRIPTIONS},
push::{
push_cipher_update, push_folder_update, push_logout, push_send_update, push_user_update, register_push_device,
unregister_push_device,
},
web::catchers as web_catchers,
web::routes as web_routes,
web::static_files,

View File

@@ -1,17 +1,15 @@
use std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
net::{IpAddr, SocketAddr},
sync::Arc,
time::Duration,
};
use chrono::NaiveDateTime;
use futures::{SinkExt, StreamExt};
use chrono::{NaiveDateTime, Utc};
use rmpv::Value;
use rocket::{serde::json::Json, Route};
use serde_json::Value as JsonValue;
use rocket::{
futures::{SinkExt, StreamExt},
Route,
};
use tokio::{
net::{TcpListener, TcpStream},
sync::mpsc::Sender,
@@ -22,56 +20,234 @@ use tokio_tungstenite::{
};
use crate::{
api::EmptyResult,
auth::Headers,
db::models::{Cipher, Folder, Send, User},
auth::{ClientIp, WsAccessTokenHeader},
db::{
models::{Cipher, Folder, Send as DbSend, User},
DbConn,
},
Error, CONFIG,
};
use once_cell::sync::Lazy;
static WS_USERS: Lazy<Arc<WebSocketUsers>> = Lazy::new(|| {
Arc::new(WebSocketUsers {
map: Arc::new(dashmap::DashMap::new()),
})
});
pub static WS_ANONYMOUS_SUBSCRIPTIONS: Lazy<Arc<AnonymousWebSocketSubscriptions>> = Lazy::new(|| {
Arc::new(AnonymousWebSocketSubscriptions {
map: Arc::new(dashmap::DashMap::new()),
})
});
use super::{
push::push_auth_request, push::push_auth_response, push_cipher_update, push_folder_update, push_logout,
push_send_update, push_user_update,
};
pub fn routes() -> Vec<Route> {
routes![negotiate, websockets_err]
routes![websockets_hub, anonymous_websockets_hub]
}
#[get("/hub")]
fn websockets_err() -> EmptyResult {
static SHOW_WEBSOCKETS_MSG: AtomicBool = AtomicBool::new(true);
#[derive(FromForm, Debug)]
struct WsAccessToken {
access_token: Option<String>,
}
if CONFIG.websocket_enabled()
&& SHOW_WEBSOCKETS_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok()
{
err!(
"
###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work.
Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false.
###########################################################################################\n"
)
struct WSEntryMapGuard {
users: Arc<WebSocketUsers>,
user_uuid: String,
entry_uuid: uuid::Uuid,
addr: IpAddr,
}
impl WSEntryMapGuard {
fn new(users: Arc<WebSocketUsers>, user_uuid: String, entry_uuid: uuid::Uuid, addr: IpAddr) -> Self {
Self {
users,
user_uuid,
entry_uuid,
addr,
}
}
}
impl Drop for WSEntryMapGuard {
fn drop(&mut self) {
info!("Closing WS connection from {}", self.addr);
if let Some(mut entry) = self.users.map.get_mut(&self.user_uuid) {
entry.retain(|(uuid, _)| uuid != &self.entry_uuid);
}
}
}
struct WSAnonymousEntryMapGuard {
subscriptions: Arc<AnonymousWebSocketSubscriptions>,
token: String,
addr: IpAddr,
}
impl WSAnonymousEntryMapGuard {
fn new(subscriptions: Arc<AnonymousWebSocketSubscriptions>, token: String, addr: IpAddr) -> Self {
Self {
subscriptions,
token,
addr,
}
}
}
impl Drop for WSAnonymousEntryMapGuard {
fn drop(&mut self) {
info!("Closing WS connection from {}", self.addr);
self.subscriptions.map.remove(&self.token);
}
}
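Both guards rely on the same RAII idea: cleanup lives in Drop, so the map entry disappears whether the stream loop ends normally, errors out, or the task is cancelled. A minimal standalone sketch of the pattern:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Toy version of WSEntryMapGuard: whoever registers the subscription holds
// the guard, and dropping it removes the entry, no matter how the task ends.
struct SubscriptionGuard {
    map: Arc<Mutex<HashMap<String, ()>>>,
    key: String,
}

impl Drop for SubscriptionGuard {
    fn drop(&mut self) {
        self.map.lock().unwrap().remove(&self.key);
    }
}
```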
#[get("/hub?<data..>")]
fn websockets_hub<'r>(
ws: rocket_ws::WebSocket,
data: WsAccessToken,
ip: ClientIp,
header_token: WsAccessTokenHeader,
) -> Result<rocket_ws::Stream!['r], Error> {
let addr = ip.ip;
info!("Accepting Rocket WS connection from {addr}");
let token = if let Some(token) = data.access_token {
token
} else if let Some(token) = header_token.access_token {
token
} else {
Err(Error::empty())
err_code!("Invalid claim", 401)
};
let Ok(claims) = crate::auth::decode_login(&token) else { err_code!("Invalid token", 401) };
let (mut rx, guard) = {
let users = Arc::clone(&WS_USERS);
// Add a channel to send messages to this client to the map
let entry_uuid = uuid::Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
users.map.entry(claims.sub.clone()).or_default().push((entry_uuid, tx));
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSEntryMapGuard::new(users, claims.sub, entry_uuid, addr))
};
Ok({
rocket_ws::Stream! { ws => {
let mut ws = ws;
let _guard = guard;
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
tokio::select! {
res = ws.next() => {
match res {
Some(Ok(message)) => {
match message {
// Respond to any pings
Message::Ping(ping) => yield Message::Pong(ping),
Message::Pong(_) => {/* Ignored */},
// We should receive an initial message with the protocol and version, and we will reply to it
Message::Text(ref message) => {
let msg = message.strip_suffix(RECORD_SEPARATOR as char).unwrap_or(message);
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
yield Message::binary(INITIAL_RESPONSE);
continue;
}
}
// Just echo anything else the client sends
_ => yield message,
}
}
_ => break,
}
}
res = rx.recv() => {
match res {
Some(res) => yield res,
None => break,
}
}
_ = interval.tick() => yield Message::Ping(create_ping())
}
}
}}
})
}
#[post("/hub/negotiate")]
fn negotiate(_headers: Headers) -> Json<JsonValue> {
use crate::crypto;
use data_encoding::BASE64URL;
#[get("/anonymous-hub?<token..>")]
fn anonymous_websockets_hub<'r>(
ws: rocket_ws::WebSocket,
token: String,
ip: ClientIp,
) -> Result<rocket_ws::Stream!['r], Error> {
let addr = ip.ip;
info!("Accepting Anonymous Rocket WS connection from {addr}");
let conn_id = crypto::encode_random_bytes::<16>(BASE64URL);
let mut available_transports: Vec<JsonValue> = Vec::new();
let (mut rx, guard) = {
let subscriptions = Arc::clone(&WS_ANONYMOUS_SUBSCRIPTIONS);
if CONFIG.websocket_enabled() {
available_transports.push(json!({"transport":"WebSockets", "transferFormats":["Text","Binary"]}));
// Add a channel to send messages to this client to the map
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
subscriptions.map.insert(token.clone(), tx);
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSAnonymousEntryMapGuard::new(subscriptions, token, addr))
};
Ok({
rocket_ws::Stream! { ws => {
let mut ws = ws;
let _guard = guard;
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
tokio::select! {
res = ws.next() => {
match res {
Some(Ok(message)) => {
match message {
// Respond to any pings
Message::Ping(ping) => yield Message::Pong(ping),
Message::Pong(_) => {/* Ignored */},
// We should receive an initial message with the protocol and version, and we will reply to it
Message::Text(ref message) => {
let msg = message.strip_suffix(RECORD_SEPARATOR as char).unwrap_or(message);
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
yield Message::binary(INITIAL_RESPONSE);
continue;
}
}
// Just echo anything else the client sends
_ => yield message,
}
}
_ => break,
}
}
// TODO: Implement transports
// Rocket WS support: https://github.com/SergioBenitez/Rocket/issues/90
// Rocket SSE support: https://github.com/SergioBenitez/Rocket/issues/33
// {"transport":"ServerSentEvents", "transferFormats":["Text"]},
// {"transport":"LongPolling", "transferFormats":["Text","Binary"]}
Json(json!({
"connectionId": conn_id,
"availableTransports": available_transports
}))
res = rx.recv() => {
match res {
Some(res) => yield res,
None => break,
}
}
_ = interval.tick() => yield Message::Ping(create_ping())
}
}
}}
})
}
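Both hubs speak the SignalR handshake before switching to MessagePack frames: the client sends a JSON record terminated by the 0x1e record separator, and the server answers with an empty JSON object. A hedged sketch of that exchange (INITIAL_MESSAGE and INITIAL_RESPONSE are defined elsewhere in this file; the literals below mirror the SignalR handshake format):

```rust
const RECORD_SEPARATOR: u8 = 0x1e;

// What the client is expected to send first...
fn client_handshake() -> String {
    format!("{}{}", r#"{"protocol":"messagepack","version":1}"#, RECORD_SEPARATOR as char)
}

// ...and the empty-object acknowledgement the hub yields back
// (Message::binary(INITIAL_RESPONSE) in the loops above).
fn server_ack() -> Vec<u8> {
    let mut out = b"{}".to_vec();
    out.push(RECORD_SEPARATOR);
    out
}
```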
//
@@ -152,8 +328,8 @@ impl WebSocketUsers {
async fn send_update(&self, user_uuid: &str, data: &[u8]) {
if let Some(user) = self.map.get(user_uuid).map(|v| v.clone()) {
for (_, sender) in user.iter() {
if sender.send(Message::binary(data)).await.is_err() {
// TODO: Delete from map here too?
if let Err(e) = sender.send(Message::binary(data)).await {
error!("Error sending WS update {e}");
}
}
}
@@ -164,12 +340,37 @@ impl WebSocketUsers {
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
ut,
None,
);
self.send_update(&user.uuid, &data).await;
if CONFIG.push_enabled() {
push_user_update(ut, user);
}
}
pub async fn send_folder_update(&self, ut: UpdateType, folder: &Folder) {
pub async fn send_logout(&self, user: &User, acting_device_uuid: Option<String>) {
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
UpdateType::LogOut,
acting_device_uuid.clone(),
);
self.send_update(&user.uuid, &data).await;
if CONFIG.push_enabled() {
push_logout(user, acting_device_uuid);
}
}
pub async fn send_folder_update(
&self,
ut: UpdateType,
folder: &Folder,
acting_device_uuid: &String,
conn: &mut DbConn,
) {
let data = create_update(
vec![
("Id".into(), folder.uuid.clone().into()),
@@ -177,32 +378,67 @@ impl WebSocketUsers {
("RevisionDate".into(), serialize_date(folder.updated_at)),
],
ut,
Some(acting_device_uuid.into()),
);
self.send_update(&folder.user_uuid, &data).await;
if CONFIG.push_enabled() {
push_folder_update(ut, folder, acting_device_uuid, conn).await;
}
}
pub async fn send_cipher_update(&self, ut: UpdateType, cipher: &Cipher, user_uuids: &[String]) {
let user_uuid = convert_option(cipher.user_uuid.clone());
pub async fn send_cipher_update(
&self,
ut: UpdateType,
cipher: &Cipher,
user_uuids: &[String],
acting_device_uuid: &String,
collection_uuids: Option<Vec<String>>,
conn: &mut DbConn,
) {
let org_uuid = convert_option(cipher.organization_uuid.clone());
// Depending on whether collections are provided, the following variables need different values.
// The user_uuid should be `null`, and the revision date should be set to now, else the clients won't sync the collection change.
let (user_uuid, collection_uuids, revision_date) = if let Some(collection_uuids) = collection_uuids {
(
Value::Nil,
Value::Array(collection_uuids.into_iter().map(|v| v.into()).collect::<Vec<rmpv::Value>>()),
serialize_date(Utc::now().naive_utc()),
)
} else {
(convert_option(cipher.user_uuid.clone()), Value::Nil, serialize_date(cipher.updated_at))
};
let data = create_update(
vec![
("Id".into(), cipher.uuid.clone().into()),
("UserId".into(), user_uuid),
("OrganizationId".into(), org_uuid),
("CollectionIds".into(), Value::Nil),
("RevisionDate".into(), serialize_date(cipher.updated_at)),
("CollectionIds".into(), collection_uuids),
("RevisionDate".into(), revision_date),
],
ut,
Some(acting_device_uuid.into()),
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
push_cipher_update(ut, cipher, acting_device_uuid, conn).await;
}
}
pub async fn send_send_update(&self, ut: UpdateType, send: &Send, user_uuids: &[String]) {
pub async fn send_send_update(
&self,
ut: UpdateType,
send: &DbSend,
user_uuids: &[String],
acting_device_uuid: &String,
conn: &mut DbConn,
) {
let user_uuid = convert_option(send.user_uuid.clone());
let data = create_update(
@@ -212,11 +448,78 @@ impl WebSocketUsers {
("RevisionDate".into(), serialize_date(send.revision_date)),
],
ut,
None,
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
push_send_update(ut, send, acting_device_uuid, conn).await;
}
}
pub async fn send_auth_request(
&self,
user_uuid: &String,
auth_request_uuid: &String,
acting_device_uuid: &String,
conn: &mut DbConn,
) {
let data = create_update(
vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequest,
Some(acting_device_uuid.to_string()),
);
self.send_update(user_uuid, &data).await;
if CONFIG.push_enabled() {
push_auth_request(user_uuid.to_string(), auth_request_uuid.to_string(), conn).await;
}
}
pub async fn send_auth_response(
&self,
user_uuid: &String,
auth_response_uuid: &str,
approving_device_uuid: String,
conn: &mut DbConn,
) {
let data = create_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequestResponse,
approving_device_uuid.clone().into(),
);
self.send_update(auth_response_uuid, &data).await;
if CONFIG.push_enabled() {
push_auth_response(user_uuid.to_string(), auth_response_uuid.to_string(), approving_device_uuid, conn)
.await;
}
}
}
#[derive(Clone)]
pub struct AnonymousWebSocketSubscriptions {
map: Arc<dashmap::DashMap<String, Sender<Message>>>,
}
impl AnonymousWebSocketSubscriptions {
async fn send_update(&self, token: &str, data: &[u8]) {
if let Some(sender) = self.map.get(token).map(|v| v.clone()) {
if let Err(e) = sender.send(Message::binary(data)).await {
error!("Error sending WS update {e}");
}
}
}
pub async fn send_auth_response(&self, user_uuid: &String, auth_response_uuid: &str) {
let data = create_anonymous_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequestResponse,
user_uuid.to_string(),
);
self.send_update(auth_response_uuid, &data).await;
}
}
@@ -228,14 +531,14 @@ impl WebSocketUsers {
"ReceiveMessage", // Target
[ // Arguments
{
"ContextId": "app_id",
"ContextId": acting_device_uuid || Nil,
"Type": ut as i32,
"Payload": {}
}
]
]
*/
fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType) -> Vec<u8> {
fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uuid: Option<String>) -> Vec<u8> {
use rmpv::Value as V;
let value = V::Array(vec![
@@ -244,7 +547,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType) -> Vec<u8> {
V::Nil,
"ReceiveMessage".into(),
V::Array(vec![V::Map(vec![
("ContextId".into(), "app_id".into()),
("ContextId".into(), acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| V::Nil)),
("Type".into(), (ut as i32).into()),
("Payload".into(), payload.into()),
])]),
@@ -253,24 +556,42 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType) -> Vec<u8> {
serialize(value)
}
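Serialized with rmpv, the result is a SignalR Type-1 (Invocation) frame whose layout matches the comment block above. As a sketch, reading the ContextId back out of the unserialized rmpv value (index 4 is the single-element Arguments array; the helper name is illustrative):

```rust
use rmpv::Value;

// Pull the ContextId (the acting device uuid, if any) out of a frame
// built like create_update's `value` above.
fn context_id(frame: &Value) -> Option<String> {
    let args = frame.as_array()?.get(4)?.as_array()?;
    let map = args.first()?.as_map()?;
    map.iter()
        .find(|(k, _)| k.as_str() == Some("ContextId"))
        .and_then(|(_, v)| v.as_str().map(str::to_owned))
}
```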
fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id: String) -> Vec<u8> {
use rmpv::Value as V;
let value = V::Array(vec![
1.into(),
V::Map(vec![]),
V::Nil,
"AuthRequestResponseRecieved".into(),
V::Array(vec![V::Map(vec![
("Type".into(), (ut as i32).into()),
("Payload".into(), payload.into()),
("UserId".into(), user_id.into()),
])]),
]);
serialize(value)
}
fn create_ping() -> Vec<u8> {
serialize(Value::Array(vec![6.into()]))
}
#[allow(dead_code)]
#[derive(Eq, PartialEq)]
#[derive(Copy, Clone, Eq, PartialEq)]
pub enum UpdateType {
CipherUpdate = 0,
CipherCreate = 1,
LoginDelete = 2,
FolderDelete = 3,
Ciphers = 4,
SyncCipherUpdate = 0,
SyncCipherCreate = 1,
SyncLoginDelete = 2,
SyncFolderDelete = 3,
SyncCiphers = 4,
Vault = 5,
OrgKeys = 6,
FolderCreate = 7,
FolderUpdate = 8,
CipherDelete = 9,
SyncVault = 5,
SyncOrgKeys = 6,
SyncFolderCreate = 7,
SyncFolderUpdate = 8,
SyncCipherDelete = 9,
SyncSettings = 10,
LogOut = 11,
@@ -279,18 +600,19 @@ pub enum UpdateType {
SyncSendUpdate = 13,
SyncSendDelete = 14,
AuthRequest = 15,
AuthRequestResponse = 16,
None = 100,
}
pub type Notify<'a> = &'a rocket::State<WebSocketUsers>;
pub fn start_notification_server() -> WebSocketUsers {
let users = WebSocketUsers {
map: Arc::new(dashmap::DashMap::new()),
};
pub type Notify<'a> = &'a rocket::State<Arc<WebSocketUsers>>;
pub type AnonymousNotify<'a> = &'a rocket::State<Arc<AnonymousWebSocketSubscriptions>>;
pub fn start_notification_server() -> Arc<WebSocketUsers> {
let users = Arc::clone(&WS_USERS);
if CONFIG.websocket_enabled() {
let users2 = users.clone();
let users2 = Arc::<WebSocketUsers>::clone(&users);
tokio::spawn(async move {
let addr = (CONFIG.websocket_address(), CONFIG.websocket_port());
info!("Starting WebSockets server on {}:{}", addr.0, addr.1);
@@ -302,7 +624,7 @@ pub fn start_notification_server() -> WebSocketUsers {
loop {
tokio::select! {
Ok((stream, addr)) = listener.accept() => {
tokio::spawn(handle_connection(stream, users2.clone(), addr));
tokio::spawn(handle_connection(stream, Arc::<WebSocketUsers>::clone(&users2), addr));
}
_ = &mut shutdown_rx => {
@@ -318,7 +640,7 @@ pub fn start_notification_server() -> WebSocketUsers {
users
}
async fn handle_connection(stream: TcpStream, users: WebSocketUsers, addr: SocketAddr) -> Result<(), Error> {
async fn handle_connection(stream: TcpStream, users: Arc<WebSocketUsers>, addr: SocketAddr) -> Result<(), Error> {
let mut user_uuid: Option<String> = None;
info!("Accepting WS connection from {addr}");
@@ -338,30 +660,30 @@ async fn handle_connection(stream: TcpStream, users: WebSocketUsers, addr: Socke
let user_uuid = user_uuid.expect("User UUID should be set after the handshake");
let (mut rx, guard) = {
// Add a channel to send messages to this client to the map
let entry_uuid = uuid::Uuid::new_v4();
let (tx, mut rx) = tokio::sync::mpsc::channel(100);
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
users.map.entry(user_uuid.clone()).or_default().push((entry_uuid, tx));
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSEntryMapGuard::new(users, user_uuid, entry_uuid, addr.ip()))
};
let _guard = guard;
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
tokio::select! {
res = stream.next() => {
match res {
Some(Ok(message)) => {
match message {
// Respond to any pings
if let Message::Ping(ping) = message {
if stream.send(Message::Pong(ping)).await.is_err() {
break;
}
continue;
} else if let Message::Pong(_) = message {
/* Ignored */
continue;
}
Message::Ping(ping) => stream.send(Message::Pong(ping)).await?,
Message::Pong(_) => {/* Ignored */},
// We should receive an initial message with the protocol and version, and we will reply to it
if let Message::Text(ref message) = message {
Message::Text(ref message) => {
let msg = message.strip_suffix(RECORD_SEPARATOR as char).unwrap_or(message);
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
@@ -369,10 +691,8 @@ async fn handle_connection(stream: TcpStream, users: WebSocketUsers, addr: Socke
continue;
}
}
// Just echo anything else the client sends
if stream.send(message).await.is_err() {
break;
_ => stream.send(message).await?,
}
}
_ => break,
@@ -381,27 +701,15 @@ async fn handle_connection(stream: TcpStream, users: WebSocketUsers, addr: Socke
res = rx.recv() => {
match res {
Some(res) => {
if stream.send(res).await.is_err() {
break;
}
},
Some(res) => stream.send(res).await?,
None => break,
}
}
_= interval.tick() => {
if stream.send(Message::Ping(create_ping())).await.is_err() {
break;
}
}
_ = interval.tick() => stream.send(Message::Ping(create_ping())).await?
}
}
info!("Closing WS connection from {addr}");
// Delete from map
users.map.entry(user_uuid).or_default().retain(|(uuid, _)| uuid != &entry_uuid);
Ok(())
}

src/api/push.rs (new file, 294 lines)

@@ -0,0 +1,294 @@
use reqwest::header::{ACCEPT, AUTHORIZATION, CONTENT_TYPE};
use serde_json::Value;
use tokio::sync::RwLock;
use crate::{
api::{ApiResult, EmptyResult, UpdateType},
db::models::{Cipher, Device, Folder, Send, User},
util::get_reqwest_client,
CONFIG,
};
use once_cell::sync::Lazy;
use std::time::{Duration, Instant};
#[derive(Deserialize)]
struct AuthPushToken {
access_token: String,
expires_in: i32,
}
#[derive(Debug)]
struct LocalAuthPushToken {
access_token: String,
valid_until: Instant,
}
async fn get_auth_push_token() -> ApiResult<String> {
static PUSH_TOKEN: Lazy<RwLock<LocalAuthPushToken>> = Lazy::new(|| {
RwLock::new(LocalAuthPushToken {
access_token: String::new(),
valid_until: Instant::now(),
})
});
let push_token = PUSH_TOKEN.read().await;
if push_token.valid_until.saturating_duration_since(Instant::now()).as_secs() > 0 {
debug!("Auth Push token still valid, no need for a new one");
return Ok(push_token.access_token.clone());
}
drop(push_token); // Drop the read lock now
let installation_id = CONFIG.push_installation_id();
let client_id = format!("installation.{installation_id}");
let client_secret = CONFIG.push_installation_key();
let params = [
("grant_type", "client_credentials"),
("scope", "api.push"),
("client_id", &client_id),
("client_secret", &client_secret),
];
let res = match get_reqwest_client().post("https://identity.bitwarden.com/connect/token").form(&params).send().await
{
Ok(r) => r,
Err(e) => err!(format!("Error getting push token from bitwarden server: {e}")),
};
let json_pushtoken = match res.json::<AuthPushToken>().await {
Ok(r) => r,
Err(e) => err!(format!("Unexpected push token received from bitwarden server: {e}")),
};
let mut push_token = PUSH_TOKEN.write().await;
push_token.valid_until = Instant::now()
.checked_add(Duration::new((json_pushtoken.expires_in / 2) as u64, 0)) // Token valid for half the specified time
.unwrap();
push_token.access_token = json_pushtoken.access_token;
debug!("Token still valid for {}", push_token.valid_until.saturating_duration_since(Instant::now()).as_secs());
Ok(push_token.access_token.clone())
}
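The refresh threshold is half of the lifetime reported by the identity server, so callers should never observe a token close to expiry. A minimal sketch of that arithmetic, assuming the server reports an `expires_in` of 3600 seconds:
use std::time::{Duration, Instant};

let expires_in: i32 = 3600; // assumed example value from AuthPushToken
// Cached as valid for half the reported lifetime, mirroring the code above:
let valid_until = Instant::now() + Duration::new((expires_in / 2) as u64, 0);
assert!(valid_until.saturating_duration_since(Instant::now()).as_secs() <= 1800);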
pub async fn register_push_device(user_uuid: String, device: Device) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let auth_push_token = get_auth_push_token().await?;
// Needed to register a device for push with Bitwarden:
let data = json!({
"userId": user_uuid,
"deviceId": device.push_uuid,
"identifier": device.uuid,
"type": device.atype,
"pushToken": device.push_token
});
let auth_header = format!("Bearer {}", &auth_push_token);
get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/register")
.header(CONTENT_TYPE, "application/json")
.header(ACCEPT, "application/json")
.header(AUTHORIZATION, auth_header)
.json(&data)
.send()
.await?
.error_for_status()?;
Ok(())
}
pub async fn unregister_push_device(uuid: String) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let auth_push_token = get_auth_push_token().await?;
let auth_header = format!("Bearer {}", &auth_push_token);
match get_reqwest_client()
.delete(CONFIG.push_relay_uri() + "/push/" + &uuid)
.header(AUTHORIZATION, auth_header)
.send()
.await
{
Ok(r) => r,
Err(e) => err!(format!("An error occured during device unregistration: {e}")),
};
Ok(())
}
pub async fn push_cipher_update(
ut: UpdateType,
cipher: &Cipher,
acting_device_uuid: &String,
conn: &mut crate::db::DbConn,
) {
// We shouldn't send a push notification on cipher update if the cipher belongs to an organization; this isn't implemented in the upstream server either.
if cipher.organization_uuid.is_some() {
return;
};
let user_uuid = match &cipher.user_uuid {
Some(c) => c,
None => {
debug!("Cipher has no uuid");
return;
}
};
if Device::check_user_has_push_device(user_uuid, conn).await {
send_to_push_relay(json!({
"userId": user_uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": cipher.uuid,
"userId": cipher.user_uuid,
"organizationId": (),
"revisionDate": cipher.updated_at
}
}))
.await;
}
}
pub fn push_logout(user: &User, acting_device_uuid: Option<String>) {
let acting_device_uuid: Value = acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| Value::Null);
tokio::task::spawn(send_to_push_relay(json!({
"userId": user.uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"type": UpdateType::LogOut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
}
})));
}
pub fn push_user_update(ut: UpdateType, user: &User) {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user.uuid,
"organizationId": (),
"deviceId": (),
"identifier": (),
"type": ut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
}
})));
}
pub async fn push_folder_update(
ut: UpdateType,
folder: &Folder,
acting_device_uuid: &String,
conn: &mut crate::db::DbConn,
) {
if Device::check_user_has_push_device(&folder.user_uuid, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": folder.user_uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": folder.uuid,
"userId": folder.user_uuid,
"revisionDate": folder.updated_at
}
})));
}
}
pub async fn push_send_update(ut: UpdateType, send: &Send, acting_device_uuid: &String, conn: &mut crate::db::DbConn) {
if let Some(s) = &send.user_uuid {
if Device::check_user_has_push_device(s, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": send.user_uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"type": ut as i32,
"payload": {
"id": send.uuid,
"userId": send.user_uuid,
"revisionDate": send.revision_date
}
})));
}
}
}
async fn send_to_push_relay(notification_data: Value) {
if !CONFIG.push_enabled() {
return;
}
let auth_push_token = match get_auth_push_token().await {
Ok(s) => s,
Err(e) => {
debug!("Could not get the auth push token: {}", e);
return;
}
};
let auth_header = format!("Bearer {}", &auth_push_token);
if let Err(e) = get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/send")
.header(ACCEPT, "application/json")
.header(CONTENT_TYPE, "application/json")
.header(AUTHORIZATION, &auth_header)
.json(&notification_data)
.send()
.await
{
error!("An error occured while sending a send update to the push relay: {}", e);
};
}
pub async fn push_auth_request(user_uuid: String, auth_request_uuid: String, conn: &mut crate::db::DbConn) {
if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user_uuid,
"organizationId": (),
"deviceId": null,
"identifier": null,
"type": UpdateType::AuthRequest as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
}
})));
}
}
pub async fn push_auth_response(
user_uuid: String,
auth_request_uuid: String,
approving_device_uuid: String,
conn: &mut crate::db::DbConn,
) {
if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user_uuid,
"organizationId": (),
"deviceId": approving_device_uuid,
"identifier": approving_device_uuid,
"type": UpdateType::AuthRequestResponse as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
}
})));
}
}
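Both delivery paths carry the same request/user id pair, so an approval handler would typically fan out to web sockets and mobile push together; a hypothetical sketch (`nt`, `user`, `auth_request`, and `device` are assumed bindings):
// Sketch: notify connected web sockets, then any registered mobile devices.
nt.send_auth_response(&user.uuid, &auth_request.uuid).await;
push_auth_response(user.uuid.clone(), auth_request.uuid.clone(), device.uuid.clone(), &mut conn).await;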

src/api/web.rs

@@ -4,7 +4,8 @@ use rocket::{fs::NamedFile, http::ContentType, response::content::RawHtml as Htm
use serde_json::Value;
use crate::{
api::{core::now, ApiResult},
api::{core::now, ApiResult, EmptyResult},
auth::decode_file_download,
error::Error,
util::{Cached, SafeString},
CONFIG,
@@ -13,11 +14,17 @@ use crate::{
pub fn routes() -> Vec<Route> {
// If adding more routes here, consider also adding them to
// crate::utils::LOGGED_ROUTES to make sure they appear in the log
let mut routes = routes![attachments, alive, alive_head, static_files];
if CONFIG.web_vault_enabled() {
routes![web_index, app_id, web_files, attachments, alive, static_files]
} else {
routes![attachments, alive, static_files]
routes.append(&mut routes![web_index, web_index_head, app_id, web_files]);
}
#[cfg(debug_assertions)]
if CONFIG.reload_templates() {
routes.append(&mut routes![_static_files_dev]);
}
routes
}
pub fn catchers() -> Vec<Catcher> {
@@ -43,6 +50,17 @@ async fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).await.ok(), false)
}
#[head("/")]
fn web_index_head() -> EmptyResult {
// Add an explicit HEAD route to prevent uptime monitoring services from
// generating "No matching routes for HEAD /" error messages.
//
// Rocket automatically implements a HEAD route when there's a matching GET
// route, but relying on this behavior also means a spurious error gets
// logged due to <https://github.com/SergioBenitez/Rocket/issues/1098>.
Ok(())
}
#[get("/app-id.json")]
fn app_id() -> Cached<(ContentType, Json<Value>)> {
let content_type = ContentType::new("application", "fido.trusted-apps+json");
@@ -80,8 +98,13 @@ async fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).await.ok(), true)
}
#[get("/attachments/<uuid>/<file_id>")]
async fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
#[get("/attachments/<uuid>/<file_id>?<token>")]
async fn attachments(uuid: SafeString, file_id: SafeString, token: String) -> Option<NamedFile> {
let Ok(claims) = decode_file_download(&token) else { return None };
if claims.sub != *uuid || claims.file_id != *file_id {
return None;
}
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).await.ok()
}
@@ -92,9 +115,39 @@ fn alive(_conn: DbConn) -> Json<String> {
now()
}
#[get("/vw_static/<filename>")]
pub fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Error> {
match filename.as_ref() {
#[head("/alive")]
fn alive_head(_conn: DbConn) -> EmptyResult {
// Avoid logging spurious "No matching routes for HEAD /alive" errors
// due to <https://github.com/SergioBenitez/Rocket/issues/1098>.
Ok(())
}
// This endpoint/function is used during development only.
// It makes developing the admin interface easier by always loading the files from disk instead of from the embedded byte slices.
// It is only active in debug builds, and only when `RELOAD_TEMPLATES` is set to `true`.
// NOTE: Do not forget to add any new files added to the `static_files` function below!
#[cfg(debug_assertions)]
#[get("/vw_static/<filename>", rank = 1)]
pub async fn _static_files_dev(filename: PathBuf) -> Option<NamedFile> {
warn!("LOADING STATIC FILES FROM DISK");
let file = filename.to_str().unwrap_or_default();
let ext = filename.extension().unwrap_or_default();
let path = if ext == "png" || ext == "svg" {
tokio::fs::canonicalize(Path::new(file!()).parent().unwrap().join("../static/images/").join(file)).await
} else {
tokio::fs::canonicalize(Path::new(file!()).parent().unwrap().join("../static/scripts/").join(file)).await
};
if let Ok(path) = path {
return NamedFile::open(path).await.ok();
};
None
}
#[get("/vw_static/<filename>", rank = 2)]
pub fn static_files(filename: &str) -> Result<(ContentType, &'static [u8]), Error> {
match filename {
"404.png" => Ok((ContentType::PNG, include_bytes!("../static/images/404.png"))),
"mail-github.png" => Ok((ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok((ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
@@ -102,14 +155,25 @@ pub fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Er
"hibp.png" => Ok((ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"vaultwarden-icon.png" => Ok((ContentType::PNG, include_bytes!("../static/images/vaultwarden-icon.png"))),
"vaultwarden-favicon.png" => Ok((ContentType::PNG, include_bytes!("../static/images/vaultwarden-favicon.png"))),
"404.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/404.css"))),
"admin.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/admin.css"))),
"admin.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/admin.js"))),
"admin_settings.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/admin_settings.js"))),
"admin_users.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/admin_users.js"))),
"admin_organizations.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/admin_organizations.js")))
}
"admin_diagnostics.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/admin_diagnostics.js")))
}
"bootstrap.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js"))),
"bootstrap.bundle.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap.bundle.js"))),
"jdenticon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon.js"))),
"datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.6.2.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.2.slim.js")))
"jquery-3.7.0.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.7.0.slim.js")))
}
_ => err!(format!("Static file not found: {}", filename)),
_ => err!(format!("Static file not found: {filename}")),
}
}

src/auth.rs

@@ -23,18 +23,17 @@ static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFI
static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.domain_origin()));
static JWT_ORG_API_KEY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|api.organization", CONFIG.domain_origin()));
static JWT_FILE_DOWNLOAD_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|file_download", CONFIG.domain_origin()));
static PRIVATE_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
std::fs::read(CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key.\n{}", e))
});
static PRIVATE_RSA_KEY: Lazy<EncodingKey> = Lazy::new(|| {
EncodingKey::from_rsa_pem(&PRIVATE_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{}", e))
});
static PUBLIC_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
std::fs::read(CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key.\n{}", e))
let key =
std::fs::read(CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key. \n{e}"));
EncodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{e}"))
});
static PUBLIC_RSA_KEY: Lazy<DecodingKey> = Lazy::new(|| {
DecodingKey::from_rsa_pem(&PUBLIC_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{}", e))
let key = std::fs::read(CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key. \n{e}"));
DecodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{e}"))
});
pub fn load_keys() {
@@ -45,7 +44,7 @@ pub fn load_keys() {
pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
match jsonwebtoken::encode(&JWT_HEADER, claims, &PRIVATE_RSA_KEY) {
Ok(token) => token,
Err(e) => panic!("Error encoding jwt {}", e),
Err(e) => panic!("Error encoding jwt {e}"),
}
}
@@ -96,6 +95,14 @@ pub fn decode_send(token: &str) -> Result<BasicJwtClaims, Error> {
decode_jwt(token, JWT_SEND_ISSUER.to_string())
}
pub fn decode_api_org(token: &str) -> Result<OrgApiKeyLoginJwtClaims, Error> {
decode_jwt(token, JWT_ORG_API_KEY_ISSUER.to_string())
}
pub fn decode_file_download(token: &str) -> Result<FileDownloadClaims, Error> {
decode_jwt(token, JWT_FILE_DOWNLOAD_ISSUER.to_string())
}
#[derive(Debug, Serialize, Deserialize)]
pub struct LoginJwtClaims {
// Not before
@@ -203,6 +210,60 @@ pub fn generate_emergency_access_invite_claims(
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct OrgApiKeyLoginJwtClaims {
// Not before
pub nbf: i64,
// Expiration time
pub exp: i64,
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub client_id: String,
pub client_sub: String,
pub scope: Vec<String>,
}
pub fn generate_organization_api_key_login_claims(uuid: String, org_id: String) -> OrgApiKeyLoginJwtClaims {
let time_now = Utc::now().naive_utc();
OrgApiKeyLoginJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(1)).timestamp(),
iss: JWT_ORG_API_KEY_ISSUER.to_string(),
sub: uuid,
client_id: format!("organization.{org_id}"),
client_sub: org_id,
scope: vec!["api.organization".into()],
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct FileDownloadClaims {
// Not before
pub nbf: i64,
// Expiration time
pub exp: i64,
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub file_id: String,
}
pub fn generate_file_download_claims(uuid: String, file_id: String) -> FileDownloadClaims {
let time_now = Utc::now().naive_utc();
FileDownloadClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(5)).timestamp(),
iss: JWT_FILE_DOWNLOAD_ISSUER.to_string(),
sub: uuid,
file_id,
}
}
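Together with the token-guarded attachments route above, this enables short-lived download URLs. A hypothetical sketch of issuing one (`host`, `cipher_uuid`, and `attachment_id` are placeholders, not names from this diff):
// Sketch: mint a 5-minute token and embed it in the attachment URL.
let claims = generate_file_download_claims(cipher_uuid.clone(), attachment_id.clone());
let token = encode_jwt(&claims);
let url = format!("{host}/attachments/{cipher_uuid}/{attachment_id}?token={token}");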
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicJwtClaims {
// Not before
@@ -241,7 +302,7 @@ pub fn generate_admin_claims() -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(20)).timestamp(),
exp: (time_now + Duration::minutes(CONFIG.admin_session_lifetime())).timestamp(),
iss: JWT_ADMIN_ISSUER.to_string(),
sub: "admin_panel".to_string(),
}
@@ -253,7 +314,7 @@ pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(2)).timestamp(),
iss: JWT_SEND_ISSUER.to_string(),
sub: format!("{}/{}", send_id, file_id),
sub: format!("{send_id}/{file_id}"),
}
}
@@ -306,7 +367,7 @@ impl<'r> FromRequest<'r> for Host {
""
};
format!("{}://{}", protocol, host)
format!("{protocol}://{host}")
};
Outcome::Success(Host {
@@ -318,6 +379,7 @@ impl<'r> FromRequest<'r> for Host {
pub struct ClientHeaders {
pub host: String,
pub device_type: i32,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -326,6 +388,10 @@ impl<'r> FromRequest<'r> for ClientHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let host = try_outcome!(Host::from_request(request).await).host;
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip,
_ => err_handler!("Error getting Client IP"),
};
// When unknown or unable to parse, return 14, which is 'Unknown Browser'
let device_type: i32 =
request.headers().get_one("device-type").map(|d| d.parse().unwrap_or(14)).unwrap_or_else(|| 14);
@@ -333,6 +399,7 @@ impl<'r> FromRequest<'r> for ClientHeaders {
Outcome::Success(ClientHeaders {
host,
device_type,
ip,
})
}
}
@@ -341,6 +408,7 @@ pub struct Headers {
pub host: String,
pub device: Device,
pub user: User,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -351,6 +419,10 @@ impl<'r> FromRequest<'r> for Headers {
let headers = request.headers();
let host = try_outcome!(Host::from_request(request).await).host;
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip,
_ => err_handler!("Error getting Client IP"),
};
// Get access_token
let access_token: &str = match headers.get_one("Authorization") {
@@ -420,6 +492,7 @@ impl<'r> FromRequest<'r> for Headers {
host,
device,
user,
ip,
})
}
}
@@ -431,25 +504,7 @@ pub struct OrgHeaders {
pub org_user_type: UserOrgType,
pub org_user: UserOrganization,
pub org_id: String,
}
// org_id is usually the second path param ("/organizations/<org_id>"),
// but there are cases where it is a query value.
// First check the path, if this is not a valid uuid, try the query values.
fn get_org_id(request: &Request<'_>) -> Option<String> {
if let Some(Ok(org_id)) = request.param::<String>(1) {
if uuid::Uuid::parse_str(&org_id).is_ok() {
return Some(org_id);
}
}
if let Some(Ok(org_id)) = request.query_value::<String>("organizationId") {
if uuid::Uuid::parse_str(&org_id).is_ok() {
return Some(org_id);
}
}
None
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -458,7 +513,28 @@ impl<'r> FromRequest<'r> for OrgHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(Headers::from_request(request).await);
match get_org_id(request) {
// org_id is usually the second path param ("/organizations/<org_id>"),
// but there are cases where it is a query value.
// First check the path; if that is not a valid UUID, try the query values.
let url_org_id: Option<&str> = {
let mut url_org_id = None;
if let Some(Ok(org_id)) = request.param::<&str>(1) {
if uuid::Uuid::parse_str(org_id).is_ok() {
url_org_id = Some(org_id);
}
}
if let Some(Ok(org_id)) = request.query_value::<&str>("organizationId") {
if uuid::Uuid::parse_str(org_id).is_ok() {
url_org_id = Some(org_id);
}
}
url_org_id
};
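// Illustrative requests covered by the two checks above (example routes,
// not taken from this diff):
//   PUT /organizations/<org_id>/collections   -> org_id from the path
//   GET /ciphers?organizationId=<org_id>      -> org_id from the query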
match url_org_id {
Some(org_id) => {
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
@@ -466,7 +542,7 @@ impl<'r> FromRequest<'r> for OrgHeaders {
};
let user = headers.user;
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &mut conn).await {
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, org_id, &mut conn).await {
Some(user) => {
if user.status == UserOrgStatus::Confirmed as i32 {
user
@@ -490,7 +566,8 @@ impl<'r> FromRequest<'r> for OrgHeaders {
}
},
org_user,
org_id,
org_id: String::from(org_id),
ip: headers.ip,
})
}
_ => err_handler!("Error getting the organization id"),
@@ -504,6 +581,7 @@ pub struct AdminHeaders {
pub user: User,
pub org_user_type: UserOrgType,
pub client_version: Option<String>,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -520,6 +598,7 @@ impl<'r> FromRequest<'r> for AdminHeaders {
user: headers.user,
org_user_type: headers.org_user_type,
client_version,
ip: headers.ip,
})
} else {
err_handler!("You need to be Admin or Owner to call this endpoint")
@@ -533,6 +612,7 @@ impl From<AdminHeaders> for Headers {
host: h.host,
device: h.device,
user: h.user,
ip: h.ip,
}
}
}
@@ -564,6 +644,7 @@ pub struct ManagerHeaders {
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -580,14 +661,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
_ => err_handler!("Error getting DB"),
};
if !headers.org_user.has_full_access()
&& !Collection::has_access_by_collection_and_user_uuid(
&col_id,
&headers.org_user.user_uuid,
&mut conn,
)
.await
{
if !can_access_collection(&headers.org_user, &col_id, &mut conn).await {
err_handler!("The current user isn't a manager for this collection")
}
}
@@ -599,6 +673,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
ip: headers.ip,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
@@ -612,6 +687,7 @@ impl From<ManagerHeaders> for Headers {
host: h.host,
device: h.device,
user: h.user,
ip: h.ip,
}
}
}
@@ -622,7 +698,9 @@ pub struct ManagerHeadersLoose {
pub host: String,
pub device: Device,
pub user: User,
pub org_user: UserOrganization,
pub org_user_type: UserOrgType,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -636,7 +714,9 @@ impl<'r> FromRequest<'r> for ManagerHeadersLoose {
host: headers.host,
device: headers.device,
user: headers.user,
org_user: headers.org_user,
org_user_type: headers.org_user_type,
ip: headers.ip,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
@@ -650,14 +730,45 @@ impl From<ManagerHeadersLoose> for Headers {
host: h.host,
device: h.device,
user: h.user,
ip: h.ip,
}
}
}
async fn can_access_collection(org_user: &UserOrganization, col_id: &str, conn: &mut DbConn) -> bool {
org_user.has_full_access()
|| Collection::has_access_by_collection_and_user_uuid(col_id, &org_user.user_uuid, conn).await
}
impl ManagerHeaders {
pub async fn from_loose(
h: ManagerHeadersLoose,
collections: &Vec<String>,
conn: &mut DbConn,
) -> Result<ManagerHeaders, Error> {
for col_id in collections {
if uuid::Uuid::parse_str(col_id).is_err() {
err!("Collection Id is malformed!");
}
if !can_access_collection(&h.org_user, col_id, conn).await {
err!("You don't have access to all collections!");
}
}
Ok(ManagerHeaders {
host: h.host,
device: h.device,
user: h.user,
org_user_type: h.org_user_type,
ip: h.ip,
})
}
}
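A hypothetical caller that receives the target collection ids in its request body could then upgrade the loose guard before touching them (the `loose_headers` and `data.CollectionIds` names are assumptions):
// Sketch: fail early unless the manager can access every listed collection.
let headers = ManagerHeaders::from_loose(loose_headers, &data.CollectionIds, &mut conn).await?;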
pub struct OwnerHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub ip: ClientIp,
}
#[rocket::async_trait]
@@ -671,6 +782,7 @@ impl<'r> FromRequest<'r> for OwnerHeaders {
host: headers.host,
device: headers.device,
user: headers.user,
ip: headers.ip,
})
} else {
err_handler!("You need to be Owner to call this endpoint")
@@ -713,3 +825,26 @@ impl<'r> FromRequest<'r> for ClientIp {
})
}
}
pub struct WsAccessTokenHeader {
pub access_token: Option<String>,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for WsAccessTokenHeader {
type Error = ();
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
// Get access_token
let access_token = match headers.get_one("Authorization") {
Some(a) => a.rsplit("Bearer ").next().map(String::from),
None => None,
};
Outcome::Success(Self {
access_token,
})
}
}
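On the websockets route this guard can back the existing query parameter, so third-party clients may use either transport; a minimal fallback sketch (the `query_token` binding is assumed):
// Sketch: the query parameter wins, the Authorization header is the fallback.
let access_token = query_token.or(ws_header.access_token);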

Some files were not shown because too many files have changed in this diff.