Compare commits


188 Commits

Author SHA1 Message Date
Daniel García
f94ac6ca61 Merge pull request #2044 from jjlin/emergency-access-cleanup
Emergency Access cleanup
2021-10-19 20:14:29 +02:00
Jeremy Lin
cee3fd5ba2 Emergency Access cleanup
This commit contains mostly superficial user-facing cleanup, to be followed up
with more extensive cleanup and fixes in the API implementation.
2021-10-19 02:22:44 -07:00
Daniel García
016fe2269e Update dependencies 2021-10-18 22:14:29 +02:00
Daniel García
03c0a5e405 Update web vault image to v2.23.0c 2021-10-18 22:06:35 +02:00
Daniel García
cbbed79036 Merge branch 'domdomegg-domdomegg/2fa-check-accepted' into main 2021-10-18 21:13:57 +02:00
Daniel García
4af81ec50e Merge branch 'domdomegg/2fa-check-accepted' of https://github.com/domdomegg/vaultwarden into domdomegg-domdomegg/2fa-check-accepted 2021-10-18 21:13:50 +02:00
Daniel García
a5ba67fef2 Merge branch 'BlackDex-alive-db-check' into main 2021-10-18 21:13:29 +02:00
Adam Jones
4cebe1fff4 cargo fmt 2021-10-09 15:42:06 +01:00
Adam Jones
a984dbbdf3 2FA org policy: do not enforce on invited (not accepted) users 2021-10-09 13:54:30 +01:00
BlackDex
881524bd54 Added DbConn to /alive healthcheck
During a small discussion on Matrix it seemed logical to have the /alive
endpoint also check whether the database connection still works.

The trigger was a certificate that expired while vaultwarden and the
database were still up and running, after which vaultwarden suddenly
couldn't connect anymore.

With this `DbConn` added to `/alive`, the check is more accurate: if
vaultwarden can't reach the database, it isn't alive.
2021-10-09 14:16:27 +02:00
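As a rough sketch of the idea (assuming a diesel 1.x SqliteConnection, with diesel from Cargo.toml; this is not the actual vaultwarden handler), the endpoint only reports healthy when a trivial query succeeds:

```rust
use diesel::connection::SimpleConnection;
use diesel::sqlite::SqliteConnection;

/// Sketch: /alive should only report healthy when the database answers.
fn is_alive(conn: &SqliteConnection) -> bool {
    // `SELECT 1` is cheap and fails fast if the connection is broken.
    conn.batch_execute("SELECT 1").is_ok()
}
```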
Daniel García
44da9e6ca7 Merge branch 'BlackDex-update-openssl-amd64-alpine' into main 2021-10-08 22:29:19 +02:00
Daniel García
4c0c8f7432 Merge branch 'update-openssl-amd64-alpine' of https://github.com/BlackDex/vaultwarden into BlackDex-update-openssl-amd64-alpine 2021-10-08 22:29:13 +02:00
Daniel García
f67854c59c Merge branch 'BlackDex-mail-errors' into main 2021-10-08 22:28:54 +02:00
Daniel García
a1c1b9ab3b Merge branch 'mail-errors' of https://github.com/BlackDex/vaultwarden into BlackDex-mail-errors 2021-10-08 22:28:46 +02:00
Daniel García
395979e834 Merge branch 'domdomegg-domdomegg/single-organization-policy' into main 2021-10-08 22:27:31 +02:00
BlackDex
fce6cb5865 Update OpenSSL via an updated clux build image.
The Let's Encrypt DST root certificate recently expired, and older
versions of OpenSSL such as v1.0.x have trouble validating it.

clux has since updated his build image to support OpenSSL v1.1.1[a-z],
which solves the issues with those certificates.

This issue was discussed on Matrix.
2021-10-08 16:46:29 +02:00
BlackDex
338756550a Fix error reporting in admin and some small fixes
- Fixed a JavaScript bug that caused no message to be shown to the
user when the server sent an error.
- Changed mail error handling to produce better error messages.
- Changed user/org actions from `<a>` elements to buttons; this should
prevent strange behavior if JavaScript fails and the page reloads.
- Added Alpine and Debian info for the running docker image.

During the mail error testing I encountered a bug which caused lettre to
panic. The panic only happens on debug builds, not release builds, so
nothing needs updating on that side, and the bug is already fixed
upstream. See https://github.com/lettre/lettre/issues/678 and https://github.com/lettre/lettre/pull/679

Resolves #2021
Could also fix the issue reported in #2022, or at least there will no
longer be a hash `#` in the URL.
2021-10-08 00:01:24 +02:00
Adam Jones
d014eede9a feature: Support single organization policy
This adds back-end support for the [single organization policy](https://bitwarden.com/help/article/policies/#single-organization).
2021-10-02 19:30:19 +02:00
Daniel García
9930a0d752 Merge pull request #2001 from BlackDex/issue-1998
Revert Debian images back to Buster.
2021-09-27 19:59:59 +02:00
BlackDex
9928a5404b Revert Debian images back to Buster.
This fixes #1998: after some checking, it appears Bullseye has an issue
with the glibc sleep call, which returns a SIGILL.

The glibc on Buster doesn't seem to have this issue, so revert back for
now until a fix has been released.
2021-09-27 17:35:49 +02:00
Daniel García
a6e0ddcdf1 Merge branch 'domdomegg-domdomegg/support-no-data-org-policies' into main 2021-09-26 23:21:30 +02:00
Daniel García
acab70ed89 Merge branch 'domdomegg/support-no-data-org-policies' of https://github.com/domdomegg/vaultwarden into domdomegg-domdomegg/support-no-data-org-policies 2021-09-26 23:21:24 +02:00
Daniel García
c0d149060f Merge branch 'BlackDex-icon-download-update' into main 2021-09-26 23:20:55 +02:00
Daniel García
344f00d9c9 Merge branch 'icon-download-update' of https://github.com/BlackDex/vaultwarden into BlackDex-icon-download-update 2021-09-26 23:20:44 +02:00
Daniel García
b26afb970a Merge branch 'Makishima-patch-1' into main 2021-09-26 23:19:36 +02:00
Nikolay
34ed5ce4b3 Update README.md
Fixed 'Bitwarden clients' link, it should lead to https://bitwarden.com/download/ and not to https://bitwarden.com/#download/
2021-09-25 13:10:06 +03:00
BlackDex
9375d5b8c2 Updated icon downloading
- Unicode websites could break (www.post.japanpost.jp for example):
  the regex would fail because it was missing the unicode-perl feature.
- Made logging of icon downloads less verbose
- Removed duplicate info/error messages
- Added an err_silent! macro to support the less verbose error/info messages.
2021-09-24 18:27:52 +02:00
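A minimal illustration of the unicode-perl point (a hypothetical snippet, not the patch itself): when the `regex` crate is built with `default-features = false` and only `std`/`perf`, Unicode-aware classes like `\w` fail to compile, which is what broke parsing of Unicode pages.

```rust
use regex::Regex;

fn main() {
    // Without the "unicode-perl" feature this returns a compile error;
    // with it, the pattern matches Unicode word characters.
    match Regex::new(r"\w+") {
        Ok(re) => println!("match: {:?}", re.find("日本郵便")),
        Err(e) => println!("regex failed to compile: {}", e),
    }
}
```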
Adam Jones
e3678b4b56 fix: Support no-data enterprise policies
Boolean-toggle enterprise policies (like 'Two-Step Login' and 'Personal Ownership') don't provide a data attribute in the new version of the web client. This updates the backend to expect these to be optional.

Web change introduced in https://github.com/bitwarden/web/pull/1147 which added 2cbe023a38/src/app/organizations/policies/base-policy.component.ts (L48-L50)
2021-09-24 17:20:44 +02:00
Daniel García
b4c95fb4ac Hide some warnings for unused struct fields 2021-09-22 21:39:31 +02:00
Daniel García
0bb33e04bb Update dependencies and set cargo resolver to version 2 ahead of the 2021 edition 2021-09-22 20:26:48 +02:00
Daniel García
4d33e24099 Update web vault to 2.23.0 2021-09-22 20:26:17 +02:00
Daniel García
2cdce04662 Merge branch 'thelittlefireman-emergency_feature' into main 2021-09-19 23:54:28 +02:00
Daniel García
756d108f6a Merge branch 'emergency_feature' of https://github.com/thelittlefireman/bitwarden_rs into thelittlefireman-emergency_feature 2021-09-19 23:54:19 +02:00
thelittlefireman
ca20b3d80c [PATCH] Some fixes to the Emergency Access PR
- Changed the date of the migration folders to today's date.
- Removed a lot of is_email_domain_allowed checks.
  This check only needs to be done during the invite itself; otherwise
everything else will fail, even for a user whose account was created via
the /admin interface, which bypasses that specific check! The check was
also in the wrong place anyway: it would merely skip sending the e-mail,
yet still allow a disallowed domain to be used when e-mail was disabled,
whereas the invite-time check always works, even if sending e-mails is
disabled.
- Added an extra allowed route during a password/key-rotation change,
which updates/checks the public key afterwards.
- A small change to the order of some `Some` and `None` arms.
- Changed the new invite object to generate the UTC time only once, since
consecutive calls could differ by a second and we only need it once.

by black.dex@gmail.com

Signed-off-by: thelittlefireman <thelittlefireman@users.noreply.github.com>
2021-09-17 01:25:47 +02:00
thelittlefireman
4ab9362971 Add Emergency contact feature
Signed-off-by: thelittlefireman <thelittlefireman@users.noreply.github.com>
2021-09-17 01:25:44 +02:00
Daniel García
4e8828e41a Merge branch 'BlackDex-admin-interface' into main 2021-09-16 21:36:31 +02:00
Daniel García
f8d1cfad2a Merge branch 'admin-interface' of https://github.com/BlackDex/vaultwarden into BlackDex-admin-interface 2021-09-16 21:36:25 +02:00
BlackDex
b0a411b733 Update some JS Libraries and fix small issues
- Updated JS Libraries
- Downgraded bootstrap.css to v5.0.2 which works with Bootstrap-Native.
- Fixed issue with settings being able to open/collapse on some systems.
- Added .js and .css to the exclude list for the end-of-file-fixer pre-commit
2021-09-18 19:49:44 +02:00
Daniel García
81741647f3 Merge branch 'BlackDex-bulk-org-actions' into main 2021-09-16 21:36:15 +02:00
BlackDex
f36bd72a7f Add Organization bulk actions support
In the organization view's user management you can select multiple users
to re-invite, confirm or delete.

These actions were not working; this PR fixes that by adding support for
the corresponding endpoints. This makes it easier to confirm or delete
multiple users at once instead of doing so one by one.
2021-09-18 14:22:14 +02:00
Mathijs van Veluw
8c10de3edd Merge pull request #1978 from benarmstead/main
Update dependencies
2021-09-16 19:28:05 +02:00
Ben Armstead
0ab10a7c43 Merge pull request #1 from benarmstead/update
Add updates in cargo.toml too
2021-09-16 16:02:50 +01:00
Ben Armstead
a1a5e00ff5 Update dependencies in Cargo.lock 2021-09-16 15:59:24 +01:00
Ben Armstead
8af4b593fa Update dependencies in cargo.toml 2021-09-16 15:58:49 +01:00
Ben Armstead
9bef2c120c Update dependencies 2021-09-16 14:21:06 +01:00
Daniel García
f7d99c43b5 Merge pull request #1973 from BlackDex/optimize-release
Optimize release workflow.
2021-09-13 20:39:48 +02:00
BlackDex
ca0fd7a31b Optimize release workflow.
- Split Debian and Alpine into separate build-matrix entries.
  This builds both the Debian and Alpine based images at the same time.
- Make use of Docker BuildKit, which also improves speed.
- Use BuildKit caching for Rust Cargo across docker images.
  This prevents downloading the same crates multiple times.
- Use GitHub Actions services to start a docker registry, since starting
it via the build script sometimes caused issues.
- Updated the Build workflow to use Ubuntu 20.04, which is closer to the
Bullseye Debian release regarding package versions.
2021-09-13 14:42:15 +02:00
Daniel García
9e1550af8e Merge branch 'BlackDex-issue-1963' into main 2021-09-09 20:30:38 +02:00
Daniel García
a99c9715f6 Merge branch 'issue-1963' of https://github.com/BlackDex/vaultwarden into BlackDex-issue-1963 2021-09-09 20:30:29 +02:00
Daniel García
1a888b5355 Merge branch 'jjlin-personal-ownership-imports' into main 2021-09-09 20:30:13 +02:00
BlackDex
10d5c7738a Fix issue when using uppercase chars in emails
When SMTP was disabled and new users were invited, either via the admin
interface or into an organization, using uppercase letters in the email
address prevented those users from registering, because the checks
performed were case-sensitive and never matched.

This PR fixes that issue by ensuring everything is lowercase.
Fixes #1963
2021-09-09 13:52:39 +02:00
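The shape of the fix, as a sketch (hypothetical helper names, not the actual code): normalize both sides to lowercase before comparing, so `User@Example.com` and `user@example.com` refer to the same account.

```rust
fn normalize_email(email: &str) -> String {
    email.trim().to_lowercase()
}

fn emails_match(invited: &str, registering: &str) -> bool {
    // Case-insensitive comparison; the old case-sensitive check never matched.
    normalize_email(invited) == normalize_email(registering)
}
```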
Jeremy Lin
80f23e6d78 Enforce Personal Ownership policy on imports
Upstream PR: https://github.com/bitwarden/server/pull/1565
2021-09-08 23:26:15 -07:00
Daniel García
d5ed2ce6df Merge branch 'jjlin-webauthn-origin' into main 2021-09-06 17:17:04 +02:00
Daniel García
5e649f0d0d Merge branch 'webauthn-origin' of https://github.com/jjlin/vaultwarden into jjlin-webauthn-origin 2021-09-06 17:16:56 +02:00
Daniel García
612c0e9478 Merge branch 'jjlin-bullseye' into main 2021-09-06 17:16:36 +02:00
Daniel García
0d2b3bfb99 Merge pull request #1945 from BlackDex/github-actions-release
Build Docker Hub images via Github Actions
2021-09-08 20:54:41 +02:00
Daniel García
c934838ace Merge branch 'bullseye' of https://github.com/jjlin/vaultwarden into jjlin-bullseye 2021-09-06 17:16:28 +02:00
Jeremy Lin
4350e9d241 Update Debian base images to bullseye 2021-09-04 11:46:15 -07:00
Jeremy Lin
0cdc0cb147 Fix incorrect WebAuthn origin
This mainly affects users running Vaultwarden under a subpath.

Refs:

* https://github.com/kanidm/webauthn-rs/blob/b2cbb34/src/core.rs#L941-L948
* https://github.com/kanidm/webauthn-rs/blob/b2cbb34/src/core.rs#L316
* https://w3c.github.io/webauthn/#dictionary-client-data
2021-08-29 15:53:25 -07:00
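The underlying rule, sketched with the `url` crate from Cargo.toml (a hypothetical helper, not the actual patch): a WebAuthn origin is only scheme://host[:port], so any subpath in the configured domain must be stripped.

```rust
use url::Url;

fn webauthn_origin(configured_domain: &str) -> Result<String, url::ParseError> {
    let u = Url::parse(configured_domain)?;
    // Url::origin() keeps scheme, host and any non-default port, dropping the path.
    Ok(u.origin().ascii_serialization())
}

// webauthn_origin("https://example.com/vault") -> "https://example.com"
```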
BlackDex
20535065d7 Build Docker Hub images via Github Actions
Since Docker Hub stopped Autobuild, we need to switch to something else.
This triggers image builds on GitHub Actions and pushes them to
Docker Hub.

You only need to add 3 secrets before you merge this PR to have it working directly.

- DOCKERHUB_USERNAME : The username of the account you are going to push the builds to
- DOCKERHUB_TOKEN : The token needed to login and push builds
- DOCKERHUB_REPO : The repo name in the following form `index.docker.io/<user>/<repo>`
  So for vaultwarden that would be `index.docker.io/vaultwarden/server`

Also some small modifications to the other workflows.
2021-08-28 17:29:13 +02:00
Daniel García
a23f4a704b Merge branch 'fabianthdev-fix/sends_notifications' into main 2021-08-22 22:17:00 +02:00
Daniel García
93f2f74767 Merge branch 'fix/sends_notifications' of https://github.com/fabianthdev/vaultwarden into fabianthdev-fix/sends_notifications 2021-08-22 22:16:50 +02:00
Daniel García
37ca202247 Merge branch 'mrckndt-fix-timezone-alpine-container' into main 2021-08-22 22:14:46 +02:00
Daniel García
37525b1e7e Merge branch 'fix-timezone-alpine-container' of https://github.com/mrckndt/vaultwarden into mrckndt-fix-timezone-alpine-container 2021-08-22 22:14:38 +02:00
Daniel García
d594b5a266 Merge branch 'jjlin-fix-attachment-sharing' into main 2021-08-22 22:14:14 +02:00
Daniel García
41add45e67 Merge branch 'fix-attachment-sharing' of https://github.com/jjlin/vaultwarden into jjlin-fix-attachment-sharing 2021-08-22 22:14:07 +02:00
Daniel García
08b168a0a1 Merge branch 'BlackDex-fix-1878' into main 2021-08-22 22:12:59 +02:00
Daniel García
978ef2bc8b Merge branch 'fix-1878' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-1878 2021-08-22 22:12:52 +02:00
BlackDex
881d1f4334 Fix wrong display of MFA email.
There was some faulty logic determining which email address is shown as
the one configured for email MFA. This is now fixed.

Resolves #1878
2021-08-19 09:25:34 +02:00
Jeremy Lin
56b4f46d7d Fix limitation on sharing ciphers with attachments
This check is several years old, so maybe there was a valid reason
for having it before, but it's not correct anymore.
2021-08-16 22:23:33 -07:00
Marco
f6bd8b3462 Adding tzdata to container
To be able to set a timezone inside a container with the env variable TZ,
the tzdata package is needed; otherwise only UTC will be available.
2021-08-06 13:39:33 +02:00
Fabian Thies
1f0f64d961 Sort the imports in notifications.rs alphabetically 2021-08-04 16:56:43 +02:00
Fabian Thies
42ba817a4c Fix errors that occurred in the nightly build 2021-08-04 13:25:41 +02:00
Fabian Thies
dd98fe860b Send create, update and delete notifications for Sends in the correct format.
Add endpoints to get all sends or a specific send by its uuid.
2021-08-03 17:39:38 +02:00
Daniel García
1fe9f101be Merge branch 'jjlin-fix-org-attachment-uploads' into main 2021-07-25 19:08:44 +02:00
Daniel García
c68fbb41d2 Merge branch 'fix-org-attachment-uploads' of https://github.com/jjlin/vaultwarden into jjlin-fix-org-attachment-uploads 2021-07-25 19:08:38 +02:00
Jeremy Lin
91e80657e4 Fix error with adding file attachment from org vault view 2021-08-18 20:54:36 -07:00
Daniel García
2db30f918e Merge branch 'BlackDex-fix-sync-desktop-client' into main 2021-07-25 19:07:59 +02:00
Daniel García
cfceac3909 Merge branch 'fix-sync-desktop-client' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-sync-desktop-client 2021-07-25 19:07:51 +02:00
BlackDex
58b046fd10 Fix syncing with Bitwarden Desktop v1.28.0
Syncing with the latest desktop client (v1.28.0) fails because it expects some JSON key/value pairs to be present.

This PR adds those key/value pairs.

Resolves #1924
2021-08-21 10:36:08 +02:00
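A sketch of what such a fix looks like in general (the key name below is hypothetical; the commit doesn't list the exact fields): ensure the sync response always carries the keys newer clients expect, even if only with a null value.

```rust
use serde_json::{json, Value};

fn ensure_expected_keys(mut sync_response: Value) -> Value {
    if let Some(obj) = sync_response.as_object_mut() {
        // Desktop v1.28.0 fails to parse the response when expected keys are missing.
        obj.entry("someExpectedKey").or_insert(json!(null));
    }
    sync_response
}
```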
Daniel García
227779256c Merge branch 'BlackDex-dep-update' into main 2021-07-25 19:07:04 +02:00
BlackDex
89b5f7c98d Dependency updates
Updated several dependencies and switched to a different TOTP library.

- Replaced oath with totp-lite.
  oath hasn't been updated in a long while, and some of its dependencies could no longer be updated.
  A leading 0 is now also validated, as the previous library returned an int instead of a str, which stripped any leading 0.
- Updated Rust to the current latest nightly (including the build image)
- Updated bootstrap css and js
- Updated hadolint to latest version
- Updated default rust image from v1.53 to v1.54
- Updated code for the new nightly build/clippy messages
2021-08-22 13:46:48 +02:00
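The leading-zero point, sketched against the `totp-lite` API named above (assuming the crate version from Cargo.toml): `totp_custom` returns a zero-padded `String`, so a code such as `042719` keeps its first digit, unlike an integer result.

```rust
use totp_lite::{totp_custom, Sha1, DEFAULT_STEP};

/// Sketch: a 6-digit TOTP code as a String, leading zeros preserved.
fn current_code(secret: &[u8], unix_seconds: u64) -> String {
    totp_custom::<Sha1>(DEFAULT_STEP, 6, secret, unix_seconds)
}
```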
Daniel García
c666497130 Update webvault to 2.21.1 2021-07-25 18:56:06 +02:00
Daniel García
2620a1ac8c Merge pull request #1869 from BlackDex/webauthn-plus-updates
Fix WebAuthn issues and some small updates
2021-07-25 17:23:37 +02:00
BlackDex
ffdcafa044 Fix WebAuthn issues and some small updates
- Updated some packages
- Updated code related to package updates.
- Disabled User Verification enforcement when WebAuthn Key sends UV=1
  This makes it compatible with upstream and resolves #1840
- Fixed a bug where removing an individual WebAuthn key deleted the wrong key.
2021-07-25 14:49:55 +02:00
Daniel García
56ffec40f4 Formatting 2021-07-15 21:52:17 +02:00
Daniel García
96c2416903 Merge branch 'BlackDex-future-web-vault' into main 2021-07-15 21:51:52 +02:00
Mathijs van Veluw
340d42a1ca Merge branch 'main' into future-web-vault 2021-07-15 21:43:23 +02:00
Daniel García
e19420160f Simplify 2fa removed email and remove extra table close in the footer 2021-07-15 21:25:46 +02:00
Daniel García
1741316f42 Merge branch 'olivierIllogika-2fa_enforcement' into main 2021-07-15 19:27:45 +02:00
Daniel García
4f08167d6f Merge branch '2fa_enforcement' of https://github.com/olivierIllogika/bitwarden_rs into olivierIllogika-2fa_enforcement 2021-07-15 19:27:36 +02:00
Daniel García
fef76e2f6f Merge branch 'BlackDex-attachment-storage' into main 2021-07-15 19:20:57 +02:00
Daniel García
f16d56cb27 Merge branch 'attachment-storage' of https://github.com/BlackDex/vaultwarden into BlackDex-attachment-storage 2021-07-15 19:20:52 +02:00
Daniel García
120b286f2b Merge branch 'umireon-umireon-add-edge-frame-ancestors' into main 2021-07-15 19:20:25 +02:00
Daniel García
7f437b6947 Merge branch 'umireon-add-edge-frame-ancestors' of https://github.com/umireon/vaultwarden into umireon-umireon-add-edge-frame-ancestors 2021-07-15 19:20:19 +02:00
Daniel García
8d6e62e18b Merge branch 'jjlin-password-hints' into main 2021-07-15 19:18:30 +02:00
Daniel García
d0ec410b73 Merge branch 'password-hints' of https://github.com/jjlin/vaultwarden into jjlin-password-hints 2021-07-15 19:18:22 +02:00
Daniel García
c546a59c38 Dependency updates 2021-07-15 19:18:16 +02:00
Daniel García
e5ec245626 Protect namedfile against path traversal, rocket only does it for pathbuf 2021-07-15 19:15:55 +02:00
BlackDex
6ea95d1ede Updated attachment limit descriptions
The user and org attachment limits used `size` in their wording where it
should have been `storage`, since the limit isn't per attachment but the sum of all attachments.

- Changed the wording in the config/env
- Changed the wording of the error messages.

Resolves #1818
2021-07-13 15:17:03 +02:00
Jeremy Lin
88bea44dd8 Prevent user enumeration via password hints
When `show_password_hint` is enabled but mail is not configured, the previous
implementation returned a differentiable response for non-existent email
addresses.

Even if mail is enabled, there is a timing side channel since mail is sent
synchronously. Add a randomized sleep to mitigate this somewhat.
2021-07-10 01:21:27 -07:00
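A sketch of the mitigation (the bounds are illustrative, not the values used): sleep a random amount before answering, so response time no longer cleanly separates existing from non-existing accounts.

```rust
use rand::Rng;
use std::{thread, time::Duration};

fn randomized_delay() {
    // A random delay blurs the timing difference between the two code paths.
    let ms: u64 = rand::thread_rng().gen_range(100..=500);
    thread::sleep(Duration::from_millis(ms));
}
```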
Jeremy Lin
8ee5d51bd4 Disable show_password_hint by default
A setting that provides unauthenticated access to potentially sensitive data
shouldn't be enabled by default.
2021-07-10 01:20:37 -07:00
Kaito Udagawa
c640abbcd7 Update src/util.rs
Co-authored-by: William Desportes <williamdes@wdes.fr>
2021-07-08 02:55:58 +09:00
Kaito Udagawa
13598c098f Add links to browser extensions 2021-07-08 02:52:45 +09:00
Kaito Udagawa
a622b4d2fb Add Edge's frame-ancestors
Edge's frame-ancestors entry is required for the Edge extension to do WebAuthn.
2021-07-08 01:19:52 +09:00
BlackDex
403f35b571 Added web-vault v2.21.x support + some misc fixes
- The new web-vault v2.21.0+ has support for Master Password Reset. For
this to work it generates a public/private key-pair which needs to be
stored in the database. Master Password Reset itself is not implemented
yet, but some endpoints are needed even while we do not support the
feature. This PR fixes those endpoints and already stores the keys in
the database.

- There was an issue when doing a key-rotation while changing your
password: it also called an Emergency Access endpoint, which we do not
yet support. Because that endpoint failed to reply correctly, it
produced errors and also prevented the user from being forced to log
out. This resolves #1826 by adding at least that endpoint.

Because that extra Emergency Access endpoint is checked using an old
user stamp, I also modified the stamp exception to allow multiple rocket
routes to be called, and added an expiration timestamp to it.

During these tests I stumbled upon an issue where, after my key-change
was done, the websockets tried to reload my ciphers because they were
updated. This shouldn't happen when rotating the keys, since all access
should be invalidated. Now no websocket notification is sent for this,
which also prevents error toasts.

- Increased the Send size limit to 500MB (with a little overhead).

As a side note, I tested these changes on both the v2.20.4 and v2.21.1 web-vault versions; everything keeps working.
2021-07-04 23:02:56 +02:00
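A hypothetical sketch of the stamp-exception shape described above (field and type names invented for illustration, using chrono from Cargo.toml): several routes may be exempted from the security-stamp check, but only until an expiration passes.

```rust
use chrono::{NaiveDateTime, Utc};

struct StampException {
    routes: Vec<String>,    // rocket routes still allowed with the old stamp
    security_stamp: String, // the pre-rotation stamp being tolerated
    expire: NaiveDateTime,  // hard cutoff for the exception
}

impl StampException {
    fn permits(&self, route: &str, stamp: &str) -> bool {
        stamp == self.security_stamp
            && self.routes.iter().any(|r| r == route)
            && Utc::now().naive_utc() < self.expire
    }
}
```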
Daniel García
3968bc8016 Merge pull request #1800 from BlackDex/pre-commit
Adding pre-commit config
2021-07-04 21:58:43 +02:00
Daniel García
ff66368cb6 Merge pull request #1830 from BlackDex/vaultwarden-logo
Storing the original Vaultwarden svg images
2021-07-04 21:58:29 +02:00
BlackDex
3fb419e704 Storing the original Vaultwarden svg images 2021-07-04 18:37:01 +02:00
Daniel García
832f838ddd Merge pull request #1809 from BlackDex/fix-armv7
Fix armv7 alpine build.
2021-06-29 17:16:57 +02:00
BlackDex
18703bf195 Fix armv7 alpine build.
The `messense/rust-musl-cross` image has removed OpenSSL in favor of the
vendored option. Enabled vendored OpenSSL to resolve this.

Resolves #1807
2021-06-29 10:37:39 +02:00
BlackDex
ff8e88a5df Adding pre-commit config
There is a nice tool called pre-commit: https://pre-commit.com/
It can run actions prior to a commit to validate that everything is working.
People can choose to enable this for themselves, but it would be nice to have a base config by default.
2021-06-27 19:11:22 +02:00
Daniel García
72e1946ce5 Merge pull request #1799 from BlackDex/issue-1796
Fixes issue with multiple security keys.
2021-06-27 18:23:15 +02:00
BlackDex
ee391720aa Fixes issue with multiple security keys.
- Updated webauthn-rs commit hash to resolve #1796
2021-06-27 18:12:27 +02:00
Daniel García
e3a2dfffab Formatting 2021-06-26 14:21:58 +02:00
Daniel García
8bf1278b1b Update web vault and docker base images 2021-06-26 14:08:06 +02:00
Daniel García
00ce943ea5 Merge branch 'BlackDex-security-md' into main 2021-06-26 13:36:14 +02:00
Daniel García
b67eacdfde Merge branch 'security-md' of https://github.com/BlackDex/vaultwarden into BlackDex-security-md 2021-06-26 13:36:05 +02:00
Daniel García
0dcea75764 Remove unused lifetime and double referencing 2021-06-26 13:35:09 +02:00
BlackDex
0c5532d8b5 Adding a SECURITY.md 2021-06-26 11:49:00 +02:00
Daniel García
46e0f3c43a Load RSA keys as pem format directly, and using openssl crate, backported from async branch 2021-06-25 20:53:26 +02:00
Daniel García
2cd17fe7af Add token with short expiration time to send url 2021-06-25 20:53:26 +02:00
Daniel García
f44b2611e6 Update rust toolchain and dependencies 2021-06-25 20:53:26 +02:00
Mathijs van Veluw
82fee0ede3 Merge pull request #1779 from jjlin/last-known-rev-warning
Avoid `Error parsing LastKnownRevisionDate` warning for mobile clients
2021-06-20 18:07:18 +02:00
Jeremy Lin
49579e4ce7 Avoid Error parsing LastKnownRevisionDate warning for mobile clients
When creating a new cipher, the mobile clients seem to set this field to an
invalid value, which causes a warning to be logged:

    Error parsing LastKnownRevisionDate '0001-01-01T00:00:00': premature end of input

Avoid this by dropping the `LastKnownRevisionDate` field on cipher creation.
2021-06-19 21:32:11 -07:00
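A sketch of the workaround's shape (assuming serde_json; a hypothetical helper, not the actual handler): strip the field before the payload is parsed, so the bogus date never reaches the date parser.

```rust
use serde_json::Value;

fn drop_last_known_revision(data: &mut Value) {
    if let Some(obj) = data.as_object_mut() {
        // Mobile clients send an invalid placeholder date here; ignore it.
        obj.remove("LastKnownRevisionDate");
    }
}
```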
Daniel García
9254cf9d9c Fix clippy lints 2021-06-19 22:02:03 +02:00
Daniel García
ff0fee3690 Merge branch 'BlackDex-admin-changes' into main 2021-06-19 21:38:58 +02:00
Daniel García
0778bd4bd5 Merge branch 'admin-changes' of https://github.com/BlackDex/vaultwarden into BlackDex-admin-changes 2021-06-19 21:27:25 +02:00
Daniel García
0cd065d354 Update webauthn-rs crate to upstream version 2021-06-19 21:25:55 +02:00
BlackDex
8615736e84 Multiple Admin Interface fixes and some others.
Misc:
- Fixed the hadolint workflow; the new git cli needs some extra arguments.
- Added ignore paths to all the specific `on` triggers.
- Updated hadolint version.
- Made SMTP_DEBUG read-only, since it can't be changed at runtime.

Admin:
- Migrated from Bootstrap v4 to v5
- Updated jquery to v3.6.0
- Updated Datatables
- Made the JavaScript strict
- Added a way to show which ENV vars are overridden.
- Changed the way data is provided to handlebars.
- Fixed the date/time check.
- Made the support string use the details/summary feature of markdown/GitHub.
2021-06-19 19:22:19 +02:00
Daniel García
5772836be5 Fix admin page with handlebars 4 2021-06-16 22:57:28 +02:00
Daniel García
c380d9c379 Support for webauthn and u2f->webauthn migrations 2021-06-16 19:06:40 +02:00
Daniel García
cea7a30d82 Merge pull request #1761 from jjlin/deps
Update dependencies
2021-06-10 21:03:05 +02:00
Jeremy Lin
06cde29419 Update dependencies
Notably, update `diesel` to 1.4.7 and `libsqlite3-sys` to 0.22.2 to pick up
the fix for CVE-2021-20227 added in SQLite 3.34.1.
2021-06-09 01:44:29 -07:00
Daniel García
20f5988174 Merge pull request #1736 from jjlin/rocket-env-docs
Clarify Rocket env var defaults
2021-06-04 20:03:17 +02:00
Jeremy Lin
b491cfe0b0 Clarify Rocket env var defaults
Mention `ROCKET_WORKERS`, but remove `ROCKET_ENV` since most users
probably wouldn't use it.
2021-05-31 13:13:02 -07:00
Daniel García
fc513413ea Merge pull request #1730 from jjlin/attachment-upload-v2
Add support for v2 attachment upload APIs
2021-05-30 22:27:52 +02:00
Jeremy Lin
3f7e4712cd Fix attachment size limit calculation for v2 uploads 2021-05-25 23:17:22 -07:00
Jeremy Lin
c2ef331df9 Rework file ID generation 2021-05-25 23:15:24 -07:00
Jeremy Lin
5fef7983f4 Clean up attachment error handling 2021-05-25 22:13:04 -07:00
Jeremy Lin
29ed82a359 Add support for v2 attachment upload APIs
Upstream PR: https://github.com/bitwarden/server/pull/1229
2021-05-25 04:14:51 -07:00
Daniel García
7d5186e40a Merge pull request #1706 from jjlin/trash-auto-delete-env
Add `TRASH_AUTO_DELETE_DAYS` to .env.template
2021-05-17 17:21:34 +02:00
Daniel García
99270612ba Merge pull request #1704 from jjlin/global-domains
Sync global_domains.json
2021-05-17 17:21:09 +02:00
Jeremy Lin
c7b5b6ee07 Add TRASH_AUTO_DELETE_DAYS to .env.template 2021-05-16 17:51:54 -07:00
Jeremy Lin
848d17ffb9 Sync global_domains.json to bitwarden/server@7857053 (Amazon) 2021-05-16 15:16:41 -07:00
Daniel García
47e8aa29e1 Merge pull request #1702 from BlackDex/icon-updates-plus
Updated icon fetching and crates.
2021-05-16 23:35:37 +02:00
BlackDex
f270f2ed65 Updated icon fetching and crates.
- Updated some crates
- Updated icon fetching code:
  + Use a cookie jar and set Max-Age to 2 minutes for all cookies
  + Locate the base href tag to fix some locations
  + Changed User-Agent (Helps on some sites to get HTML instead of JS)
  + Reduced HTML code limit from 512KB to 384KB
  + Allow some large icons higher up in the sort
  + Allow GIF images
  + Ignore cookie_store and hyper::client debug messages
2021-05-16 15:29:13 +02:00
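The 384KB cap can be sketched like this (assuming reqwest's blocking client from Cargo.toml; the function is hypothetical): bound the body read so oversized pages can't balloon icon fetching.

```rust
use std::io::Read;

fn read_limited_html(resp: reqwest::blocking::Response) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Read::take stops after 384 KB; anything beyond is simply ignored.
    resp.take(384 * 1024).read_to_end(&mut buf)?;
    Ok(buf)
}
```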
Daniel García
aba5b234af Merge pull request #1700 from jjlin/fix-attachment-downloads
Fix attachment downloads
2021-05-16 14:11:21 +02:00
Jeremy Lin
9133e2927d Fix attachment downloads
Upstream switched to new upload/download APIs. Uploads fall back to the
legacy APIs for now, but not downloads apparently.
2021-05-15 22:46:57 -07:00
Jeremy Lin
38104ba7cf cargo fmt changes
The PR build seems to fail without this...
2021-05-15 22:46:37 -07:00
Daniel García
c42bcae224 Merge pull request #1696 from umireon/patch-1
Remove unneeded spaces in .env.template
2021-05-14 17:40:05 +02:00
Kaito Udagawa
764e51bbe9 Remove unneeded spaces in .env.template 2021-05-14 22:36:42 +09:00
Daniel García
8e6c6a1dc4 Merge pull request #1689 from jjlin/hide-email
Add support for hiding the sender's email address in Bitwarden Sends
2021-05-12 23:05:53 +02:00
Daniel García
7a9cfc45da Merge pull request #1688 from jjlin/config-sends-allowed
Add `sends_allowed` config setting
2021-05-12 23:05:41 +02:00
Daniel García
9e24b9065c Merge pull request #1682 from dongcarl/2021-05-admin-granular-http-codes
admin: More granular HTTP return codes for user-related endpoints
2021-05-12 23:05:30 +02:00
Daniel García
1c2b376ca2 Merge pull request #1663 from dongcarl/2021-05-invite_user-return
admin: Return newly-created user in invite_user
2021-05-12 23:05:20 +02:00
Daniel García
746ce2afb4 Merge pull request #1653 from jjlin/password-reprompt
Add support for password reprompt
2021-05-12 23:05:01 +02:00
Jeremy Lin
029008bad5 Add support for the Send Options policy
Upstream refs:

* https://github.com/bitwarden/server/pull/1234
* https://bitwarden.com/help/article/policies/#send-options
2021-05-12 01:22:12 -07:00
Jeremy Lin
d3449bfa00 Add support for hiding the sender's email address in Bitwarden Sends
Note: The original Vaultwarden implementation of Bitwarden Send would always
hide the email address, while the upstream implementation would always show it.

Upstream PR: https://github.com/bitwarden/server/pull/1234
2021-05-11 22:51:12 -07:00
Jeremy Lin
a9a5706764 Add support for password reprompt
Upstream PR: https://github.com/bitwarden/server/pull/1269
2021-05-11 20:09:57 -07:00
Jeremy Lin
3ff8014add Add sends_allowed config setting
This provides global control over whether users can create Bitwarden Sends.
2021-05-11 20:07:32 -07:00
Carl Dong
e60bdc7efe admin: Make invite_user error codes more specific
- Return 409 Conflict for when a user with that email already exists
- Return 500 InternalServerError for everything else
2021-05-10 11:47:41 -04:00
Carl Dong
cccd8262fa admin: Add /users/<uuid> route
Individual user information can now be looked up by UUID.
2021-05-10 11:47:41 -04:00
Carl Dong
68e5d95d25 admin: Specifically return 404 for user not found
- Modify err_code to accept an expr for err_code
- Add get_user_or_404, properly returning 404 instead of a generic 400
  for cases where user is not found
- Use get_user_or_404 where appropriate.
2021-05-10 11:47:41 -04:00
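A sketch of the `get_user_or_404` pattern named in the commit (types simplified, not vaultwarden's real signatures): a lookup miss becomes an explicit 404 rather than a generic 400.

```rust
struct User {
    uuid: String,
}

enum ApiError {
    NotFound, // rendered as HTTP 404 instead of a generic 400
}

fn find_user(_uuid: &str) -> Option<User> {
    None // stand-in for the real database query
}

fn get_user_or_404(uuid: &str) -> Result<User, ApiError> {
    find_user(uuid).ok_or(ApiError::NotFound)
}
```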
Carl Dong
5f458b288a admin: Return newly-created user in invite_user
Instead of having the caller dig through /admin/users for the right one,
just return the user upon creation.
2021-05-10 11:47:41 -04:00
Daniel García
e9ee8ac2fa Fix sponsors 2021-05-08 19:01:51 +02:00
Daniel García
8a4dfc3bbe Merge pull request #1670 from BlackDex/branding-and-email
Updated branding, email and crates
2021-05-08 18:40:57 +02:00
Daniel García
4f86517501 Merge pull request #1679 from BlackDex/gh-workflows
Updated Pipelines and fixed new Hadolints
2021-05-08 18:40:41 +02:00
BlackDex
7cb19ef767 Updated branding, email and crates
- Updated branding for admin and emails
- Updated crates and fixed some deprecations
- Removed newline-converter because this is built into lettre
- Updated email templates to use a shared header and footer template
- Also trigger SMTP SSL when TLS is selected without SSL
  Resolves #1641
2021-05-08 17:46:31 +02:00
BlackDex
565439a914 Updated Pipelines and fixed new Hadolints
- Removed azure-pipelines
- Updated gh-actions to run `cargo test` per db feature
- Fail on warnings by adding `RUSTFLAGS` env
- Updated Dockerfile to fix some new hadolint warnings
2021-05-08 16:48:48 +02:00
Daniel García
b8010be26b Extract some FromDb trait impls outside the macros so they aren't repeated, and fix some clippy lints 2021-05-02 17:49:25 +02:00
Daniel García
f76b8a32ca Update dependencies 2021-05-02 17:48:06 +02:00
Olivier Martin
39167d333a Merge commit '0d631329873196935ba29db985c5e32def391251' into 2fa_enforcement 2021-05-01 12:35:58 -04:00
Daniel García
0d63132987 Rename the title too, not just the URL 2021-04-30 22:43:40 +02:00
Daniel García
7b5d5d1302 Rename references to the discourse forum 2021-04-30 22:40:12 +02:00
Daniel García
0dc98bda23 Merge pull request #1650 from mprasil/main
Point docker hub badge to correct repository
2021-04-30 21:29:32 +02:00
Miro Prasil
f9a062cac8 Point docker hub badge to correct repository
The link is already pointing to the new image location, but the badge
source was somewhat confusingly showing stats for the old image.
2021-04-30 19:29:17 +01:00
Daniel García
6ad4ccd901 Merge pull request #1640 from Proxymiity/patch-1
Fixed a typo in the readme
2021-04-30 16:22:39 +02:00
Daniel García
ee6ceaa923 Update README.md 2021-04-30 16:20:52 +02:00
Daniel García
20b393d354 Update README.md 2021-04-30 16:19:26 +02:00
Olivier Martin
f707f86c8e Merge commit '1e5306b8203a7ebe24047910e6c690c18c6d827a' into 2fa_enforcement 2021-04-29 23:29:28 -04:00
Proxymiity ☆
daea54b288 Renamed bw-data according to project name change 2021-04-29 21:13:41 +02:00
Olivier Martin
cc021a4784 project name and links in new email templates 2021-04-27 21:48:07 -04:00
Olivier Martin
e3c4609c2a Merge commit '3da44a8d30e76f48b84f5b888e0b33427037037c' into 2fa_enforcement 2021-04-27 21:44:32 -04:00
Olivier Martin
89a68741d6 ran cargo fmt --all 2021-04-16 14:49:59 -04:00
Olivier Martin
2421d49d9a Merge branch 'master' of github.com:dani-garcia/bitwarden_rs into 2fa_enforcement
# Conflicts:
#	src/db/models/org_policy.rs
#	src/db/models/organization.rs
2021-04-16 14:29:28 -04:00
Olivier Martin
1db37bf3d0 make error toast display detailed message
replace invite accept error message with the one from upstream
check if config mail is enabled
2021-04-12 21:54:57 -04:00
Olivier Martin
d75a80bd2d Resolves dani-garcia/bitwarden_rs#981
* a user without 2fa trying to join a 2fa org will fail, but the user gets an email prompting them to enable 2fa
* a user disabling 2fa will be removed from 2fa orgs; the user gets an email for each org
* an org enabling the 2fa policy will remove users without 2fa; those users get an email
2021-04-11 22:57:17 -04:00
162 changed files with 18695 additions and 12623 deletions

.env.template

@@ -56,6 +56,15 @@
 # WEBSOCKET_ADDRESS=0.0.0.0
 # WEBSOCKET_PORT=3012

+## Controls whether users are allowed to create Bitwarden Sends.
+## This setting applies globally to all users.
+## To control this on a per-org basis instead, use the "Disable Send" org policy.
+# SENDS_ALLOWED=true
+
+## Controls whether users can enable emergency access to their accounts.
+## This setting applies globally to all users.
+# EMERGENCY_ACCESS_ALLOWED=true
+
 ## Job scheduler settings
 ##
 ## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
@@ -72,6 +81,14 @@
 ## Cron schedule of the job that checks for trashed items to delete permanently.
 ## Defaults to daily (5 minutes after midnight). Set blank to disable this job.
 # TRASH_PURGE_SCHEDULE="0 5 0 * * *"
+##
+## Cron schedule of the job that sends expiration reminders to emergency access grantors.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
+##
+## Cron schedule of the job that grants emergency access requests that have met the required wait time.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"

 ## Enable extended logging, which shows timestamps and targets in the logs
 # EXTENDED_LOGGING=true
@@ -189,20 +206,28 @@
 ## Name shown in the invitation emails that don't come from a specific organization
 # INVITATION_ORG_NAME=Vaultwarden

-## Per-organization attachment limit (KB)
-## Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
+## Per-organization attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per organization.
+## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
 # ORG_ATTACHMENT_LIMIT=
-## Per-user attachment limit (KB).
-## Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more
+## Per-user attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per user.
+## When this limit is reached, the user will not be allowed to upload further attachments.
 # USER_ATTACHMENT_LIMIT=

+## Number of days to wait before auto-deleting a trashed item.
+## If unset (the default), trashed items are not auto-deleted.
+## This setting applies globally, so make sure to inform all users of any changes to this setting.
+# TRASH_AUTO_DELETE_DAYS=
+
 ## Controls the PBBKDF password iterations to apply on the server
 ## The change only applies when the password is changed
 # PASSWORD_ITERATIONS=100000

-## Whether password hint should be sent into the error response when the client request it
-# SHOW_PASSWORD_HINT=true
+## Controls whether a password hint should be shown directly in the web page if
+## SMTP service is not configured. Not recommended for publicly-accessible instances
+## as this provides unauthenticated access to potentially sensitive data.
+# SHOW_PASSWORD_HINT=false

 ## Domain settings
 ## The domain must match the address from where you access the server
@@ -249,10 +274,11 @@
 ## In any case, if a code has been used it can not be used again, also codes which predates it will be invalid.
 # AUTHENTICATOR_DISABLE_TIME_DRIFT=false

-## Rocket specific settings, check Rocket documentation to learn more
-# ROCKET_ENV=staging
-# ROCKET_ADDRESS=0.0.0.0 # Enable this to test mobile app
-# ROCKET_PORT=8000
+## Rocket specific settings
+## See https://rocket.rs/v0.4/guide/configuration/ for more details.
+# ROCKET_ADDRESS=0.0.0.0
+# ROCKET_PORT=80 # Defaults to 80 in the Docker images, or 8000 otherwise.
+# ROCKET_WORKERS=10
 # ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}

 ## Mail specific settings, set SMTP_HOST and SMTP_FROM to enable the mail service.
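The schedule strings above use the six-field syntax of the cron crate referenced in the template comment (sec min hour day-of-month month day-of-week), so "0 5 * * * *" fires at five minutes past every hour. A minimal sketch of how such a schedule is consumed, assuming the `job_scheduler` crate listed in the repo's Cargo.toml (the job body is illustrative):

```rust
use job_scheduler::{Job, JobScheduler};
use std::time::Duration;

fn main() {
    let mut sched = JobScheduler::new();
    // "0 5 * * * *": second 0, minute 5, every hour/day/month/weekday.
    sched.add(Job::new("0 5 * * * *".parse().unwrap(), || {
        println!("emergency access reminder job would run here");
    }));
    loop {
        sched.tick();
        std::thread::sleep(Duration::from_millis(500));
    }
}
```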

.github/ISSUE_TEMPLATE/config.yml

@@ -1,7 +1,7 @@
 blank_issues_enabled: false
 contact_links:
-  - name: Discourse forum for bitwarden_rs
-    url: https://bitwardenrs.discourse.group/
+  - name: Discourse forum for vaultwarden
+    url: https://vaultwarden.discourse.group/
     about: Use this forum to request features or get help with usage/configuration.
   - name: GitHub Discussions for vaultwarden
     url: https://github.com/dani-garcia/vaultwarden/discussions

.github/security-contact.gif (new binary file, 2.3 KiB; binary content not shown)

.github/workflows/build.yml

@@ -2,65 +2,51 @@ name: Build
 on:
   push:
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
   pull_request:
-    # Ignore when there are only changes done too one of these paths
-    paths-ignore:
-      - "**.md"
-      - "**.txt"
-      - ".dockerignore"
-      - ".env.template"
-      - ".gitattributes"
-      - ".gitignore"
-      - "azure-pipelines.yml"
-      - "docker/**"
-      - "hooks/**"
-      - "tools/**"
-      - ".github/FUNDING.yml"
-      - ".github/ISSUE_TEMPLATE/**"
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"

 jobs:
   build:
+    # Make warnings errors, this is to prevent warnings slipping through.
+    # This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes.
+    env:
+      RUSTFLAGS: "-D warnings"
     strategy:
       fail-fast: false
       matrix:
         channel:
           - nightly
-          # - stable
         target-triple:
           - x86_64-unknown-linux-gnu
-          # - x86_64-unknown-linux-musl
         include:
           - target-triple: x86_64-unknown-linux-gnu
             host-triple: x86_64-unknown-linux-gnu
-            features: "sqlite,mysql,postgresql"
+            features: [sqlite,mysql,postgresql] # Remember to update the `cargo test` to match the amount of features
             channel: nightly
-            os: ubuntu-18.04
-            ext:
-          # - target-triple: x86_64-unknown-linux-gnu
-          #   host-triple: x86_64-unknown-linux-gnu
-          #   features: "sqlite,mysql,postgresql"
-          #   channel: stable
-          #   os: ubuntu-18.04
-          #   ext:
-          # - target-triple: x86_64-unknown-linux-musl
-          #   host-triple: x86_64-unknown-linux-gnu
-          #   features: "sqlite,postgresql"
-          #   channel: nightly
-          #   os: ubuntu-18.04
-          #   ext:
-          # - target-triple: x86_64-unknown-linux-musl
-          #   host-triple: x86_64-unknown-linux-gnu
-          #   features: "sqlite,postgresql"
-          #   channel: stable
-          #   os: ubuntu-18.04
-          #   ext:
+            os: ubuntu-20.04
+            ext: ""

     name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
     runs-on: ${{ matrix.os }}
     steps:
       # Checkout the repo
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
       # End Checkout the repo
@@ -79,13 +65,13 @@ jobs:
       # Enable Rust Caching
-      - uses: Swatinem/rust-cache@v1
+      - uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
       # End Enable Rust Caching

       # Uses the rust-toolchain file to determine version
       - name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
-        uses: actions-rs/toolchain@v1
+        uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
         with:
           profile: minimal
           target: ${{ matrix.target-triple }}
@@ -94,26 +80,49 @@ jobs:
       # Run cargo tests (In release mode to speed up future builds)
-      - name: '`cargo test --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
-        uses: actions-rs/cargo@v1
+      # First test all features together, afterwards test them separately.
+      - name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: test
-          args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
+          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
+      # Test single features
+      # 0: sqlite
+      - name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
+        with:
+          command: test
+          args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
+        if: ${{ matrix.features[0] != '' }}
+      # 1: mysql
+      - name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
+        with:
+          command: test
+          args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
+        if: ${{ matrix.features[1] != '' }}
+      # 2: postgresql
+      - name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
+        with:
+          command: test
+          args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
+        if: ${{ matrix.features[2] != '' }}
       # End Run cargo tests

-      # Run cargo clippy (In release mode to speed up future builds)
-      - name: '`cargo clippy --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
-        uses: actions-rs/cargo@v1
+      # Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
+      - name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: clippy
-          args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
+          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
       # End Run cargo clippy

       # Run cargo fmt
       - name: '`cargo fmt`'
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: fmt
           args: --all -- --check
@@ -121,31 +130,18 @@ jobs:
       # Build the binary
-      - name: '`cargo build --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
-        uses: actions-rs/cargo@v1
+      - name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: build
-          args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
+          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
       # End Build the binary

       # Upload artifact to Github Actions
       - name: Upload artifact
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@27121b0bdffd731efa15d66772be8dc71245d074 # v2.2.4
         with:
           name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
           path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
       # End Upload artifact to Github Actions
-
-      ## This is not used at the moment
-      ## We could start using this when we can build static binaries
-      # Upload to github actions release
-      # - name: Release
-      #   uses: Shopify/upload-to-release@1
-      #   if: startsWith(github.ref, 'refs/tags/')
-      #   with:
-      #     name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
-      #     path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
-      #     repo-token: ${{ secrets.GITHUB_TOKEN }}
-      # End Upload to github actions release

.github/workflows/hadolint.yml

@@ -2,8 +2,10 @@ name: Hadolint
 on:
   push:
+    paths:
+      - "docker/**"
   pull_request:
-    # Ignore when there are only changes done too one of these paths
     paths:
       - "docker/**"
@@ -14,7 +16,7 @@ jobs:
     steps:
       # Checkout the repo
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
       # End Checkout the repo
@@ -22,14 +24,14 @@ jobs:
       - name: Download hadolint
         shell: bash
         run: |
-          sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
+          sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
           sudo chmod +x /usr/local/bin/hadolint
         env:
-          HADOLINT_VERSION: 2.0.0
+          HADOLINT_VERSION: 2.7.0
       # End Download hadolint

       # Test Dockerfiles
       - name: Run hadolint
         shell: bash
-        run: git ls-files --exclude='docker/*/Dockerfile*' --ignored | xargs hadolint
+        run: git ls-files --exclude='docker/*/Dockerfile*' --ignored --cached | xargs hadolint
       # End Test Dockerfiles

.github/workflows/release.yml (new file, 119 lines)

@@ -0,0 +1,119 @@
name: Release
on:
  push:
    paths:
      - ".github/workflows/release.yml"
      - "src/**"
      - "migrations/**"
      - "hooks/**"
      - "docker/**"
      - "Cargo.*"
      - "build.rs"
      - "diesel.toml"
      - "rust-toolchain"
    branches: # Only on paths above
      - main
    tags: # Always, regardless of paths above
      - '*'

jobs:
  # https://github.com/marketplace/actions/skip-duplicate-actions
  # Some checks to determine if we need to continue with building a new docker.
  # We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
  skip_check:
    runs-on: ubuntu-latest
    if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
    outputs:
      should_skip: ${{ steps.skip_check.outputs.should_skip }}
    steps:
      - name: Skip Duplicates Actions
        id: skip_check
        uses: fkirc/skip-duplicate-actions@f75dd6564bb646f95277dc8c3b80612e46a4a1ea # v3.4.1
        with:
          cancel_others: 'true'
        # Only run this when not creating a tag
        if: ${{ startsWith(github.ref, 'refs/heads/') }}

  docker-build:
    runs-on: ubuntu-latest
    needs: skip_check
    # Start a local docker registry to be used to generate multi-arch images.
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    env:
      DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speedup building!
      # DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
      DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
      SOURCE_COMMIT: ${{ github.sha }}
      SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
    if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
    strategy:
      matrix:
        base_image: ["debian","alpine"]
    steps:
      # Checkout the repo
      - name: Checkout
        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
        with:
          fetch-depth: 0

      # Login to Docker Hub
      - name: Login to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # v1.10.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Determine Docker Tag
      - name: Init Variables
        id: vars
        shell: bash
        run: |
          # Check which main tag we are going to build determined by github.ref
          if [[ "${{ github.ref }}" == refs/tags/* ]]; then
            echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
            echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
          elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
            echo "set-output name=DOCKER_TAG::testing"
            echo "::set-output name=DOCKER_TAG::testing"
          fi
      # End Determine Docker Tag

      - name: Build Debian based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
        run: |
          ./hooks/build
        if: ${{ matrix.base_image == 'debian' }}

      - name: Push Debian based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
        run: |
          ./hooks/push
        if: ${{ matrix.base_image == 'debian' }}

      - name: Build Alpine based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
        run: |
          ./hooks/build
        if: ${{ matrix.base_image == 'alpine' }}

      - name: Push Alpine based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
        run: |
          ./hooks/push
        if: ${{ matrix.base_image == 'alpine' }}

.pre-commit-config.yaml (new file, 38 lines)

@@ -0,0 +1,38 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.0.1
hooks:
- id: check-yaml
- id: check-json
- id: check-toml
- id: end-of-file-fixer
exclude: "(.*js$|.*css$)"
- id: check-case-conflict
- id: check-merge-conflict
- id: detect-private-key
- repo: local
hooks:
- id: fmt
name: fmt
description: Format files with cargo fmt.
entry: cargo fmt
language: system
types: [rust]
args: ["--", "--check"]
- id: cargo-test
name: cargo test
description: Test the package for errors.
entry: cargo test
language: system
args: ["--features", "sqlite,mysql,postgresql", "--"]
types: [rust]
pass_filenames: false
- id: cargo-clippy
name: cargo clippy
description: Lint Rust sources
entry: cargo clippy
language: system
args: ["--features", "sqlite,mysql,postgresql", "--", "-D", "warnings"]
types: [rust]
pass_filenames: false

Cargo.lock (generated; 1100 lines changed, diff suppressed because it is too large)

Cargo.toml

@@ -3,6 +3,8 @@ name = "vaultwarden"
version = "1.0.0" version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"] authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2018" edition = "2018"
rust-version = "1.57"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden" repository = "https://github.com/dani-garcia/vaultwarden"
readme = "README.md" readme = "README.md"
@@ -28,41 +30,47 @@ syslog = "4.0.1"
[dependencies] [dependencies]
# Web framework for nightly with a focus on ease-of-use, expressibility, and speed. # Web framework for nightly with a focus on ease-of-use, expressibility, and speed.
rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false } rocket = { version = "=0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "0.5.0-dev" rocket_contrib = "=0.5.0-dev"
# HTTP client # HTTP client
reqwest = { version = "0.11.3", features = ["blocking", "json", "gzip", "brotli", "socks"] } reqwest = { version = "0.11.5", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies"] }
# Used for custom short lived cookie jar
cookie = "0.15.1"
cookie_store = "0.15.0"
bytes = "1.1.0"
url = "2.2.2"
# multipart/form-data support # multipart/form-data support
multipart = { version = "0.17.1", features = ["server"], default-features = false } multipart = { version = "0.18.0", features = ["server"], default-features = false }
# WebSockets library # WebSockets library
ws = { version = "0.10.0", package = "parity-ws" } ws = { version = "0.11.0", package = "parity-ws" }
# MessagePack library # MessagePack library
rmpv = "0.4.7" rmpv = "1.0.0"
# Concurrent hashmap implementation # Concurrent hashmap implementation
chashmap = "2.2.2" chashmap = "2.2.2"
# A generic serialization/deserialization framework # A generic serialization/deserialization framework
serde = { version = "1.0.125", features = ["derive"] } serde = { version = "1.0.130", features = ["derive"] }
serde_json = "1.0.64" serde_json = "1.0.68"
# Logging # Logging
log = "0.4.14" log = "0.4.14"
fern = { version = "0.6.0", features = ["syslog-4"] } fern = { version = "0.6.0", features = ["syslog-4"] }
# A safe, extensible ORM and Query builder # A safe, extensible ORM and Query builder
diesel = { version = "1.4.6", features = [ "chrono", "r2d2"] } diesel = { version = "1.4.8", features = [ "chrono", "r2d2"] }
diesel_migrations = "1.4.0" diesel_migrations = "1.4.0"
# Bundled SQLite # Bundled SQLite
libsqlite3-sys = { version = "0.20.1", features = ["bundled"], optional = true } libsqlite3-sys = { version = "0.22.2", features = ["bundled"], optional = true }
# Crypto-related libraries # Crypto-related libraries
rand = "0.8.3" rand = "0.8.4"
ring = "0.16.20" ring = "0.16.20"
# UUID generation # UUID generation
@@ -70,14 +78,14 @@ uuid = { version = "0.8.2", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.19", features = ["serde"] }
-chrono-tz = "0.5.3"
+chrono-tz = "0.6.0"
-time = "0.2.26"
+time = "0.2.27"
# Job scheduler
job_scheduler = "1.2.1"
# TOTP library
-oath = "0.10.2"
+totp-lite = "1.0.3"
# Data encoding library
data-encoding = "2.3.2"
@@ -87,6 +95,7 @@ jsonwebtoken = "7.2.0"
# U2F library
u2f = "0.2.0"
+webauthn-rs = "0.3.0-alpha.12"
# Yubico Library
yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
@@ -95,39 +104,38 @@ yubico = { version = "0.10.0", features = ["online-tokio"], default-features = f
dotenv = { version = "0.15.0", default-features = false }
# Lazy initialization
-once_cell = "1.7.2"
+once_cell = "1.8.0"
# Numerical libraries
num-traits = "0.2.14"
num-derive = "0.3.3"
# Email libraries
-tracing = { version = "0.1.25", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
+tracing = { version = "0.1.29", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
-lettre = { version = "0.10.0-beta.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
+lettre = { version = "0.10.0-rc.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
+newline-converter = "0.2.0"
# Template library
-handlebars = { version = "3.5.4", features = ["dir_source"] }
+handlebars = { version = "4.1.3", features = ["dir_source"] }
# For favicon extraction from main website
html5ever = "0.25.1"
markup5ever_rcdom = "0.1.0"
-regex = { version = "1.4.5", features = ["std", "perf"], default-features = false }
+regex = { version = "1.5.4", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.1.0"
# Used by U2F, JWT and Postgres
-openssl = "0.10.34"
+openssl = "0.10.36"
# URL encoding library
percent-encoding = "2.1.0"
# Punycode conversion
-idna = "0.2.2"
+idna = "0.2.3"
# CLI argument parsing
-pico-args = "0.4.0"
+pico-args = "0.4.2"
# Logging panics to logfile instead stderr only
-backtrace = "0.3.56"
+backtrace = "0.3.61"
# Macro ident concatenation
paste = "1.0.5"
@@ -138,7 +146,7 @@ rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429d
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
# For favicon extraction from main website
-data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '540ede02d0771824c0c80ff9f57fe8eff38b1291' }
+data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = 'eb7330b5296c0d43816d1346211b74182bb4ae37' }
# The maintainer of the `job_scheduler` crate doesn't seem to have responded
# to any issues or PRs for almost a year (as of April 2021). This hopefully
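
Given the `rust-version = "1.57"` and the feature flags pinned above, a quick local sanity check of the manifest changes might look like this (a sketch; the exact toolchain pin is an assumption):

```bash
# Install the minimum supported toolchain declared in Cargo.toml and
# build with all three database backends enabled, as CI and the
# pre-commit hooks do.
rustup toolchain install 1.57.0
cargo +1.57.0 build --release --features sqlite,mysql,postgresql
```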

View File

@@ -1,8 +1,10 @@
-### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
+### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
+
+📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
---
-[![Docker Pulls](https://img.shields.io/docker/pulls/bitwardenrs/server.svg)](https://hub.docker.com/r/vaultwarden/server)
+[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/master/LICENSE.txt)
@@ -35,7 +37,7 @@ Pull the docker image and mount a volume from the host for persistent storage:
docker pull vaultwarden/server:latest
docker run -d --name vaultwarden -v /vw-data/:/data/ -p 80:80 vaultwarden/server:latest
```
-This will preserve any persistent data under /bw-data/, you can adapt the path to whatever suits you.
+This will preserve any persistent data under /vw-data/, you can adapt the path to whatever suits you.
**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS.
@@ -73,7 +75,7 @@ Thanks for your contribution to the project!
<table>
<tr>
<td align="center">
-<a href="https://github.com/ChonoN" style="width: 75px">
+<a href="https://github.com/Gyarbij" style="width: 75px">
<sub><b>Chono N</b></sub>
</a>
</td>
@@ -81,7 +83,7 @@ Thanks for your contribution to the project!
<tr>
<td align="center">
<a href="https://github.com/themightychris">
-<sub><b>themightychris</b></sub>
+<sub><b>Chris Alfano</b></sub>
</a>
</td>
</tr>

SECURITY.md (new file, 45 lines)

@@ -0,0 +1,45 @@
Vaultwarden tries to prevent security issues, but something could always slip through.
If you believe you've found a security issue in our application, we encourage you to
notify us. We welcome working with you to resolve the issue promptly. Thanks in advance!
# Disclosure Policy
- Let us know as soon as possible upon discovery of a potential security issue, and we'll make every
effort to quickly resolve the issue.
- Provide us a reasonable amount of time to resolve the issue before any disclosure to the public or a
third party. We may publicly disclose the issue before resolving it, if appropriate.
- Make a good faith effort to avoid privacy violations, destruction of data, and interruption or
degradation of our service. Only interact with accounts you own or with explicit permission of the
account holder.
# In-scope
- Security issues in any current release of Vaultwarden. Source code is available at https://github.com/dani-garcia/vaultwarden. This includes the current `latest` release and `main / testing` release.
# Exclusions
The following bug classes are out of scope:
- Bugs that are already reported on Vaultwarden's issue tracker (https://github.com/dani-garcia/vaultwarden/issues)
- Bugs that are not part of Vaultwarden, e.g. in the web-vault or the mobile and desktop clients. These need to be reported in the respective project's issue tracker at https://github.com/bitwarden, with which we are not associated
- Issues in an upstream software dependency (e.g. Rust, or external libraries) which are already reported to the upstream maintainer
- Attacks requiring physical access to a user's device
- Issues related to software or protocols not under Vaultwarden's control
- Vulnerabilities in outdated versions of Vaultwarden
- Missing security best practices that do not directly lead to a vulnerability (You may still report them as a normal issue)
- Issues that do not have any impact on the general public
While researching, we'd like to ask you to refrain from:
- Denial of service
- Spamming
- Social engineering (including phishing) of Vaultwarden developers, contributors or users
Thank you for helping keep Vaultwarden and our users safe!
# How to contact us
- You can contact us on Matrix https://matrix.to/#/#vaultwarden:matrix.org (user: `@danig:matrix.org`)
- You can send an ![security-contact](/.github/security-contact.gif) to report a security issue.
- If you want to send an encrypted email you can use the following GPG key:<br>
https://keyserver.ubuntu.com/pks/lookup?search=0xB9B7A108373276BF3C0406F9FC8A7D14C3CD543A&fingerprint=on&op=index

Azure Pipelines CI configuration (deleted)

@@ -1,22 +0,0 @@
pool:
  vmImage: 'Ubuntu-18.04'

steps:
- script: |
    ls -la
    curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $(cat rust-toolchain) --profile=minimal
    echo "##vso[task.prependpath]$HOME/.cargo/bin"
  displayName: 'Install Rust'
- script: |
    sudo apt-get update
    sudo apt-get install -y --no-install-recommends build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
  displayName: 'Install build libraries.'
- script: |
    rustc -Vv
    cargo -V
  displayName: Query rust and cargo versions
- script: cargo test --features "sqlite,mysql,postgresql"
  displayName: 'Test project with sqlite, mysql and postgresql backends'
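
The deleted pipeline boils down to a single test invocation, so the same check can still be run locally (assuming the build libraries from the pipeline's apt step are installed):

```bash
# The exact command the CI pipeline used to run, covering all backends.
cargo test --features "sqlite,mysql,postgresql"
```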

Dockerfile.j2 (Jinja2 template for all Dockerfiles)

@@ -1,15 +1,17 @@
+# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
-{% set build_stage_base_image = "rust:1.51" %}
+{% set build_stage_base_image = "rust:1.55-buster" %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
-{% set build_stage_base_image = "clux/muslrust:nightly-2021-04-14" %}
+{% set build_stage_base_image = "clux/muslrust:nightly-2021-10-06" %}
-{% set runtime_stage_base_image = "alpine:3.13" %}
+{% set runtime_stage_base_image = "alpine:3.14" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
-{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.13" %}
+{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.14" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% endif %}
{% elif "amd64" in target_file %}
@@ -40,12 +42,17 @@
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
+{% if "buildx" in target_file %}
+{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
+{% else %}
+{% set mount_rust_cache = "" %}
+{% endif %}
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-{% set vault_version = "2.19.0d" %}
+{% set vault_version = "2.23.0c" %}
-{% set vault_image_digest = "sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233" %}
+{% set vault_image_digest = "sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
@@ -75,7 +82,8 @@ ARG DB=sqlite,postgresql
{% set features = "sqlite,postgresql" %} {% set features = "sqlite,postgresql" %}
{% else %} {% else %}
# Alpine-based ARM (musl) only supports sqlite during compile time. # Alpine-based ARM (musl) only supports sqlite during compile time.
ARG DB=sqlite # We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
ARG DB=sqlite,vendored_openssl
{% set features = "sqlite" %} {% set features = "sqlite" %}
{% endif %} {% endif %}
{% else %} {% else %}
@@ -85,22 +93,40 @@ ARG DB=sqlite,mysql,postgresql
{% endif %}
# Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
-# Don't download rust docs
-RUN rustup set profile minimal
+{# {% if "alpine" not in target_file and "buildx" in target_file %}
+# Debian based Buildx builds can use some special apt caching to speedup building.
+# By default Debian based images have some rules to keep docker builds clean, we need to remove this.
+# See: https://hub.docker.com/r/docker/dockerfile
+RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
+{% endif %} #}
+# Create CARGO_HOME folder and don't download rust docs
+RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
{% if "alpine" in target_file %}
-ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
{% if "armv7" in target_file %}
+{#- https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html -#}
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
{% endif %}
{% elif "arm" in target_file %}
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
# Install required build libs for {{ package_arch_name }} architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture {{ package_arch_name }} \
    && apt-get update \
    && apt-get install -y \
@@ -109,28 +135,43 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
    libc6-dev{{ package_arch_prefix }} \
    libpq5{{ package_arch_prefix }} \
    libpq-dev \
+    libmariadb3:amd64 \
    libmariadb-dev{{ package_arch_prefix }} \
-    libmariadb-dev-compat{{ package_arch_prefix }}
+    libmariadb-dev-compat{{ package_arch_prefix }} \
+    gcc-{{ package_cross_compiler }} \
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> "${CARGO_HOME}/config"
+# Set arm specific environment values
+ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
+ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
+{% elif "amd64" in target_file %}
+# Install DB packages
-RUN apt-get update \
-    && apt-get install -y \
-    --no-install-recommends \
-    gcc-{{ package_cross_compiler }} \
-    && mkdir -p ~/.cargo \
-    && echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
-    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
-{% endif -%}
-{% if "amd64" in target_file and "alpine" not in target_file %}
-# Install DB packages
-RUN apt-get update && apt-get install -y \
+RUN apt-get update \
+    && apt-get install -y \
    --no-install-recommends \
    libmariadb-dev{{ package_arch_prefix }} \
    libpq-dev{{ package_arch_prefix }} \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
{% endif %}
@@ -143,39 +184,15 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
-{% if "alpine" not in target_file %}
-{% if "arm" in target_file %}
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
-    rm -rvf ./libmariadb-dev-compat*.deb
-# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-# The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-# Without this specific file the ld command will fail and compilation fails with it.
-RUN ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
-ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
-ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
-{% endif -%}
-{% endif %}
{% if package_arch_target is defined %}
-RUN rustup target add {{ package_arch_target }}
+RUN {{ mount_rust_cache -}} rustup target add {{ package_arch_target }}
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
-RUN find . -not -path "./target*" -delete
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} \
+    && find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
@@ -186,9 +203,10 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
{% if "armv7" in target_file %}
+# hadolint ignore=DL3059
RUN musl-strip target/{{ package_arch_target }}/release/vaultwarden
{% endif %}
{% endif %}
@@ -206,13 +224,16 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
{% endif %}
{% if "amd64" not in target_file %}
+# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
{% endif %}
-# Install needed libraries
+# Create data folder and Install needed libraries
+RUN mkdir /data \
{% if "alpine" in runtime_stage_base_image %}
-RUN apk add --no-cache \
+    && apk add --no-cache \
    openssl \
+    tzdata \
    curl \
    dumb-init \
{% if "mysql" in features %}
@@ -223,7 +244,7 @@ RUN apk add --no-cache \
{% endif %}
    ca-certificates
{% else %}
-RUN apt-get update && apt-get install -y \
+    && apt-get update && apt-get install -y \
    --no-install-recommends \
    openssl \
    ca-certificates \
@@ -231,15 +252,15 @@ RUN apt-get update && apt-get install -y \
    dumb-init \
    libmariadb-dev-compat \
    libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
{% endif %}
-RUN mkdir /data
{% if "amd64" not in target_file %}
+# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
{% endif %}
VOLUME /data
EXPOSE 80
EXPOSE 3012
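
The new `mount_rust_cache` variable injects BuildKit cache mounts into the rendered `RUN` lines, so the cargo registry and git checkouts persist across image rebuilds instead of being re-downloaded every time. A hedged sketch of building one of the rendered buildx variants (the file path and tag are assumptions):

```bash
# BuildKit must be enabled, otherwise --mount=type=cache is rejected.
DOCKER_BUILDKIT=1 docker build \
  -f docker/amd64/Dockerfile.buildx \
  -t vaultwarden:buildx-test .
```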

Makefile (renders the Dockerfiles from Dockerfile.j2)

@@ -7,3 +7,9 @@ all: $(OBJECTS)
%/Dockerfile.alpine: Dockerfile.j2 render_template
    ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+%/Dockerfile.buildx: Dockerfile.j2 render_template
+    ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
+    ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
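
The two new pattern rules render the buildx variants from the same template. Invoking one rule by hand would look roughly like this (running from the directory containing `Dockerfile.j2` is an assumption):

```bash
# Regenerate a single buildx Dockerfile, exactly as the pattern rule does.
./render_template Dockerfile.j2 '{"target_file": "amd64/Dockerfile.buildx"}' \
  > amd64/Dockerfile.buildx

# Or simply re-render every Dockerfile:
make
```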

amd64/Dockerfile (Debian-based)

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,33 +16,42 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker pull vaultwarden/web-vault:v2.23.0c
-# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
-# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-# [vaultwarden/web-vault:v2.19.0d]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+# [vaultwarden/web-vault:v2.23.0c]
#
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.51 as build
+FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
# Install DB packages
-RUN apt-get update && apt-get install -y \
+RUN apt-get update \
+    && apt-get install -y \
    --no-install-recommends \
    libmariadb-dev \
    libpq-dev \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
@@ -56,8 +67,8 @@ COPY ./build.rs ./build.rs
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release \
+    && find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
@@ -79,8 +90,10 @@ ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
-# Install needed libraries
+# Create data folder and Install needed libraries
-RUN apt-get update && apt-get install -y \
+RUN mkdir /data \
+    && apt-get update && apt-get install -y \
    --no-install-recommends \
    openssl \
    ca-certificates \
@@ -88,9 +101,10 @@ RUN apt-get update && apt-get install -y \
    dumb-init \
    libmariadb-dev-compat \
    libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
-RUN mkdir /data
VOLUME /data
EXPOSE 80
EXPOSE 3012

amd64/Dockerfile.alpine (Alpine-based)

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,29 +16,35 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker pull vaultwarden/web-vault:v2.23.0c
-# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
-# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-# [vaultwarden/web-vault:v2.19.0d]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+# [vaultwarden/web-vault:v2.23.0c]
#
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
-FROM clux/muslrust:nightly-2021-04-14 as build
+FROM clux/muslrust:nightly-2021-10-06 as build
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
-# Don't download rust docs
-RUN rustup set profile minimal
-ENV USER "root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
@@ -53,8 +61,8 @@ RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl \
+    && find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
@@ -70,22 +78,25 @@ RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
-FROM alpine:3.13
+FROM alpine:3.14
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
-# Install needed libraries
+# Create data folder and Install needed libraries
-RUN apk add --no-cache \
+RUN mkdir /data \
+    && apk add --no-cache \
    openssl \
+    tzdata \
    curl \
    dumb-init \
    postgresql-libs \
    ca-certificates
-RUN mkdir /data
VOLUME /data
EXPOSE 80
EXPOSE 3012

amd64/Dockerfile.buildx (new file, 126 lines)

@@ -0,0 +1,126 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

amd64/Dockerfile.buildx.alpine (new file, 118 lines)

@@ -0,0 +1,118 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM clux/muslrust:nightly-2021-10-06 as build
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.14
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
postgresql-libs \
ca-certificates
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

arm64/Dockerfile (Debian-based)

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,32 +16,44 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker pull vaultwarden/web-vault:v2.23.0c
-# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
-# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-# [vaultwarden/web-vault:v2.19.0d]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+# [vaultwarden/web-vault:v2.23.0c]
#
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.51 as build
+FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture arm64 \
    && apt-get update \
    && apt-get install -y \
@@ -48,20 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
    libc6-dev:arm64 \
    libpq5:arm64 \
    libpq-dev \
+    libmariadb3:amd64 \
    libmariadb-dev:arm64 \
-    libmariadb-dev-compat:arm64
+    libmariadb-dev-compat:arm64 \
+    gcc-aarch64-linux-gnu \
-RUN apt-get update \
-    && apt-get install -y \
-    --no-install-recommends \
-    gcc-aarch64-linux-gnu \
-    && mkdir -p ~/.cargo \
-    && echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
-    && echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
+# Set arm specific environment values
+ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
+ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -72,33 +101,13 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
-    rm -rvf ./libmariadb-dev-compat*.deb
-# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-# Without this specific file the ld command will fail and compilation fails with it.
-RUN ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so
-ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
-ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu \
+    && find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
@@ -120,10 +129,12 @@ ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
+# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
-# Install needed libraries
+# Create data folder and Install needed libraries
-RUN apt-get update && apt-get install -y \
+RUN mkdir /data \
+    && apt-get update && apt-get install -y \
    --no-install-recommends \
    openssl \
    ca-certificates \
@@ -131,10 +142,10 @@ RUN apt-get update && apt-get install -y \
    dumb-init \
    libmariadb-dev-compat \
    libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
-RUN mkdir /data
+# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data

arm64/Dockerfile.buildx (new file, 169 lines)

@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev \
libmariadb3:amd64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
#
# Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
#
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.19.0d
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
-#     [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+#     $ docker pull vaultwarden/web-vault:v2.23.0c
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
+#     [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-#     [vaultwarden/web-vault:v2.19.0d]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+#     [vaultwarden/web-vault:v2.23.0c]
 #
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
 ########################## BUILD IMAGE ##########################
-FROM rust:1.51 as build
+FROM rust:1.55-buster as build
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-# Don't download rust docs
-RUN rustup set profile minimal
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armel architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture armel \
     && apt-get update \
     && apt-get install -y \
@@ -48,20 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
         libc6-dev:armel \
         libpq5:armel \
         libpq-dev \
+        libmariadb3:amd64 \
         libmariadb-dev:armel \
-        libmariadb-dev-compat:armel
-RUN apt-get update \
-    && apt-get install -y \
-        --no-install-recommends \
-        gcc-arm-linux-gnueabi \
-    && mkdir -p ~/.cargo \
-    && echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
+        libmariadb-dev-compat:armel \
+        gcc-arm-linux-gnueabi \
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"
+# Set arm specific environment values
+ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -72,33 +101,13 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
-    rm -rvf ./libmariadb-dev-compat*.deb
-# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-# The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-# Without this specific file the ld command will fail and compilation fails with it.
-RUN ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
-ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
 RUN rustup target add arm-unknown-linux-gnueabi
 # Builds your dependencies and removes the
 # dummy project, except the target folder
 # This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi \
+    && find . -not -path "./target*" -delete
 # Copies the complete project
 # To avoid copying unneeded files, use .dockerignore
@@ -120,10 +129,12 @@ ENV ROCKET_ENV "staging"
 ENV ROCKET_PORT=80
 ENV ROCKET_WORKERS=10
+# hadolint ignore=DL3059
 RUN [ "cross-build-start" ]
-# Install needed libraries
-RUN apt-get update && apt-get install -y \
+# Create data folder and Install needed libraries
+RUN mkdir /data \
+    && apt-get update && apt-get install -y \
         --no-install-recommends \
         openssl \
         ca-certificates \
@@ -131,10 +142,10 @@ RUN apt-get update && apt-get install -y \
         dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
-RUN mkdir /data
+# hadolint ignore=DL3059
 RUN [ "cross-build-end" ]
 VOLUME /data

View File

@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armel architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel \
libpq5:armel \
libpq-dev \
libmariadb3:amd64 \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
#
# Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
#
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-gnueabi
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.19.0d
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
-#     [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+#     $ docker pull vaultwarden/web-vault:v2.23.0c
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
+#     [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-#     [vaultwarden/web-vault:v2.19.0d]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+#     [vaultwarden/web-vault:v2.23.0c]
 #
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
 ########################## BUILD IMAGE ##########################
-FROM rust:1.51 as build
+FROM rust:1.55-buster as build
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-# Don't download rust docs
-RUN rustup set profile minimal
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armhf architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture armhf \
     && apt-get update \
     && apt-get install -y \
@@ -48,20 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
         libc6-dev:armhf \
         libpq5:armhf \
         libpq-dev \
+        libmariadb3:amd64 \
         libmariadb-dev:armhf \
-        libmariadb-dev-compat:armhf
-RUN apt-get update \
-    && apt-get install -y \
-        --no-install-recommends \
-        gcc-arm-linux-gnueabihf \
-    && mkdir -p ~/.cargo \
-    && echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
+        libmariadb-dev-compat:armhf \
+        gcc-arm-linux-gnueabihf \
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
+# Set arm specific environment values
+ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -72,33 +101,13 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
-    rm -rvf ./libmariadb-dev-compat*.deb
-# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-# The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-# Without this specific file the ld command will fail and compilation fails with it.
-RUN ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
-ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
 RUN rustup target add armv7-unknown-linux-gnueabihf
 # Builds your dependencies and removes the
 # dummy project, except the target folder
 # This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf \
+    && find . -not -path "./target*" -delete
 # Copies the complete project
 # To avoid copying unneeded files, use .dockerignore
@@ -120,10 +129,12 @@ ENV ROCKET_ENV "staging"
 ENV ROCKET_PORT=80
 ENV ROCKET_WORKERS=10
+# hadolint ignore=DL3059
 RUN [ "cross-build-start" ]
-# Install needed libraries
-RUN apt-get update && apt-get install -y \
+# Create data folder and Install needed libraries
+RUN mkdir /data \
+    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
@@ -131,10 +142,10 @@ RUN apt-get update && apt-get install -y \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
-RUN mkdir /data
+# hadolint ignore=DL3059
 RUN [ "cross-build-end" ]
 VOLUME /data

View File

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,29 +16,36 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.19.0d
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
-#     [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
+#     $ docker pull vaultwarden/web-vault:v2.23.0c
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
+#     [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
-#     [vaultwarden/web-vault:v2.19.0d]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
+#     [vaultwarden/web-vault:v2.23.0c]
 #
-FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
+FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
 ########################## BUILD IMAGE ##########################
 FROM messense/rust-musl-cross:armv7-musleabihf as build
 # Alpine-based ARM (musl) only supports sqlite during compile time.
-ARG DB=sqlite
+# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
+ARG DB=sqlite,vendored_openssl
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-# Don't download rust docs
-RUN rustup set profile minimal
-ENV USER "root"
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 ENV RUSTFLAGS='-C link-arg=-s'
 ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
@@ -54,8 +63,8 @@ RUN rustup target add armv7-unknown-linux-musleabihf
 # Builds your dependencies and removes the
 # dummy project, except the target folder
 # This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
-RUN find . -not -path "./target*" -delete
+RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf \
+    && find . -not -path "./target*" -delete
 # Copies the complete project
 # To avoid copying unneeded files, use .dockerignore
@@ -67,29 +76,32 @@ RUN touch src/main.rs
 # Builds again, this time it'll just be
 # your actual source files being built
 RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
+# hadolint ignore=DL3059
 RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
 ######################## RUNTIME IMAGE ########################
 # Create a new stage with a minimal image
 # because we already have a binary built
-FROM balenalib/armv7hf-alpine:3.13
+FROM balenalib/armv7hf-alpine:3.14
 ENV ROCKET_ENV "staging"
 ENV ROCKET_PORT=80
 ENV ROCKET_WORKERS=10
 ENV SSL_CERT_DIR=/etc/ssl/certs
+# hadolint ignore=DL3059
 RUN [ "cross-build-start" ]
-# Install needed libraries
-RUN apk add --no-cache \
+# Create data folder and Install needed libraries
+RUN mkdir /data \
+    && apk add --no-cache \
        openssl \
+       tzdata \
        curl \
        dumb-init \
        ca-certificates
-RUN mkdir /data
+# hadolint ignore=DL3059
 RUN [ "cross-build-end" ]
 VOLUME /data

View File

@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armhf architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev \
libmariadb3:amd64 \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
#
# Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
#
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-gnueabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,125 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.23.0c
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.23.0c
# [vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459
# [vaultwarden/web-vault:v2.23.0c]
#
FROM vaultwarden/web-vault@sha256:dc94d303def3583af08816e91803f1c42107645612440f474f553f0cb0f97459 as vault
########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build
# Alpine-based ARM (musl) only supports sqlite during compile time.
# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
ARG DB=sqlite,vendored_openssl
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
ENV RUSTFLAGS='-C link-arg=-s'
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-musleabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# hadolint ignore=DL3059
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.14
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -34,12 +34,17 @@ for label in "${LABELS[@]}"; do
     LABEL_ARGS+=(--label "${label}")
 done
+# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template
+if [[ -n "${DOCKER_BUILDKIT}" ]]; then
+    buildx_suffix=.buildx
+fi
 set -ex
 for arch in "${arches[@]}"; do
     docker build \
         "${LABEL_ARGS[@]}" \
         -t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-        -f docker/${arch}/Dockerfile${distro_suffix} \
+        -f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
         .
 done
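The net effect: exporting DOCKER_BUILDKIT=1 makes the hook pick the Dockerfile.buildx* variants. The selection logic in isolation, with example values for the variables the hook sets elsewhere:

    # arch and distro_suffix are normally set earlier in the hook; example values here
    arch=armv7
    distro_suffix=.alpine
    buildx_suffix=
    if [[ -n "${DOCKER_BUILDKIT}" ]]; then
        buildx_suffix=.buildx
    fi
    echo "docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix}"
    # prints docker/armv7/Dockerfile.buildx.alpine when DOCKER_BUILDKIT=1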

View File

@@ -10,7 +10,7 @@ join() { local IFS="$1"; shift; echo "$*"; }
 set -ex
-echo ">>> Starting local Docker registry..."
+echo ">>> Starting local Docker registry when needed..."
 # Docker Buildx's `docker-container` driver is needed for multi-platform
 # builds, but it can't access existing images on the Docker host (like the
@@ -25,7 +25,13 @@ echo ">>> Starting local Docker registry..."
 # Use host networking so the buildx container can access the registry via
 # localhost.
 #
-docker run -d --name registry --network host registry:2 # defaults to port 5000
+# First check if there already is a registry container running, else skip it.
+# This will only happen either locally or running it via Github Actions
+#
+if ! timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5000'; then
+    # defaults to port 5000
+    docker run -d --name registry --network host registry:2
+fi
 # Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
 # Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.
@@ -49,7 +55,12 @@ echo ">>> Setting up Docker Buildx..."
 #
 # Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
 #
+# Check if there already is a builder running, else skip this and use the existing.
+# This will only happen either locally or running it via Github Actions
+#
+if ! docker buildx inspect builder > /dev/null 2>&1 ; then
-docker buildx create --name builder --use --driver-opt network=host
+    docker buildx create --name builder --use --driver-opt network=host
+fi
 echo ">>> Running Docker Buildx..."

View File

@@ -0,0 +1,2 @@
ALTER TABLE ciphers
ADD COLUMN reprompt INTEGER;

View File

@@ -0,0 +1,2 @@
ALTER TABLE sends
ADD COLUMN hide_email BOOLEAN;

View File

@@ -0,0 +1,5 @@
ALTER TABLE organizations
ADD COLUMN private_key TEXT;
ALTER TABLE organizations
ADD COLUMN public_key TEXT;

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid CHAR(36) NOT NULL PRIMARY KEY,
grantor_uuid CHAR(36) REFERENCES users (uuid),
grantee_uuid CHAR(36) REFERENCES users (uuid),
email VARCHAR(255),
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at DATETIME,
last_notification_at DATETIME,
updated_at DATETIME NOT NULL,
created_at DATETIME NOT NULL
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE ciphers
ADD COLUMN reprompt INTEGER;

View File

@@ -0,0 +1,2 @@
ALTER TABLE sends
ADD COLUMN hide_email BOOLEAN;

View File

@@ -0,0 +1,5 @@
ALTER TABLE organizations
ADD COLUMN private_key TEXT;
ALTER TABLE organizations
ADD COLUMN public_key TEXT;

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid CHAR(36) NOT NULL PRIMARY KEY,
grantor_uuid CHAR(36) REFERENCES users (uuid),
grantee_uuid CHAR(36) REFERENCES users (uuid),
email VARCHAR(255),
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at TIMESTAMP,
last_notification_at TIMESTAMP,
updated_at TIMESTAMP NOT NULL,
created_at TIMESTAMP NOT NULL
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE ciphers
ADD COLUMN reprompt INTEGER;

View File

@@ -0,0 +1,2 @@
ALTER TABLE sends
ADD COLUMN hide_email BOOLEAN;

View File

@@ -0,0 +1,5 @@
ALTER TABLE organizations
ADD COLUMN private_key TEXT;
ALTER TABLE organizations
ADD COLUMN public_key TEXT;

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid TEXT NOT NULL PRIMARY KEY,
grantor_uuid TEXT REFERENCES users (uuid),
grantee_uuid TEXT REFERENCES users (uuid),
email TEXT,
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at DATETIME,
last_notification_at DATETIME,
updated_at DATETIME NOT NULL,
created_at DATETIME NOT NULL
);
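The same emergency_access schema is added once per backend, using each database's idioms: MySQL uses CHAR(36)/DATETIME, PostgreSQL CHAR(36)/TIMESTAMP, and SQLite TEXT/DATETIME. These Diesel migrations are applied automatically when the server starts. One way to confirm the result on a SQLite deployment (the database path is an example; adjust to your data folder):

    # Show the table definition after the migration has run
    sqlite3 data/db.sqlite3 '.schema emergency_access'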

[Binary image diffs suppressed (too large to display). Four images changed; new sizes: 8.7 KiB, 8.4 KiB, 17 KiB, 12 KiB.]

View File

@@ -1 +1 @@
-nightly-2021-04-14
+nightly-2021-10-14

View File

@@ -4,7 +4,7 @@ use serde_json::Value;
use std::{env, time::Duration}; use std::{env, time::Duration};
use rocket::{ use rocket::{
http::{Cookie, Cookies, SameSite}, http::{Cookie, Cookies, SameSite, Status},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request}, request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
response::{content::Html, Flash, Redirect}, response::{content::Html, Flash, Redirect},
Route, Route,
@@ -12,13 +12,15 @@ use rocket::{
use rocket_contrib::json::Json; use rocket_contrib::json::Json;
use crate::{ use crate::{
api::{ApiResult, EmptyResult, NumberOrString}, api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp}, auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder, config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType}, db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
error::{Error, MapResult}, error::{Error, MapResult},
mail, mail,
util::{format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker}, util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
},
CONFIG, CONFIG,
}; };
@@ -30,6 +32,7 @@ pub fn routes() -> Vec<Route> {
routes![ routes![
admin_login, admin_login,
get_users_json, get_users_json,
get_user_json,
post_admin_login, post_admin_login,
admin_page, admin_page,
invite_user, invite_user,
@@ -195,9 +198,7 @@ fn _validate_token(token: &str) -> bool {
struct AdminTemplateData { struct AdminTemplateData {
page_content: String, page_content: String,
version: Option<&'static str>, version: Option<&'static str>,
users: Option<Vec<Value>>, page_data: Option<Value>,
organizations: Option<Vec<Value>>,
diagnostics: Option<Value>,
config: Value, config: Value,
can_backup: bool, can_backup: bool,
logged_in: bool, logged_in: bool,
@@ -213,51 +214,19 @@ impl AdminTemplateData {
can_backup: *CAN_BACKUP, can_backup: *CAN_BACKUP,
logged_in: true, logged_in: true,
urlpath: CONFIG.domain_path(), urlpath: CONFIG.domain_path(),
users: None, page_data: None,
organizations: None,
diagnostics: None,
} }
} }
fn users(users: Vec<Value>) -> Self { fn with_data(page_content: &str, page_data: Value) -> Self {
Self { Self {
page_content: String::from("admin/users"), page_content: String::from(page_content),
version: VERSION, version: VERSION,
users: Some(users), page_data: Some(page_data),
config: CONFIG.prepare_json(), config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP, can_backup: *CAN_BACKUP,
logged_in: true, logged_in: true,
urlpath: CONFIG.domain_path(), urlpath: CONFIG.domain_path(),
organizations: None,
diagnostics: None,
}
}
fn organizations(organizations: Vec<Value>) -> Self {
Self {
page_content: String::from("admin/organizations"),
version: VERSION,
organizations: Some(organizations),
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
users: None,
diagnostics: None,
}
}
fn diagnostics(diagnostics: Value) -> Self {
Self {
page_content: String::from("admin/diagnostics"),
version: VERSION,
organizations: None,
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
users: None,
diagnostics: Some(diagnostics),
} }
} }
@@ -278,23 +247,39 @@ struct InviteData {
email: String, email: String,
} }
fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn) {
Ok(user)
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[post("/invite", data = "<data>")] #[post("/invite", data = "<data>")]
fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> EmptyResult { fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> JsonResult {
let data: InviteData = data.into_inner(); let data: InviteData = data.into_inner();
let email = data.email.clone(); let email = data.email.clone();
if User::find_by_mail(&data.email, &conn).is_some() { if User::find_by_mail(&data.email, &conn).is_some() {
err!("User already exists") err_code!("User already exists", Status::Conflict.code)
} }
let mut user = User::new(email); let mut user = User::new(email);
user.save(&conn)?;
// TODO: After try_blocks is stabilized, this can be made more readable
// See: https://github.com/rust-lang/rust/issues/31436
(|| {
if CONFIG.mail_enabled() { if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None) mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)?;
} else { } else {
let invitation = Invitation::new(data.email); let invitation = Invitation::new(user.email.clone());
invitation.save(&conn) invitation.save(&conn)?;
} }
user.save(&conn)
})()
.map_err(|e| e.with_code(Status::InternalServerError.code))?;
Ok(Json(user.to_json(&conn)))
} }
#[post("/test/smtp", data = "<data>")] #[post("/test/smtp", data = "<data>")]
@@ -343,19 +328,26 @@ fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
}) })
.collect(); .collect();
let text = AdminTemplateData::users(users_json).render()?; let text = AdminTemplateData::with_data("admin/users", json!(users_json)).render()?;
Ok(Html(text)) Ok(Html(text))
} }
#[get("/users/<uuid>")]
fn get_user_json(uuid: String, _token: AdminToken, conn: DbConn) -> JsonResult {
let user = get_user_or_404(&uuid, &conn)?;
Ok(Json(user.to_json(&conn)))
}
#[post("/users/<uuid>/delete")] #[post("/users/<uuid>/delete")]
fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let user = get_user_or_404(&uuid, &conn)?;
user.delete(&conn) user.delete(&conn)
} }
#[post("/users/<uuid>/deauth")] #[post("/users/<uuid>/deauth")]
fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?; Device::delete_all_by_user(&user.uuid, &conn)?;
user.reset_security_stamp(); user.reset_security_stamp();
@@ -364,7 +356,7 @@ fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
#[post("/users/<uuid>/disable")] #[post("/users/<uuid>/disable")]
fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?; Device::delete_all_by_user(&user.uuid, &conn)?;
user.reset_security_stamp(); user.reset_security_stamp();
user.enabled = false; user.enabled = false;
@@ -374,7 +366,7 @@ fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
#[post("/users/<uuid>/enable")] #[post("/users/<uuid>/enable")]
fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let mut user = get_user_or_404(&uuid, &conn)?;
user.enabled = true; user.enabled = true;
user.save(&conn) user.save(&conn)
@@ -382,7 +374,7 @@ fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
#[post("/users/<uuid>/remove-2fa")] #[post("/users/<uuid>/remove-2fa")]
fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let mut user = get_user_or_404(&uuid, &conn)?;
TwoFactor::delete_all_by_user(&user.uuid, &conn)?; TwoFactor::delete_all_by_user(&user.uuid, &conn)?;
user.totp_recover = None; user.totp_recover = None;
user.save(&conn) user.save(&conn)
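Note: the hunks above all swap `User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?` for a shared `get_user_or_404` helper whose definition lands elsewhere in admin.rs and is not visible in this diff. A minimal sketch of what such a helper plausibly looks like (the body below is an assumption for illustration, not the committed code):

fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
    // Hypothetical sketch: look the user up and map a miss to a 404-coded
    // error instead of the generic error that map_res() would produce.
    if let Some(user) = User::find_by_uuid(uuid, conn) {
        Ok(user)
    } else {
        err_code!("User doesn't exist", Status::NotFound.code)
    }
}

The payoff is a single place that decides the status code for a missing user, rather than five copies of the same lookup-and-error dance.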
@@ -442,7 +434,7 @@ fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<St
         })
         .collect();

-    let text = AdminTemplateData::organizations(organizations_json).render()?;
+    let text = AdminTemplateData::with_data("admin/organizations", json!(organizations_json)).render()?;
     Ok(Html(text))
 }
@@ -502,6 +494,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
     // Execute some environment checks
     let running_within_docker = is_running_in_docker();
+    let docker_base_image = docker_base_image();
     let has_http_access = has_http_access();
     let uses_proxy = env::var_os("HTTP_PROXY").is_some()
         || env::var_os("http_proxy").is_some()
@@ -559,6 +552,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
         "web_vault_version": web_vault_version.version,
         "latest_web_build": latest_web_build,
         "running_within_docker": running_within_docker,
+        "docker_base_image": docker_base_image,
         "has_http_access": has_http_access,
         "ip_header_exists": &ip_header.0.is_some(),
         "ip_header_match": ip_header_name == CONFIG.ip_header(),
@@ -568,11 +562,12 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
         "db_type": *DB_TYPE,
         "db_version": get_sql_server_version(&conn),
         "admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
+        "overrides": &CONFIG.get_overrides().join(", "),
         "server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
         "server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
     });

-    let text = AdminTemplateData::diagnostics(diagnostics_json).render()?;
+    let text = AdminTemplateData::with_data("admin/diagnostics", diagnostics_json).render()?;
     Ok(Html(text))
 }
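Note: `docker_base_image()` is called in the hunk above but defined outside the visible diff. A hedged sketch of the idea, assuming detection via the distributions' release files (the committed helper may detect this differently):

fn docker_base_image() -> &'static str {
    // Hypothetical sketch: report which base distribution the running image
    // was built on, for display on the admin diagnostics page.
    if std::path::Path::new("/etc/alpine-release").exists() {
        "Alpine"
    } else if std::path::Path::new("/etc/debian_version").exists() {
        "Debian"
    } else {
        "Unknown"
    }
}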

View File

@@ -49,6 +49,7 @@ struct RegisterData {
     MasterPasswordHint: Option<String>,
     Name: Option<String>,
     Token: Option<String>,
+    #[allow(dead_code)]
     OrganizationUserId: Option<String>,
 }
@@ -62,11 +63,12 @@ struct KeysData {
#[post("/accounts/register", data = "<data>")] #[post("/accounts/register", data = "<data>")]
fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult { fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
let data: RegisterData = data.into_inner().data; let data: RegisterData = data.into_inner().data;
let email = data.Email.to_lowercase();
let mut user = match User::find_by_mail(&data.Email, &conn) { let mut user = match User::find_by_mail(&email, &conn) {
Some(user) => { Some(user) => {
if !user.password_hash.is_empty() { if !user.password_hash.is_empty() {
if CONFIG.is_signup_allowed(&data.Email) { if CONFIG.is_signup_allowed(&email) {
err!("User already exists") err!("User already exists")
} else { } else {
err!("Registration not allowed or user already exists") err!("Registration not allowed or user already exists")
@@ -75,20 +77,24 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
if let Some(token) = data.Token { if let Some(token) = data.Token {
let claims = decode_invite(&token)?; let claims = decode_invite(&token)?;
if claims.email == data.Email { if claims.email == email {
user user
} else { } else {
err!("Registration email does not match invite email") err!("Registration email does not match invite email")
} }
} else if Invitation::take(&data.Email, &conn) { } else if Invitation::take(&email, &conn) {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() { for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() {
user_org.status = UserOrgStatus::Accepted as i32; user_org.status = UserOrgStatus::Accepted as i32;
user_org.save(&conn)?; user_org.save(&conn)?;
} }
user user
} else if CONFIG.is_signup_allowed(&data.Email) { } else if CONFIG.is_signup_allowed(&email) {
err!("Account with this email already exists") // check if it's invited by emergency contact
match EmergencyAccess::find_invited_by_grantee_email(&data.Email, &conn) {
Some(_) => user,
_ => err!("Account with this email already exists"),
}
} else { } else {
err!("Registration not allowed or user already exists") err!("Registration not allowed or user already exists")
} }
@@ -97,8 +103,8 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
// Order is important here; the invitation check must come first // Order is important here; the invitation check must come first
// because the vaultwarden admin can invite anyone, regardless // because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions. // of other signup restrictions.
if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) { if Invitation::take(&email, &conn) || CONFIG.is_signup_allowed(&email) {
User::new(data.Email.clone()) User::new(email.clone())
} else { } else {
err!("Registration not allowed or user already exists") err!("Registration not allowed or user already exists")
} }
@@ -106,7 +112,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}; };
// Make sure we don't leave a lingering invitation. // Make sure we don't leave a lingering invitation.
Invitation::take(&data.Email, &conn); Invitation::take(&email, &conn);
if let Some(client_kdf_iter) = data.KdfIterations { if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter; user.client_kdf_iter = client_kdf_iter;
@@ -231,7 +237,10 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
err!("Invalid password") err!("Invalid password")
} }
user.set_password(&data.NewMasterPasswordHash, Some("post_rotatekey")); user.set_password(
&data.NewMasterPasswordHash,
Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
);
user.akey = data.Key; user.akey = data.Key;
user.save(&conn) user.save(&conn)
} }
@@ -320,7 +329,9 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
err!("The cipher is not owned by the user") err!("The cipher is not owned by the user")
} }
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::CipherUpdate)? // Prevent triggering cipher updates via WebSockets by settings UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None)?
} }
// Update user data // Update user data
@@ -329,7 +340,6 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
user.akey = data.Key; user.akey = data.Key;
user.private_key = Some(data.PrivateKey); user.private_key = Some(data.PrivateKey);
user.reset_security_stamp(); user.reset_security_stamp();
user.reset_stamp_exception();
user.save(&conn) user.save(&conn)
} }
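Note: both hunks above touch the "security stamp exception" mechanism: after a password change, the previous security stamp stays temporarily acceptable for a small allow-list of routes (here post_rotatekey, get_contacts and get_public_keys) so the client can finish key rotation without being logged out mid-flow, and post_rotatekey no longer needs to reset the exception by hand. A rough sketch of the shape such an exception plausibly has (field names are illustrative, not the committed struct):

struct UserStampException {
    // Routes the previous security stamp may still call.
    routes: Vec<String>,
    // The stamp that remains acceptable for those routes.
    security_stamp: String,
}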
@@ -576,24 +586,45 @@ struct PasswordHintData {
#[post("/accounts/password-hint", data = "<data>")] #[post("/accounts/password-hint", data = "<data>")]
fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult { fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
err!("This server is not configured to provide password hints.");
}
const NO_HINT: &str = "Sorry, you have no password hint...";
let data: PasswordHintData = data.into_inner().data; let data: PasswordHintData = data.into_inner().data;
let email = &data.Email;
let hint = match User::find_by_mail(&data.Email, &conn) { match User::find_by_mail(email, &conn) {
Some(user) => user.password_hint, None => {
None => return Ok(()), // To prevent user enumeration, act as if the user exists.
};
if CONFIG.mail_enabled() { if CONFIG.mail_enabled() {
mail::send_password_hint(&data.Email, hint)?; // There is still a timing side channel here in that the code
} else if CONFIG.show_password_hint() { // paths that send mail take noticeably longer than ones that
if let Some(hint) = hint { // don't. Add a randomized sleep to mitigate this somewhat.
err!(format!("Your password hint is: {}", &hint)); use rand::{thread_rng, Rng};
} else { let mut rng = thread_rng();
err!("Sorry, you have no password hint..."); let base = 1000;
} let delta: i32 = 100;
} let sleep_ms = (base + rng.gen_range(-delta..=delta)) as u64;
std::thread::sleep(std::time::Duration::from_millis(sleep_ms));
Ok(()) Ok(())
} else {
err!(NO_HINT);
}
}
Some(user) => {
let hint: Option<String> = user.password_hint;
if CONFIG.mail_enabled() {
mail::send_password_hint(email, hint)?;
Ok(())
} else if let Some(hint) = hint {
err!(format!("Your password hint is: {}", hint));
} else {
err!(NO_HINT);
}
}
}
} }
 #[derive(Deserialize)]
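Note: the randomized sleep in password_hint() above can be read in isolation. A self-contained sketch of the same mitigation, using the same constants and the rand crate's thread_rng/gen_range API:

use rand::{thread_rng, Rng};
use std::{thread, time::Duration};

// Sleep for roughly 1 second +/- 100 ms so that the "user not found" path
// takes about as long as the path that actually sends a hint email.
fn jittered_sleep() {
    let base: i32 = 1000;
    let delta: i32 = 100;
    let sleep_ms = (base + thread_rng().gen_range(-delta..=delta)) as u64;
    thread::sleep(Duration::from_millis(sleep_ms));
}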

View File

@@ -1,12 +1,11 @@
 use std::collections::{HashMap, HashSet};
-use std::path::Path;
+use std::path::{Path, PathBuf};

 use chrono::{NaiveDateTime, Utc};
 use rocket::{http::ContentType, request::Form, Data, Route};
 use rocket_contrib::json::Json;
 use serde_json::Value;

-use data_encoding::HEXLOWER;
 use multipart::server::{save::SavedData, Multipart, SaveResult};

 use crate::{
@@ -39,8 +38,11 @@ pub fn routes() -> Vec<Route> {
     post_ciphers_admin,
     post_ciphers_create,
     post_ciphers_import,
-    post_attachment,
-    post_attachment_admin,
+    get_attachment,
+    post_attachment_v2,
+    post_attachment_v2_data,
+    post_attachment, // legacy
+    post_attachment_admin, // legacy
     post_attachment_share,
     delete_attachment_post,
     delete_attachment_post_admin,
@@ -103,7 +105,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> Json<Value> {
     let collections_json: Vec<Value> =
         collections.iter().map(|c| c.to_json_details(&headers.user.uuid, &conn)).collect();

-    let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
+    let policies = OrgPolicy::find_confirmed_by_user(&headers.user.uuid, &conn);
     let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();

     let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
@@ -199,6 +201,7 @@ pub struct CipherData {
     Identity: Option<Value>,

     Favorite: Option<bool>,
+    Reprompt: Option<i32>,

     PasswordHistory: Option<Value>,
@@ -245,7 +248,7 @@ fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn
     // This check is usually only needed in update_cipher_from_data(), but we
     // need it here as well to avoid creating an empty cipher in the call to
     // cipher.save() below.
-    enforce_personal_ownership_policy(&data.Cipher, &headers, &conn)?;
+    enforce_personal_ownership_policy(Some(&data.Cipher), &headers, &conn)?;

     let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
     cipher.user_uuid = Some(headers.user.uuid.clone());
@@ -265,7 +268,13 @@ fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn
 /// Called when creating a new user-owned cipher.
 #[post("/ciphers", data = "<data>")]
 fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
-    let data: CipherData = data.into_inner().data;
+    let mut data: CipherData = data.into_inner().data;
+
+    // The web/browser clients set this field to null as expected, but the
+    // mobile clients seem to set the invalid value `0001-01-01T00:00:00`,
+    // which results in a warning message being logged. This field isn't
+    // needed when creating a new cipher, so just ignore it unconditionally.
+    data.LastKnownRevisionDate = None;

     let mut cipher = Cipher::new(data.Type, data.Name.clone());
     update_cipher_from_data(&mut cipher, data, &headers, false, &conn, &nt, UpdateType::CipherCreate)?;
@@ -280,8 +289,8 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
 /// allowed to delete or share such ciphers to an org, however.
 ///
 /// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
-fn enforce_personal_ownership_policy(data: &CipherData, headers: &Headers, conn: &DbConn) -> EmptyResult {
-    if data.OrganizationId.is_none() {
+fn enforce_personal_ownership_policy(data: Option<&CipherData>, headers: &Headers, conn: &DbConn) -> EmptyResult {
+    if data.is_none() || data.unwrap().OrganizationId.is_none() {
         let user_uuid = &headers.user.uuid;
         let policy_type = OrgPolicyType::PersonalOwnership;
         if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
@@ -300,7 +309,7 @@ pub fn update_cipher_from_data(
     nt: &Notify,
     ut: UpdateType,
 ) -> EmptyResult {
-    enforce_personal_ownership_policy(&data, headers, conn)?;
+    enforce_personal_ownership_policy(Some(&data), headers, conn)?;

     // Check that the client isn't updating an existing cipher with stale data.
     if let Some(dt) = data.LastKnownRevisionDate {
@@ -319,12 +328,12 @@ pub fn update_cipher_from_data(
     }

     if let Some(org_id) = data.OrganizationId {
-        match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
+        match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, conn) {
             None => err!("You don't have permission to add item to organization"),
             Some(org_user) => {
                 if shared_to_collection
                     || org_user.has_full_access()
-                    || cipher.is_write_accessible_to_user(&headers.user.uuid, &conn)
+                    || cipher.is_write_accessible_to_user(&headers.user.uuid, conn)
                 {
                     cipher.organization_uuid = Some(org_id);
                     // After some discussion in PR #1329 re-added the user_uuid = None again.
@@ -356,7 +365,7 @@ pub fn update_cipher_from_data(
     // Modify attachments name and keys when rotating
     if let Some(attachments) = data.Attachments2 {
         for (id, attachment) in attachments {
-            let mut saved_att = match Attachment::find_by_id(&id, &conn) {
+            let mut saved_att = match Attachment::find_by_id(&id, conn) {
                 Some(att) => att,
                 None => err!("Attachment doesn't exist"),
             };
@@ -371,7 +380,7 @@ pub fn update_cipher_from_data(
             saved_att.akey = Some(attachment.Key);
             saved_att.file_name = attachment.FileName;
-            saved_att.save(&conn)?;
+            saved_att.save(conn)?;
         }
     }
@@ -415,13 +424,14 @@ pub fn update_cipher_from_data(
     cipher.fields = data.Fields.map(|f| _clean_cipher_data(f).to_string());
     cipher.data = type_data.to_string();
     cipher.password_history = data.PasswordHistory.map(|f| f.to_string());
+    cipher.reprompt = data.Reprompt;

-    cipher.save(&conn)?;
-    cipher.move_to_folder(data.FolderId, &headers.user.uuid, &conn)?;
-    cipher.set_favorite(data.Favorite, &headers.user.uuid, &conn)?;
+    cipher.save(conn)?;
+    cipher.move_to_folder(data.FolderId, &headers.user.uuid, conn)?;
+    cipher.set_favorite(data.Favorite, &headers.user.uuid, conn)?;

     if ut != UpdateType::None {
-        nt.send_cipher_update(ut, &cipher, &cipher.update_users_revision(&conn));
+        nt.send_cipher_update(ut, cipher, &cipher.update_users_revision(conn));
     }

     Ok(())
@@ -448,6 +458,8 @@ struct RelationsData {
#[post("/ciphers/import", data = "<data>")] #[post("/ciphers/import", data = "<data>")]
fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
enforce_personal_ownership_policy(None, &headers, &conn)?;
let data: ImportData = data.into_inner().data; let data: ImportData = data.into_inner().data;
// Read and create the folders // Read and create the folders
@@ -591,7 +603,7 @@ fn post_collections_admin(
         cipher.get_collections(&headers.user.uuid, &conn).iter().cloned().collect();

     for collection in posted_collections.symmetric_difference(&current_collections) {
-        match Collection::find_by_uuid(&collection, &conn) {
+        match Collection::find_by_uuid(collection, &conn) {
             None => err!("Invalid collection ID provided"),
             Some(collection) => {
                 if collection.is_writable_by_user(&headers.user.uuid, &conn) {
@@ -677,12 +689,6 @@ fn put_cipher_share_selected(
         };
     }

-    let attachments = Attachment::find_by_ciphers(cipher_ids, &conn);
-    if !attachments.is_empty() {
-        err!("Ciphers should not have any attachments.")
-    }
-
     while let Some(cipher) = data.Ciphers.pop() {
         let mut shared_cipher_data = ShareCipherData {
             Cipher: cipher,
@@ -705,9 +711,9 @@ fn share_cipher_by_uuid(
     conn: &DbConn,
     nt: &Notify,
 ) -> JsonResult {
-    let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
+    let mut cipher = match Cipher::find_by_uuid(uuid, conn) {
         Some(cipher) => {
-            if cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
+            if cipher.is_write_accessible_to_user(&headers.user.uuid, conn) {
                 cipher
             } else {
                 err!("Cipher is not write accessible")
@@ -724,11 +730,11 @@ fn share_cipher_by_uuid(
         None => {}
         Some(organization_uuid) => {
             for uuid in &data.CollectionIds {
-                match Collection::find_by_uuid_and_org(uuid, &organization_uuid, &conn) {
+                match Collection::find_by_uuid_and_org(uuid, &organization_uuid, conn) {
                     None => err!("Invalid collection ID provided"),
                     Some(collection) => {
-                        if collection.is_writable_by_user(&headers.user.uuid, &conn) {
-                            CollectionCipher::save(&cipher.uuid, &collection.uuid, &conn)?;
+                        if collection.is_writable_by_user(&headers.user.uuid, conn) {
+                            CollectionCipher::save(&cipher.uuid, &collection.uuid, conn)?;
                             shared_to_collection = true;
                         } else {
                             err!("No rights to modify the collection")
@@ -742,45 +748,126 @@ fn share_cipher_by_uuid(
     update_cipher_from_data(
         &mut cipher,
         data.Cipher,
-        &headers,
+        headers,
         shared_to_collection,
-        &conn,
-        &nt,
+        conn,
+        nt,
         UpdateType::CipherUpdate,
     )?;

-    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
+    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, conn)))
 }
#[post("/ciphers/<uuid>/attachment", format = "multipart/form-data", data = "<data>")] /// v2 API for downloading an attachment. This just redirects the client to
fn post_attachment( /// the actual location of an attachment.
///
/// Upstream added this v2 API to support direct download of attachments from
/// their object storage service. For self-hosted instances, it basically just
/// redirects to the same location as before the v2 API.
#[get("/ciphers/<uuid>/attachment/<attachment_id>")]
fn get_attachment(uuid: String, attachment_id: String, headers: Headers, conn: DbConn) -> JsonResult {
match Attachment::find_by_id(&attachment_id, &conn) {
Some(attachment) if uuid == attachment.cipher_uuid => Ok(Json(attachment.to_json(&headers.host))),
Some(_) => err!("Attachment doesn't belong to cipher"),
None => err!("Attachment doesn't exist"),
}
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AttachmentRequestData {
Key: String,
FileName: String,
FileSize: i32,
AdminRequest: Option<bool>, // true when attaching from an org vault view
}
enum FileUploadType {
Direct = 0,
// Azure = 1, // only used upstream
}
/// v2 API for creating an attachment associated with a cipher.
/// This redirects the client to the API it should use to upload the attachment.
/// For upstream's cloud-hosted service, it's an Azure object storage API.
/// For self-hosted instances, it's another API on the local instance.
#[post("/ciphers/<uuid>/attachment/v2", data = "<data>")]
fn post_attachment_v2(
uuid: String, uuid: String,
data: Data, data: JsonUpcase<AttachmentRequestData>,
content_type: &ContentType,
headers: Headers, headers: Headers,
conn: DbConn, conn: DbConn,
nt: Notify,
) -> JsonResult { ) -> JsonResult {
let cipher = match Cipher::find_by_uuid(&uuid, &conn) { let cipher = match Cipher::find_by_uuid(&uuid, &conn) {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
err!("Cipher is not write accessible")
}
let attachment_id = crypto::generate_attachment_id();
let data: AttachmentRequestData = data.into_inner().data;
let attachment =
Attachment::new(attachment_id.clone(), cipher.uuid.clone(), data.FileName, data.FileSize, Some(data.Key));
attachment.save(&conn).expect("Error saving attachment");
let url = format!("/ciphers/{}/attachment/{}", cipher.uuid, attachment_id);
let response_key = match data.AdminRequest {
Some(b) if b => "CipherMiniResponse",
_ => "CipherResponse",
};
Ok(Json(json!({ // AttachmentUploadDataResponseModel
"Object": "attachment-fileUpload",
"AttachmentId": attachment_id,
"Url": url,
"FileUploadType": FileUploadType::Direct as i32,
response_key: cipher.to_json(&headers.host, &headers.user.uuid, &conn),
})))
}
+/// Saves the data content of an attachment to a file. This is common code
+/// shared between the v2 and legacy attachment APIs.
+///
+/// When used with the legacy API, this function is responsible for creating
+/// the attachment database record, so `attachment` is None.
+///
+/// When used with the v2 API, post_attachment_v2() has already created the
+/// database record, which is passed in as `attachment`.
+fn save_attachment(
+    mut attachment: Option<Attachment>,
+    cipher_uuid: String,
+    data: Data,
+    content_type: &ContentType,
+    headers: &Headers,
+    conn: &DbConn,
+    nt: Notify,
+) -> Result<Cipher, crate::error::Error> {
+    let cipher = match Cipher::find_by_uuid(&cipher_uuid, conn) {
         Some(cipher) => cipher,
         None => err_discard!("Cipher doesn't exist", data),
     };

-    if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
+    if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn) {
         err_discard!("Cipher is not write accessible", data)
     }

-    let mut params = content_type.params();
-    let boundary_pair = params.next().expect("No boundary provided");
-    let boundary = boundary_pair.1;
+    // In the v2 API, the attachment record has already been created,
+    // so the size limit needs to be adjusted to account for that.
+    let size_adjust = match &attachment {
+        None => 0, // Legacy API
+        Some(a) => a.file_size as i64, // v2 API
+    };

     let size_limit = if let Some(ref user_uuid) = cipher.user_uuid {
         match CONFIG.user_attachment_limit() {
             Some(0) => err_discard!("Attachments are disabled", data),
             Some(limit_kb) => {
-                let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, &conn);
+                let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, conn) + size_adjust;
                 if left <= 0 {
-                    err_discard!("Attachment size limit reached! Delete some files to open space", data)
+                    err_discard!("Attachment storage limit reached! Delete some attachments to free up space", data)
                 }
                 Some(left as u64)
             }
@@ -790,9 +877,9 @@ fn post_attachment(
         match CONFIG.org_attachment_limit() {
             Some(0) => err_discard!("Attachments are disabled", data),
             Some(limit_kb) => {
-                let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, &conn);
+                let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, conn) + size_adjust;
                 if left <= 0 {
-                    err_discard!("Attachment size limit reached! Delete some files to open space", data)
+                    err_discard!("Attachment storage limit reached! Delete some attachments to free up space", data)
                 }
                 Some(left as u64)
             }
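Note: `size_adjust` exists because in the v2 flow the placeholder record created by post_attachment_v2() is already counted by `Attachment::size_by_user`/`size_by_org`, so its declared size has to be credited back before checking the quota. A worked example with made-up numbers:

// Illustrative numbers only.
let limit_kb: i64 = 100;          // user_attachment_limit() of 100 KiB
let recorded: i64 = 90 * 1024;    // size_by_user(), includes a 20 KiB v2 placeholder
let size_adjust: i64 = 20 * 1024; // file_size declared for the placeholder record
let left = (limit_kb * 1024) - recorded + size_adjust;
assert_eq!(left, 30 * 1024);      // 30 KiB of quota remains for this upload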
@@ -802,7 +889,12 @@ fn post_attachment(
err_discard!("Cipher is neither owned by a user nor an organization", data); err_discard!("Cipher is neither owned by a user nor an organization", data);
}; };
let base_path = Path::new(&CONFIG.attachments_folder()).join(&cipher.uuid); let mut params = content_type.params();
let boundary_pair = params.next().expect("No boundary provided");
let boundary = boundary_pair.1;
let base_path = Path::new(&CONFIG.attachments_folder()).join(&cipher_uuid);
let mut path = PathBuf::new();
let mut attachment_key = None; let mut attachment_key = None;
let mut error = None; let mut error = None;
@@ -818,35 +910,81 @@ fn post_attachment(
             }
         }
         "data" => {
-            // This is provided by the client, don't trust it
-            let name = field.headers.filename.expect("No filename provided");
+            // In the legacy API, this is the encrypted filename
+            // provided by the client, stored to the database as-is.
+            // In the v2 API, this value doesn't matter, as it was
+            // already provided and stored via an earlier API call.
+            let encrypted_filename = field.headers.filename;

-            let file_name = HEXLOWER.encode(&crypto::get_random(vec![0; 10]));
-            let path = base_path.join(&file_name);
+            // This random ID is used as the name of the file on disk.
+            // In the legacy API, we need to generate this value here.
+            // In the v2 API, we use the value from post_attachment_v2().
+            let file_id = match &attachment {
+                Some(attachment) => attachment.id.clone(), // v2 API
+                None => crypto::generate_attachment_id(), // Legacy API
+            };
+            path = base_path.join(&file_id);

             let size =
                 match field.data.save().memory_threshold(0).size_limit(size_limit).with_path(path.clone()) {
                     SaveResult::Full(SavedData::File(_, size)) => size as i32,
                     SaveResult::Full(other) => {
-                        std::fs::remove_file(path).ok();
                         error = Some(format!("Attachment is not a file: {:?}", other));
                         return;
                     }
                     SaveResult::Partial(_, reason) => {
-                        std::fs::remove_file(path).ok();
-                        error = Some(format!("Attachment size limit exceeded with this file: {:?}", reason));
+                        error = Some(format!("Attachment storage limit exceeded with this file: {:?}", reason));
                         return;
                     }
                     SaveResult::Error(e) => {
-                        std::fs::remove_file(path).ok();
                         error = Some(format!("Error: {:?}", e));
                         return;
                     }
                 };

-            let mut attachment = Attachment::new(file_name, cipher.uuid.clone(), name, size);
-            attachment.akey = attachment_key.clone();
-            attachment.save(&conn).expect("Error saving attachment");
+            if let Some(attachment) = &mut attachment {
+                // v2 API
+
+                // Check the actual size against the size initially provided by
+                // the client. Upstream allows +/- 1 MiB deviation from this
+                // size, but it's not clear when or why this is needed.
+                const LEEWAY: i32 = 1024 * 1024; // 1 MiB
+                let min_size = attachment.file_size - LEEWAY;
+                let max_size = attachment.file_size + LEEWAY;
+
+                if min_size <= size && size <= max_size {
+                    if size != attachment.file_size {
+                        // Update the attachment with the actual file size.
+                        attachment.file_size = size;
+                        attachment.save(conn).expect("Error updating attachment");
+                    }
+                } else {
+                    attachment.delete(conn).ok();
+
+                    let err_msg = "Attachment size mismatch".to_string();
+                    error!("{} (expected within [{}, {}], got {})", err_msg, min_size, max_size, size);
+                    error = Some(err_msg);
+                }
+            } else {
+                // Legacy API
+
+                if encrypted_filename.is_none() {
+                    error = Some("No filename provided".to_string());
+                    return;
+                }
+                if attachment_key.is_none() {
+                    error = Some("No attachment key provided".to_string());
+                    return;
+                }
+
+                let attachment = Attachment::new(
+                    file_id,
+                    cipher_uuid.clone(),
+                    encrypted_filename.unwrap(),
+                    size,
+                    attachment_key.clone(),
+                );
+                attachment.save(conn).expect("Error saving attachment");
+            }
         }
         _ => error!("Invalid multipart name"),
     }
@@ -854,10 +992,55 @@ fn post_attachment(
.expect("Error processing multipart data"); .expect("Error processing multipart data");
if let Some(ref e) = error { if let Some(ref e) = error {
std::fs::remove_file(path).ok();
err!(e); err!(e);
} }
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn)); nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(conn));
Ok(cipher)
}
/// v2 API for uploading the actual data content of an attachment.
/// This route needs a rank specified so that Rocket prioritizes the
/// /ciphers/<uuid>/attachment/v2 route, which would otherwise conflict
/// with this one.
#[post("/ciphers/<uuid>/attachment/<attachment_id>", format = "multipart/form-data", data = "<data>", rank = 1)]
fn post_attachment_v2_data(
uuid: String,
attachment_id: String,
data: Data,
content_type: &ContentType,
headers: Headers,
conn: DbConn,
nt: Notify,
) -> EmptyResult {
let attachment = match Attachment::find_by_id(&attachment_id, &conn) {
Some(attachment) if uuid == attachment.cipher_uuid => Some(attachment),
Some(_) => err!("Attachment doesn't belong to cipher"),
None => err!("Attachment doesn't exist"),
};
save_attachment(attachment, uuid, data, content_type, &headers, &conn, nt)?;
Ok(())
}
/// Legacy API for creating an attachment associated with a cipher.
#[post("/ciphers/<uuid>/attachment", format = "multipart/form-data", data = "<data>")]
fn post_attachment(
uuid: String,
data: Data,
content_type: &ContentType,
headers: Headers,
conn: DbConn,
nt: Notify,
) -> JsonResult {
// Setting this as None signifies to save_attachment() that it should create
// the attachment database record as well as saving the data to disk.
let attachment = None;
let cipher = save_attachment(attachment, uuid, data, content_type, &headers, &conn, nt)?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn))) Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
} }
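Note: taken together, the v2 flow is two requests: create the attachment record and get back an upload URL, then POST the multipart data to that URL. A hypothetical client-side sketch under those assumptions (reqwest-based; the field values, the "key"/"data" multipart names, and the error handling are illustrative, not vaultwarden code):

use reqwest::blocking::{multipart, Client};

fn upload_attachment_v2(base: &str, cipher_uuid: &str, token: &str) -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Step 1: POST /ciphers/<uuid>/attachment/v2 with the encrypted metadata;
    // the response carries the AttachmentId and the Url to upload to.
    let meta: serde_json::Value = client
        .post(format!("{}/ciphers/{}/attachment/v2", base, cipher_uuid))
        .bearer_auth(token)
        .json(&serde_json::json!({
            "Key": "<encrypted attachment key>",
            "FileName": "<encrypted file name>",
            "FileSize": 1024,
        }))
        .send()?
        .error_for_status()?
        .json()?;

    // Step 2: POST the multipart body to the returned URL, which for a
    // self-hosted instance is the post_attachment_v2_data route above.
    let form = multipart::Form::new()
        .text("key", "<encrypted attachment key>")
        .part("data", multipart::Part::bytes(vec![0u8; 1024]).file_name("<encrypted file name>"));
    client
        .post(format!("{}{}", base, meta["Url"].as_str().unwrap_or_default()))
        .bearer_auth(token)
        .multipart(form)
        .send()?
        .error_for_status()?;

    Ok(())
}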
@@ -1122,22 +1305,22 @@ fn delete_all(
 }

 fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, soft_delete: bool, nt: &Notify) -> EmptyResult {
-    let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
+    let mut cipher = match Cipher::find_by_uuid(uuid, conn) {
         Some(cipher) => cipher,
         None => err!("Cipher doesn't exist"),
     };

-    if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
+    if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn) {
         err!("Cipher can't be deleted by user")
     }

     if soft_delete {
         cipher.deleted_at = Some(Utc::now().naive_utc());
-        cipher.save(&conn)?;
-        nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
+        cipher.save(conn)?;
+        nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(conn));
     } else {
-        cipher.delete(&conn)?;
-        nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(&conn));
+        cipher.delete(conn)?;
+        nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(conn));
     }

     Ok(())
@@ -1170,20 +1353,20 @@ fn _delete_multiple_ciphers(
 }

 fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
-    let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
+    let mut cipher = match Cipher::find_by_uuid(uuid, conn) {
         Some(cipher) => cipher,
         None => err!("Cipher doesn't exist"),
     };

-    if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
+    if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn) {
         err!("Cipher can't be restored by user")
     }

     cipher.deleted_at = None;
-    cipher.save(&conn)?;
-    nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
-    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
+    cipher.save(conn)?;
+    nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(conn));
+    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, conn)))
 }

 fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
@@ -1219,7 +1402,7 @@ fn _delete_cipher_attachment_by_id(
     conn: &DbConn,
     nt: &Notify,
 ) -> EmptyResult {
-    let attachment = match Attachment::find_by_id(&attachment_id, &conn) {
+    let attachment = match Attachment::find_by_id(attachment_id, conn) {
         Some(attachment) => attachment,
         None => err!("Attachment doesn't exist"),
     };
@@ -1228,17 +1411,17 @@ fn _delete_cipher_attachment_by_id(
err!("Attachment from other cipher") err!("Attachment from other cipher")
} }
let cipher = match Cipher::find_by_uuid(&uuid, &conn) { let cipher = match Cipher::find_by_uuid(uuid, conn) {
Some(cipher) => cipher, Some(cipher) => cipher,
None => err!("Cipher doesn't exist"), None => err!("Cipher doesn't exist"),
}; };
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) { if !cipher.is_write_accessible_to_user(&headers.user.uuid, conn) {
err!("Cipher cannot be deleted by user") err!("Cipher cannot be deleted by user")
} }
// Delete attachment // Delete attachment
attachment.delete(&conn)?; attachment.delete(conn)?;
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn)); nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(conn));
Ok(()) Ok(())
} }

View File

@@ -0,0 +1,805 @@
use chrono::{Duration, Utc};
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use std::borrow::Borrow;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, NumberOrString},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![
get_contacts,
get_grantees,
get_emergency_access,
put_emergency_access,
delete_emergency_access,
post_delete_emergency_access,
send_invite,
resend_invite,
accept_invite,
confirm_emergency_access,
initiate_emergency_access,
approve_emergency_access,
reject_emergency_access,
takeover_emergency_access,
password_emergency_access,
view_emergency_access,
policies_emergency_access,
]
}
// region get
#[get("/emergency-access/trusted")]
fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn);
let emergency_access_list_json: Vec<Value> =
emergency_access_list.iter().map(|e| e.to_json_grantee_details(&conn)).collect();
Ok(Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/emergency-access/granted")]
fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn);
let emergency_access_list_json: Vec<Value> =
emergency_access_list.iter().map(|e| e.to_json_grantor_details(&conn)).collect();
Ok(Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/emergency-access/<emer_id>")]
fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn))),
None => err!("Emergency access not valid."),
}
}
// endregion
// region put/post
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessUpdateData {
Type: NumberOrString,
WaitTimeDays: i32,
KeyEncrypted: Option<String>,
}
#[put("/emergency-access/<emer_id>", data = "<data>")]
fn put_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
post_emergency_access(emer_id, data, conn)
}
#[post("/emergency-access/<emer_id>", data = "<data>")]
fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
emergency_access.atype = new_type;
emergency_access.wait_time_days = data.WaitTimeDays;
emergency_access.key_encrypted = data.KeyEncrypted;
emergency_access.save(&conn)?;
Ok(Json(emergency_access.to_json()))
}
// endregion
// region delete
#[delete("/emergency-access/<emer_id>")]
fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
}
emer
}
None => err!("Emergency access not valid."),
};
emergency_access.delete(&conn)?;
Ok(())
}
#[post("/emergency-access/<emer_id>/delete")]
fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn)
}
// endregion
// region invite
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessInviteData {
Email: String,
Type: NumberOrString,
WaitTimeDays: i32,
}
#[post("/emergency-access/invite", data = "<data>")]
fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase();
let wait_time_days = data.WaitTimeDays;
let emergency_access_status = EmergencyAccessStatus::Invited as i32;
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
let grantor_user = headers.user;
// avoid setting yourself as emergency contact
if email == grantor_user.email {
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &conn) {
None => {
if !CONFIG.signups_allowed() {
err!(format!("Grantee user does not exist: {}", email))
}
if !CONFIG.is_email_domain_allowed(&email) {
err!("Email domain not eligible for invitations")
}
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(email.clone());
invitation.save(&conn)?;
}
let mut user = User::new(email.clone());
user.save(&conn)?;
user
}
Some(user) => user,
};
if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
&grantor_user.uuid,
&grantee_user.uuid,
&grantee_user.email,
&conn,
)
.is_some()
{
err!(format!("Grantee user already invited: {}", email))
}
let mut new_emergency_access = EmergencyAccess::new(
grantor_user.uuid.clone(),
Some(grantee_user.email.clone()),
emergency_access_status,
new_type,
wait_time_days,
);
new_emergency_access.save(&conn)?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&grantee_user.email,
&grantee_user.uuid,
Some(new_emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
} else {
// Automatically mark user as accepted if no email invites
match User::find_by_mail(&email, &conn) {
Some(user) => {
match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()) {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
None => err!("Grantee user not found."),
}
}
Ok(())
}
#[post("/emergency-access/<emer_id>/reinvite")]
fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.grantor_uuid != headers.user.uuid {
err!("Emergency access not valid.");
}
if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
err!("The grantee user is already accepted or confirmed to the organization");
}
let email = match emergency_access.email.clone() {
Some(email) => email,
None => err!("Email not valid."),
};
let grantee_user = match User::find_by_mail(&email, &conn) {
Some(user) => user,
None => err!("Grantee user not found."),
};
let grantor_user = headers.user;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&email,
&grantor_user.uuid,
Some(emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
} else {
if Invitation::find_by_mail(&email, &conn).is_none() {
let invitation = Invitation::new(email);
invitation.save(&conn)?;
}
// Automatically mark user as accepted if no email invites
match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow()) {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
Ok(())
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AcceptData {
Token: String,
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
let claims = decode_emergency_access_invite(token)?;
let grantee_user = match User::find_by_mail(&claims.email, &conn) {
Some(user) => {
Invitation::take(&claims.email, &conn);
user
}
None => err!("Invited user not found"),
};
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
&& (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
&& (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
{
match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn) {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email)?;
}
Ok(())
} else {
err!("Emergency access invitation error.")
}
}
fn accept_invite_process(grantee_uuid: String, emer_id: String, email: Option<String>, conn: &DbConn) -> EmptyResult {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let emer_email = emergency_access.email;
if emer_email.is_none() || emer_email != email {
err!("User email does not match invite.");
}
if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(grantee_uuid);
emergency_access.email = None;
emergency_access.save(conn)
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct ConfirmData {
Key: String,
}
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
fn confirm_emergency_access(
emer_id: String,
data: JsonUpcase<ConfirmData>,
headers: Headers,
conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
let confirming_user = headers.user;
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Accepted as i32
|| emergency_access.grantor_uuid != confirming_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.key_encrypted = Some(key);
emergency_access.email = None;
emergency_access.save(&conn)?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
// endregion
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|| emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
let now = Utc::now().naive_utc();
emergency_access.status = EmergencyAccessStatus::RecoveryInitiated as i32;
emergency_access.updated_at = now;
emergency_access.recovery_initiated_at = Some(now);
emergency_access.last_notification_at = Some(now);
emergency_access.save(&conn)?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_initiated(
&grantor_user.email,
&initiating_user.name,
emergency_access.get_type_as_str(),
&emergency_access.wait_time_days.clone().to_string(),
)?;
}
Ok(Json(emergency_access.to_json()))
}
#[post("/emergency-access/<emer_id>/approve")]
fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let approving_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|| emergency_access.grantor_uuid != approving_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
emergency_access.save(&conn)?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
#[post("/emergency-access/<emer_id>/reject")]
fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let rejecting_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|| emergency_access.grantor_uuid != rejecting_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.key_encrypted = None;
emergency_access.save(&conn)?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
// endregion
// region action
#[post("/emergency-access/<emer_id>/view")]
fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let host = headers.host;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
}
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn);
let ciphers_json: Vec<Value> =
ciphers.iter().map(|c| c.to_json(&host, &emergency_access.grantor_uuid, &conn)).collect();
Ok(Json(json!({
"Ciphers": ciphers_json,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessView",
})))
}
#[post("/emergency-access/<emer_id>/takeover")]
fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
Ok(Json(json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
NewMasterPasswordHash: String,
Key: String,
}
#[post("/emergency-access/<emer_id>/password", data = "<data>")]
fn password_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessPasswordData>,
headers: Headers,
conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
let key = data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
// change grantor_user password
grantor_user.set_password(new_master_password_hash, None);
grantor_user.akey = key;
grantor_user.save(&conn)?;
// Disable TwoFactor providers since they will otherwise block logins
TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn)?;
// Remove the grantor from all organisations they belong to, unless they are an Owner there
let user_org_grantor = UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn);
for user_org in user_org_grantor {
if user_org.atype != UserOrgType::Owner as i32 {
user_org.delete(&conn)?;
}
}
Ok(())
}
// endregion
#[get("/emergency-access/<emer_id>/policies")]
fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
Some(user) => user,
None => err!("Grantor user not found."),
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
"Object": "list",
"ContinuationToken": null
})))
}
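/// A request is valid only when it targets the requesting user as grantee, has reached the RecoveryApproved state, and matches the requested access level.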
fn is_valid_request(
emergency_access: &EmergencyAccess,
requesting_user_uuid: String,
requested_access_type: EmergencyAccessType,
) -> bool {
emergency_access.grantee_uuid == Some(requesting_user_uuid)
&& emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
&& emergency_access.atype == requested_access_type as i32
}
fn check_emergency_access_allowed() -> EmptyResult {
if !CONFIG.emergency_access_allowed() {
err!("Emergency access is not allowed.")
}
Ok(())
}
pub fn emergency_request_timeout_job(pool: DbPool) {
debug!("Start emergency_request_timeout_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get() {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);
if emergency_access_list.is_empty() {
debug!("No emergency request timeout to approve");
}
for mut emer in emergency_access_list {
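// Auto-approve once the full wait time has elapsed: e.g. with wait_time_days = 7, a request
// initiated 2021-10-01 00:00 UTC is approved by the first job run at or after 2021-10-08 00:00 UTC.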
if emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days(emer.wait_time_days as i64)
{
emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
emer.save(&conn).expect("Cannot save emergency access on job");
if CONFIG.mail_enabled() {
// Get the grantor user, who receives the "recovery timed out" email
let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");
// Get the grantee user, who receives the "recovery approved" email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.expect("Grantee user not found.");
mail::send_emergency_access_recovery_timed_out(
&grantor_user.email,
&grantee_user.name,
emer.get_type_as_str(),
)
.expect("Error on sending email");
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)
.expect("Error on sending email");
}
}
}
} else {
error!("Failed to get DB connection while searching emergency request timed out")
}
}
pub fn emergency_notification_reminder_job(pool: DbPool) {
debug!("Start emergency_notification_reminder_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get() {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);
if emergency_access_list.is_empty() {
debug!("No emergency request reminder notification to send");
}
for mut emer in emergency_access_list {
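// Send the reminder starting one day before the wait time expires, and at most once per day.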
if (emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days((emer.wait_time_days as i64) - 1))
&& (emer.last_notification_at.is_none()
|| (emer.last_notification_at.is_some()
&& Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
{
emer.save(&conn).expect("Cannot save emergency access on job");
if CONFIG.mail_enabled() {
// Get the grantor user, who receives the reminder email
let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");
// Get the grantee user, whose name appears in the reminder
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.expect("Grantee user not found.");
mail::send_emergency_access_recovery_reminder(
&grantor_user.email,
&grantee_user.name,
emer.get_type_as_str(),
&emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
)
.expect("Error on sending email");
}
}
}
} else {
error!("Failed to get DB connection while searching emergency notification reminder")
}
}

View File

@@ -1,11 +1,13 @@
 mod accounts;
 mod ciphers;
+mod emergency_access;
 mod folders;
 mod organizations;
 mod sends;
 pub mod two_factor;
 pub use ciphers::purge_trashed_ciphers;
+pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
 pub use sends::purge_sends;
 pub fn routes() -> Vec<Route> {
@@ -15,6 +17,7 @@ pub fn routes() -> Vec<Route> {
     let mut routes = Vec::new();
     routes.append(&mut accounts::routes());
     routes.append(&mut ciphers::routes());
+    routes.append(&mut emergency_access::routes());
     routes.append(&mut folders::routes());
     routes.append(&mut organizations::routes());
     routes.append(&mut two_factor::routes());
@@ -27,7 +30,6 @@ pub fn routes() -> Vec<Route> {
 //
 // Move this somewhere else
 //
-use rocket::response::Response;
 use rocket::Route;
 use rocket_contrib::json::Json;
 use serde_json::Value;
@@ -41,7 +43,7 @@ use crate::{
 };
 #[put("/devices/identifier/<uuid>/clear-token")]
-fn clear_device_token<'a>(uuid: String) -> Response<'a> {
+fn clear_device_token(uuid: String) -> &'static str {
     // This endpoint doesn't have auth header
     let _ = uuid;
@@ -50,7 +52,7 @@ fn clear_device_token<'a>(uuid: String) -> Response<'a> {
     // This only clears push token
     // https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109
     // https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37
-    Response::new()
+    ""
 }
 #[put("/devices/identifier/<uuid>/token", data = "<data>")]

View File

@@ -35,12 +35,15 @@
         get_org_users,
         send_invite,
         reinvite_user,
+        bulk_reinvite_user,
         confirm_invite,
+        bulk_confirm_invite,
         accept_invite,
         get_user,
         edit_user,
         put_organization_user,
         delete_user,
+        bulk_delete_user,
         post_delete_user,
         post_org_import,
         list_policies,
@@ -51,6 +54,8 @@
         get_plans,
         get_plans_tax_rates,
         import,
+        post_org_keys,
+        bulk_public_keys,
     ]
 }
@@ -61,6 +66,7 @@ struct OrgData {
     CollectionName: String,
     Key: String,
     Name: String,
+    Keys: Option<OrgKeyData>,
     #[serde(rename = "PlanType")]
     _PlanType: NumberOrString, // Ignored, always use the same plan
 }
@@ -78,15 +84,39 @@ struct NewCollectionData {
     Name: String,
 }
+#[derive(Deserialize)]
+#[allow(non_snake_case)]
+struct OrgKeyData {
+    EncryptedPrivateKey: String,
+    PublicKey: String,
+}
+#[derive(Deserialize, Debug)]
+#[allow(non_snake_case)]
+struct OrgBulkIds {
+    Ids: Vec<String>,
+}
 #[post("/organizations", data = "<data>")]
 fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, conn: DbConn) -> JsonResult {
     if !CONFIG.is_org_creation_allowed(&headers.user.email) {
         err!("User not allowed to create organizations")
     }
+    if OrgPolicy::is_applicable_to_user(&headers.user.uuid, OrgPolicyType::SingleOrg, &conn) {
+        err!(
+            "You may not create an organization. You belong to an organization which has a policy that prohibits you from being a member of any other organization."
+        )
+    }
     let data: OrgData = data.into_inner().data;
-    let org = Organization::new(data.Name, data.BillingEmail);
+    let (private_key, public_key) = if data.Keys.is_some() {
+        let keys: OrgKeyData = data.Keys.unwrap();
+        (Some(keys.EncryptedPrivateKey), Some(keys.PublicKey))
+    } else {
+        (None, None)
+    };
+    let org = Organization::new(data.Name, data.BillingEmail, private_key, public_key);
     let mut user_org = UserOrganization::new(headers.user.uuid, org.uuid.clone());
     let collection = Collection::new(org.uuid.clone(), data.CollectionName);
@@ -352,7 +382,7 @@ fn delete_organization_collection(
 }
 #[derive(Deserialize, Debug)]
-#[allow(non_snake_case)]
+#[allow(non_snake_case, dead_code)]
 struct DeleteCollectionData {
     Id: String,
     OrgId: String,
@@ -397,7 +427,7 @@ fn get_collection_users(org_id: String, coll_id: String, _headers: ManagerHeader
         .map(|col_user| {
             UserOrganization::find_by_user_and_org(&col_user.user_uuid, &org_id, &conn)
                 .unwrap()
-                .to_json_user_access_restrictions(&col_user)
+                .to_json_user_access_restrictions(col_user)
         })
         .collect();
@@ -468,6 +498,32 @@ fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) ->
     }))
 }
+#[post("/organizations/<org_id>/keys", data = "<data>")]
+fn post_org_keys(org_id: String, data: JsonUpcase<OrgKeyData>, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
+    let data: OrgKeyData = data.into_inner().data;
+    let mut org = match Organization::find_by_uuid(&org_id, &conn) {
+        Some(organization) => {
+            if organization.private_key.is_some() && organization.public_key.is_some() {
+                err!("Organization Keys already exist")
+            }
+            organization
+        }
+        None => err!("Can't find organization details"),
+    };
+    org.private_key = Some(data.EncryptedPrivateKey);
+    org.public_key = Some(data.PublicKey);
+    org.save(&conn)?;
+    Ok(Json(json!({
+        "Object": "organizationKeys",
+        "PublicKey": org.public_key,
+        "PrivateKey": org.private_key,
+    })))
+}
 #[derive(Deserialize)]
 #[allow(non_snake_case)]
 struct CollectionData {
@@ -499,6 +555,7 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
     }
     for email in data.Emails.iter() {
+        let email = email.to_lowercase();
         let mut user_org_status = if CONFIG.mail_enabled() {
             UserOrgStatus::Invited as i32
         } else {
@@ -573,8 +630,44 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
     Ok(())
 }
+#[post("/organizations/<org_id>/users/reinvite", data = "<data>")]
+fn bulk_reinvite_user(
+    org_id: String,
+    data: JsonUpcase<OrgBulkIds>,
+    headers: AdminHeaders,
+    conn: DbConn,
+) -> Json<Value> {
+    let data: OrgBulkIds = data.into_inner().data;
+    let mut bulk_response = Vec::new();
+    for org_user_id in data.Ids {
+        let err_msg = match _reinvite_user(&org_id, &org_user_id, &headers.user.email, &conn) {
+            Ok(_) => String::from(""),
+            Err(e) => format!("{:?}", e),
+        };
+        bulk_response.push(json!(
+            {
+                "Object": "OrganizationBulkConfirmResponseModel",
+                "Id": org_user_id,
+                "Error": err_msg
+            }
+        ))
+    }
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
 #[post("/organizations/<org_id>/users/<user_org>/reinvite")]
 fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
+    _reinvite_user(&org_id, &user_org, &headers.user.email, &conn)
+}
+fn _reinvite_user(org_id: &str, user_org: &str, invited_by_email: &str, conn: &DbConn) -> EmptyResult {
     if !CONFIG.invitations_allowed() {
         err!("Invitations are not allowed.")
     }
@@ -583,7 +676,7 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         err!("SMTP is not configured.")
     }
-    let user_org = match UserOrganization::find_by_uuid(&user_org, &conn) {
+    let user_org = match UserOrganization::find_by_uuid(user_org, conn) {
         Some(user_org) => user_org,
         None => err!("The user hasn't been invited to the organization."),
     };
@@ -592,12 +685,12 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         err!("The user is already accepted or confirmed to the organization")
     }
-    let user = match User::find_by_uuid(&user_org.user_uuid, &conn) {
+    let user = match User::find_by_uuid(&user_org.user_uuid, conn) {
         Some(user) => user,
         None => err!("User not found."),
     };
-    let org_name = match Organization::find_by_uuid(&org_id, &conn) {
+    let org_name = match Organization::find_by_uuid(org_id, conn) {
         Some(org) => org.name,
         None => err!("Error looking up organization."),
     };
@@ -606,14 +699,14 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         mail::send_invite(
             &user.email,
             &user.uuid,
-            Some(org_id),
+            Some(org_id.to_string()),
             Some(user_org.uuid),
             &org_name,
-            Some(headers.user.email),
+            Some(invited_by_email.to_string()),
         )?;
     } else {
         let invitation = Invitation::new(user.email);
-        invitation.save(&conn)?;
+        invitation.save(conn)?;
     }
     Ok(())
@@ -630,7 +723,7 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
     // The web-vault passes org_id and org_user_id in the URL, but we are just reading them from the JWT instead
     let data: AcceptData = data.into_inner().data;
     let token = &data.Token;
-    let claims = decode_invite(&token)?;
+    let claims = decode_invite(token)?;
     match User::find_by_mail(&claims.email, &conn) {
         Some(_) => {
@@ -646,6 +739,43 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
                 err!("User already accepted the invitation")
             }
+            let user_twofactor_disabled = TwoFactor::find_by_user(&user_org.user_uuid, &conn).is_empty();
+            let policy = OrgPolicyType::TwoFactorAuthentication as i32;
+            let org_twofactor_policy_enabled =
+                match OrgPolicy::find_by_org_and_type(&user_org.org_uuid, policy, &conn) {
+                    Some(p) => p.enabled,
+                    None => false,
+                };
+            if org_twofactor_policy_enabled && user_twofactor_disabled {
+                err!("You cannot join this organization until you enable two-step login on your user account.")
+            }
+            // Enforce Single Organization Policy of organization user is trying to join
+            let single_org_policy_enabled =
+                match OrgPolicy::find_by_org_and_type(&user_org.org_uuid, OrgPolicyType::SingleOrg as i32, &conn) {
+                    Some(p) => p.enabled,
+                    None => false,
+                };
+            if single_org_policy_enabled && user_org.atype < UserOrgType::Admin {
+                let is_member_of_another_org = UserOrganization::find_any_state_by_user(&user_org.user_uuid, &conn)
+                    .into_iter()
+                    .filter(|uo| uo.org_uuid != user_org.org_uuid)
+                    .count()
+                    > 1;
+                if is_member_of_another_org {
+                    err!("You may not join this organization until you leave or remove all other organizations.")
+                }
+            }
+            // Enforce Single Organization Policy of other organizations user is a member of
+            if OrgPolicy::is_applicable_to_user(&user_org.user_uuid, OrgPolicyType::SingleOrg, &conn) {
+                err!(
+                    "You cannot join this organization because you are a member of an organization which forbids it"
+                )
+            }
             user_org.status = UserOrgStatus::Accepted as i32;
             user_org.save(&conn)?;
         }
@@ -656,7 +786,7 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
     if CONFIG.mail_enabled() {
         let mut org_name = CONFIG.invitation_org_name();
         if let Some(org_id) = &claims.org_id {
-            org_name = match Organization::find_by_uuid(&org_id, &conn) {
+            org_name = match Organization::find_by_uuid(org_id, &conn) {
                 Some(org) => org.name,
                 None => err!("Organization not found."),
             };
@@ -673,6 +803,40 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
     Ok(())
 }
+#[post("/organizations/<org_id>/users/confirm", data = "<data>")]
+fn bulk_confirm_invite(org_id: String, data: JsonUpcase<Value>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
+    let data = data.into_inner().data;
+    let mut bulk_response = Vec::new();
+    match data["Keys"].as_array() {
+        Some(keys) => {
+            for invite in keys {
+                let org_user_id = invite["Id"].as_str().unwrap_or_default();
+                let user_key = invite["Key"].as_str().unwrap_or_default();
+                let err_msg = match _confirm_invite(&org_id, org_user_id, user_key, &headers, &conn) {
+                    Ok(_) => String::from(""),
+                    Err(e) => format!("{:?}", e),
+                };
+                bulk_response.push(json!(
+                    {
+                        "Object": "OrganizationBulkConfirmResponseModel",
+                        "Id": org_user_id,
+                        "Error": err_msg
+                    }
+                ));
+            }
+        }
+        None => error!("No keys to confirm"),
+    }
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
 #[post("/organizations/<org_id>/users/<org_user_id>/confirm", data = "<data>")]
 fn confirm_invite(
     org_id: String,
@@ -682,8 +846,16 @@ fn confirm_invite(
     conn: DbConn,
 ) -> EmptyResult {
     let data = data.into_inner().data;
-    let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
+    let user_key = data["Key"].as_str().unwrap_or_default();
+    _confirm_invite(&org_id, &org_user_id, user_key, &headers, &conn)
+}
+fn _confirm_invite(org_id: &str, org_user_id: &str, key: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
+    if key.is_empty() || org_user_id.is_empty() {
+        err!("Key or UserId is not set, unable to process request");
+    }
+    let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
         Some(user) => user,
         None => err!("The specified user isn't a member of the organization"),
     };
@@ -697,24 +869,21 @@ fn confirm_invite(
     }
     user_to_confirm.status = UserOrgStatus::Confirmed as i32;
-    user_to_confirm.akey = match data["Key"].as_str() {
-        Some(key) => key.to_string(),
-        None => err!("Invalid key provided"),
-    };
+    user_to_confirm.akey = key.to_string();
     if CONFIG.mail_enabled() {
-        let org_name = match Organization::find_by_uuid(&org_id, &conn) {
+        let org_name = match Organization::find_by_uuid(org_id, conn) {
             Some(org) => org.name,
             None => err!("Error looking up organization."),
        };
-        let address = match User::find_by_uuid(&user_to_confirm.user_uuid, &conn) {
+        let address = match User::find_by_uuid(&user_to_confirm.user_uuid, conn) {
             Some(user) => user.email,
             None => err!("Error looking up user."),
         };
         mail::send_invite_confirmed(&address, &org_name)?;
     }
-    user_to_confirm.save(&conn)
+    user_to_confirm.save(conn)
 }
 #[get("/organizations/<org_id>/users/<org_user_id>")]
@@ -815,9 +984,40 @@ fn edit_user(
     user_to_edit.save(&conn)
 }
+#[delete("/organizations/<org_id>/users", data = "<data>")]
+fn bulk_delete_user(org_id: String, data: JsonUpcase<OrgBulkIds>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
+    let data: OrgBulkIds = data.into_inner().data;
+    let mut bulk_response = Vec::new();
+    for org_user_id in data.Ids {
+        let err_msg = match _delete_user(&org_id, &org_user_id, &headers, &conn) {
+            Ok(_) => String::from(""),
+            Err(e) => format!("{:?}", e),
+        };
+        bulk_response.push(json!(
+            {
+                "Object": "OrganizationBulkConfirmResponseModel",
+                "Id": org_user_id,
+                "Error": err_msg
+            }
+        ))
+    }
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
 #[delete("/organizations/<org_id>/users/<org_user_id>")]
 fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
-    let user_to_delete = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
+    _delete_user(&org_id, &org_user_id, &headers, &conn)
+}
+fn _delete_user(org_id: &str, org_user_id: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
+    let user_to_delete = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
         Some(user) => user,
         None => err!("User to delete isn't member of the organization"),
     };
@@ -828,14 +1028,14 @@ fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn:
     if user_to_delete.atype == UserOrgType::Owner {
         // Removing owner, check that there are at least another owner
-        let num_owners = UserOrganization::find_by_org_and_type(&org_id, UserOrgType::Owner as i32, &conn).len();
+        let num_owners = UserOrganization::find_by_org_and_type(org_id, UserOrgType::Owner as i32, conn).len();
         if num_owners <= 1 {
             err!("Can't delete the last owner")
         }
     }
-    user_to_delete.delete(&conn)
+    user_to_delete.delete(conn)
 }
#[post("/organizations/<org_id>/users/<org_user_id>/delete")] #[post("/organizations/<org_id>/users/<org_user_id>/delete")]
@@ -843,6 +1043,38 @@ fn post_delete_user(org_id: String, org_user_id: String, headers: AdminHeaders,
delete_user(org_id, org_user_id, headers, conn) delete_user(org_id, org_user_id, headers, conn)
} }
#[post("/organizations/<org_id>/users/public-keys", data = "<data>")]
fn bulk_public_keys(org_id: String, data: JsonUpcase<OrgBulkIds>, _headers: AdminHeaders, conn: DbConn) -> Json<Value> {
let data: OrgBulkIds = data.into_inner().data;
let mut bulk_response = Vec::new();
// Check all received UserOrg UUID's and find the matching User to retreive the public-key.
// If the user does not exists, just ignore it, and do not return any information regarding that UserOrg UUID.
// The web-vault will then ignore that user for the folowing steps.
for user_org_id in data.Ids {
match UserOrganization::find_by_uuid_and_org(&user_org_id, &org_id, &conn) {
Some(user_org) => match User::find_by_uuid(&user_org.user_uuid, &conn) {
Some(user) => bulk_response.push(json!(
{
"Object": "organizationUserPublicKeyResponseModel",
"Id": user_org_id,
"UserId": user.uuid,
"Key": user.public_key
}
)),
None => debug!("User doesn't exist"),
},
None => debug!("UserOrg doesn't exist"),
}
}
Json(json!({
"Data": bulk_response,
"Object": "list",
"ContinuationToken": null
}))
}
use super::ciphers::update_cipher_from_data; use super::ciphers::update_cipher_from_data;
use super::ciphers::CipherData; use super::ciphers::CipherData;
@@ -980,7 +1212,7 @@ struct PolicyData {
     enabled: bool,
     #[serde(rename = "type")]
     _type: i32,
-    data: Value,
+    data: Option<Value>,
 }
#[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")] #[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")]
@@ -998,6 +1230,56 @@ fn put_policy(
None => err!("Invalid policy type"), None => err!("Invalid policy type"),
}; };
// If enabling the TwoFactorAuthentication policy, remove this org's members that do have 2FA
if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled {
let org_members = UserOrganization::find_by_org(&org_id, &conn);
for member in org_members.into_iter() {
let user_twofactor_disabled = TwoFactor::find_by_user(&member.user_uuid, &conn).is_empty();
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
if user_twofactor_disabled
&& member.atype < UserOrgType::Admin
&& member.status != UserOrgStatus::Invited as i32
{
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name)?;
}
member.delete(&conn)?;
}
}
}
// If enabling the SingleOrg policy, remove this org's members that are members of other orgs
if pol_type_enum == OrgPolicyType::SingleOrg && data.enabled {
let org_members = UserOrganization::find_by_org(&org_id, &conn);
for member in org_members.into_iter() {
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
if member.atype < UserOrgType::Admin && member.status != UserOrgStatus::Invited as i32 {
let is_member_of_another_org = UserOrganization::find_any_state_by_user(&member.user_uuid, &conn)
.into_iter()
// Other UserOrganization's where they have accepted being a member of
.filter(|uo| uo.uuid != member.uuid && uo.status != UserOrgStatus::Invited as i32)
.count()
> 1;
if is_member_of_another_org {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();
mail::send_single_org_removed_from_org(&user.email, &org.name)?;
}
member.delete(&conn)?;
}
}
}
}
let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) { let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
Some(p) => p, Some(p) => p,
None => OrgPolicy::new(org_id, pol_type_enum, "{}".to_string()), None => OrgPolicy::new(org_id, pol_type_enum, "{}".to_string()),
@@ -1080,7 +1362,7 @@ fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> Json<Value> {
 }
 #[derive(Deserialize, Debug)]
-#[allow(non_snake_case)]
+#[allow(non_snake_case, dead_code)]
 struct OrgImportGroupData {
     Name: String,       // "GroupName"
     ExternalId: String, // "cn=GroupName,ou=Groups,dc=example,dc=com"
@@ -1091,6 +1373,7 @@ struct OrgImportGroupData {
 #[allow(non_snake_case)]
 struct OrgImportUserData {
     Email: String, // "user@maildomain.net"
+    #[allow(dead_code)]
     ExternalId: String, // "uid=user,ou=People,dc=example,dc=com"
     Deleted: bool,
 }
@@ -1098,6 +1381,7 @@ struct OrgImportUserData {
 #[derive(Deserialize, Debug)]
 #[allow(non_snake_case)]
 struct OrgImportData {
+    #[allow(dead_code)]
     Groups: Vec<OrgImportGroupData>,
     OverwriteExisting: bool,
     Users: Vec<OrgImportUserData>,

View File

@@ -2,7 +2,7 @@ use std::{io::Read, path::Path};
 use chrono::{DateTime, Duration, Utc};
 use multipart::server::{save::SavedData, Multipart, SaveResult};
-use rocket::{http::ContentType, Data};
+use rocket::{http::ContentType, response::NamedFile, Data};
 use rocket_contrib::json::Json;
 use serde_json::Value;
@@ -10,13 +10,25 @@ use crate::{
     api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
     auth::{Headers, Host},
     db::{models::*, DbConn, DbPool},
+    util::SafeString,
     CONFIG,
 };
 const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";
 pub fn routes() -> Vec<rocket::Route> {
-    routes![post_send, post_send_file, post_access, post_access_file, put_send, delete_send, put_remove_password]
+    routes![
+        get_sends,
+        get_send,
+        post_send,
+        post_send_file,
+        post_access,
+        post_access_file,
+        put_send,
+        delete_send,
+        put_remove_password,
+        download_send
+    ]
 }
 pub fn purge_sends(pool: DbPool) {
@@ -38,6 +50,7 @@ pub struct SendData {
     pub ExpirationDate: Option<DateTime<Utc>>,
     pub DeletionDate: DateTime<Utc>,
     pub Disabled: bool,
+    pub HideEmail: Option<bool>,
     // Data field
     pub Name: String,
@@ -51,15 +64,36 @@
 /// modify existing ones, but is allowed to delete them.
 ///
 /// Ref: https://bitwarden.com/help/article/policies/#disable-send
+///
+/// There is also a Vaultwarden-specific `sends_allowed` config setting that
+/// controls this policy globally.
 fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
     let user_uuid = &headers.user.uuid;
     let policy_type = OrgPolicyType::DisableSend;
-    if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
+    if !CONFIG.sends_allowed() || OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
         err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
     }
     Ok(())
 }
+/// Enforces the `DisableHideEmail` option of the `Send Options` policy.
+/// A non-owner/admin user belonging to an org with this option enabled isn't
+/// allowed to hide their email address from the recipient of a Bitwarden Send,
+/// but is allowed to remove this option from an existing Send.
+///
+/// Ref: https://bitwarden.com/help/article/policies/#send-options
+fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &DbConn) -> EmptyResult {
+    let user_uuid = &headers.user.uuid;
+    let hide_email = data.HideEmail.unwrap_or(false);
+    if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn) {
+        err!(
+            "Due to an Enterprise Policy, you are not allowed to hide your email address \
+            from recipients when creating or editing a Send."
+        )
+    }
+    Ok(())
+}
 fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
     let data_val = if data.Type == SendType::Text as i32 {
         data.Text
@@ -88,6 +122,7 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
     send.max_access_count = data.MaxAccessCount;
     send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
     send.disabled = data.Disabled;
+    send.hide_email = data.HideEmail;
     send.atype = data.Type;
     send.set_password(data.Password.as_deref());
@@ -95,19 +130,46 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
     Ok(send)
 }
+#[get("/sends")]
+fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
+    let sends = Send::find_by_user(&headers.user.uuid, &conn);
+    let sends_json: Vec<Value> = sends.iter().map(|s| s.to_json()).collect();
+    Json(json!({
+        "Data": sends_json,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+#[get("/sends/<uuid>")]
+fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
+    let send = match Send::find_by_uuid(&uuid, &conn) {
+        Some(send) => send,
+        None => err!("Send not found"),
+    };
+    if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
+        err!("Send is not owned by user")
+    }
+    Ok(Json(send.to_json()))
+}
 #[post("/sends", data = "<data>")]
 fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
     enforce_disable_send_policy(&headers, &conn)?;
     let data: SendData = data.into_inner().data;
+    enforce_disable_hide_email_policy(&data, &headers, &conn)?;
     if data.Type == SendType::File as i32 {
         err!("File sends should use /api/sends/file")
     }
-    let mut send = create_send(data, headers.user.uuid.clone())?;
+    let mut send = create_send(data, headers.user.uuid)?;
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn));
     Ok(Json(send.to_json()))
 }
@@ -130,25 +192,26 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
     let mut buf = String::new();
     model_entry.data.read_to_string(&mut buf)?;
     let data = serde_json::from_str::<crate::util::UpCase<SendData>>(&buf)?;
+    enforce_disable_hide_email_policy(&data.data, &headers, &conn)?;
-    // Get the file length and add an extra 10% to avoid issues
-    const SIZE_110_MB: u64 = 115_343_360;
+    // Get the file length and add an extra 5% to avoid issues
+    const SIZE_525_MB: u64 = 550_502_400;
     let size_limit = match CONFIG.user_attachment_limit() {
         Some(0) => err!("File uploads are disabled"),
         Some(limit_kb) => {
             let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn);
             if left <= 0 {
-                err!("Attachment size limit reached! Delete some files to open space")
+                err!("Attachment storage limit reached! Delete some attachments to free up space")
             }
-            std::cmp::Ord::max(left as u64, SIZE_110_MB)
+            std::cmp::Ord::max(left as u64, SIZE_525_MB)
         }
-        None => SIZE_110_MB,
+        None => SIZE_525_MB,
     };
     // Create the Send
-    let mut send = create_send(data.data, headers.user.uuid.clone())?;
-    let file_id: String = data_encoding::HEXLOWER.encode(&crate::crypto::get_random(vec![0; 32]));
+    let mut send = create_send(data.data, headers.user.uuid)?;
+    let file_id = crate::crypto::generate_send_id();
     if send.atype != SendType::File as i32 {
         err!("Send content is not a file");
@@ -171,7 +234,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
         }
         SaveResult::Partial(_, reason) => {
             std::fs::remove_file(&file_path).ok();
-            err!(format!("Attachment size limit exceeded with this file: {:?}", reason));
+            err!(format!("Attachment storage limit exceeded with this file: {:?}", reason));
         }
         SaveResult::Error(e) => {
             std::fs::remove_file(&file_path).ok();
@@ -190,7 +253,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
     // Save the changes in the database
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
     Ok(Json(send.to_json()))
 }
@@ -243,7 +306,7 @@ fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn
     send.save(&conn)?;
-    Ok(Json(send.to_json_access()))
+    Ok(Json(send.to_json_access(&conn)))
 }
 #[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
@@ -291,18 +354,31 @@ fn post_access_file(
     send.save(&conn)?;
+    let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
+    let token = crate::auth::encode_jwt(&token_claims);
     Ok(Json(json!({
         "Object": "send-fileDownload",
         "Id": file_id,
-        "Url": format!("{}/sends/{}/{}", &host.host, send_id, file_id)
+        "Url": format!("{}/api/sends/{}/{}?t={}", &host.host, send_id, file_id, token)
     })))
 }
+#[get("/sends/<send_id>/<file_id>?<t>")]
+fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
+    if let Ok(claims) = crate::auth::decode_send(&t) {
+        if claims.sub == format!("{}/{}", send_id, file_id) {
+            return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok();
+        }
+    }
+    None
+}
 #[put("/sends/<id>", data = "<data>")]
 fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
     enforce_disable_send_policy(&headers, &conn)?;
     let data: SendData = data.into_inner().data;
+    enforce_disable_hide_email_policy(&data, &headers, &conn)?;
     let mut send = match Send::find_by_uuid(&id, &conn) {
         Some(s) => s,
@@ -340,6 +416,7 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
     send.notes = data.Notes;
     send.max_access_count = data.MaxAccessCount;
     send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
+    send.hide_email = data.HideEmail;
     send.disabled = data.Disabled;
     // Only change the value if it's present
@@ -348,7 +425,7 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
     }
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
     Ok(Json(send.to_json()))
 }
@@ -365,7 +442,7 @@ fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyR
     }
     send.delete(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendDelete, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn));
     Ok(())
 }
@@ -385,7 +462,7 @@ fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -
     send.set_password(None);
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
     Ok(Json(send.to_json()))
 }
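The most security-relevant change in this file is the download flow: post_access_file now embeds a JWT whose sub claim is "<send_id>/<file_id>", and the new download_send route only serves a file when that claim matches the requested path, so knowing the storage path alone is no longer enough. A rough sketch of that issue/verify pattern with the jsonwebtoken crate (vaultwarden's generate_send_claims/encode_jwt/decode_send helpers wrap comparable logic; the HMAC key and five-minute expiry here are illustrative assumptions, not values from this diff):
// Sketch: jsonwebtoken 7-style API; the key is a placeholder.
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct SendClaims {
    sub: String, // "<send_id>/<file_id>"
    exp: i64,    // short expiry; default Validation rejects expired tokens
}

fn issue(send_id: &str, file_id: &str, key: &[u8]) -> String {
    let claims = SendClaims {
        sub: format!("{}/{}", send_id, file_id),
        exp: chrono::Utc::now().timestamp() + 300, // assumed 5-minute lifetime
    };
    encode(&Header::default(), &claims, &EncodingKey::from_secret(key)).unwrap()
}

fn authorize(token: &str, send_id: &str, file_id: &str, key: &[u8]) -> bool {
    // Serve the file only when the signature is valid and sub matches the requested path.
    decode::<SendClaims>(token, &DecodingKey::from_secret(key), &Validation::default())
        .map(|t| t.claims.sub == format!("{}/{}", send_id, file_id))
        .unwrap_or(false)
}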

View File

@@ -62,7 +62,7 @@ fn activate_authenticator(
     let data: EnableAuthenticatorData = data.into_inner().data;
     let password_hash = data.MasterPasswordHash;
     let key = data.Key;
-    let token = data.Token.into_i32()? as u64;
+    let token = data.Token.into_string();
     let mut user = headers.user;
@@ -81,7 +81,7 @@ fn activate_authenticator(
     }
     // Validate the token provided with the key, and save new twofactor
-    validate_totp_code(&user.uuid, token, &key.to_uppercase(), &ip, &conn)?;
+    validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn)?;
     _generate_recover_code(&mut user, &conn);
@@ -109,57 +109,57 @@ pub fn validate_totp_code_str(
     ip: &ClientIp,
     conn: &DbConn,
 ) -> EmptyResult {
-    let totp_code: u64 = match totp_code.parse() {
-        Ok(code) => code,
-        _ => err!("TOTP code is not a number"),
-    };
-    validate_totp_code(user_uuid, totp_code, secret, ip, &conn)
+    if !totp_code.chars().all(char::is_numeric) {
+        err!("TOTP code is not a number");
+    }
+    validate_totp_code(user_uuid, totp_code, secret, ip, conn)
 }
-pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
-    use oath::{totp_raw_custom_time, HashType};
+pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
+    use totp_lite::{totp_custom, Sha1};
     let decoded_secret = match BASE32.decode(secret.as_bytes()) {
         Ok(s) => s,
         Err(_) => err!("Invalid TOTP secret"),
     };
-    let mut twofactor = match TwoFactor::find_by_user_and_type(&user_uuid, TwoFactorType::Authenticator as i32, &conn) {
+    let mut twofactor = match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn) {
         Some(tf) => tf,
         _ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
     };
-    // Get the current system time in UNIX Epoch (UTC)
-    let current_time = chrono::Utc::now();
-    let current_timestamp = current_time.timestamp();
     // The amount of steps back and forward in time
     // Also check if we need to disable time drifted TOTP codes.
     // If that is the case, we set the steps to 0 so only the current TOTP is valid.
     let steps = !CONFIG.authenticator_disable_time_drift() as i64;
+    // Get the current system time in UNIX Epoch (UTC)
+    let current_time = chrono::Utc::now();
+    let current_timestamp = current_time.timestamp();
     for step in -steps..=steps {
         let time_step = current_timestamp / 30i64 + step;
-        // We need to calculate the time offsite and cast it as an i128.
-        // Else we can't do math with it on a default u64 variable.
+        // We need to calculate the time offset and cast it as a u64,
+        // since we only have times into the future and the totp generator needs a u64 instead of the default i64.
         let time = (current_timestamp + step * 30i64) as u64;
-        let generated = totp_raw_custom_time(&decoded_secret, 6, 0, 30, time, &HashType::SHA1);
+        let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
         // Check that the given code equals the generated one and that the time_step is larger than the one last used.
         if generated == totp_code && time_step > twofactor.last_used as i64 {
             // If the step does not equal 0 the time is drifted either server or client side.
             if step != 0 {
-                info!("TOTP Time drift detected. The step offset is {}", step);
+                warn!("TOTP Time drift detected. The step offset is {}", step);
             }
             // Save the last used time step so only totp time steps higher than this one are allowed.
             // This will also save a newly created twofactor if the code is correct.
             twofactor.last_used = time_step as i32;
-            twofactor.save(&conn)?;
+            twofactor.save(conn)?;
             return Ok(());
         } else if generated == totp_code && time_step <= twofactor.last_used as i64 {
-            warn!("This or a TOTP code within {} steps back and forward has already been used!", steps);
+            warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
             err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
         }
     }
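The switch from oath to totp_lite also changes the code's type from u64 to String, which preserves leading zeros that the old parse silently dropped (e.g. "012345" parsed as 12345 and could never match). A standalone sketch of the same ±1-step drift window using the totp_lite call shown above; the secret and timestamp are the RFC 6238 SHA-1 test values, not anything from this diff:
use totp_lite::{totp_custom, Sha1};

/// Returns true if `code` matches the TOTP for the previous, current, or next 30-second step.
fn totp_matches(secret: &[u8], code: &str, now_unix: i64) -> bool {
    (-1..=1).any(|step: i64| {
        let time = (now_unix + step * 30) as u64;
        totp_custom::<Sha1>(30, 6, secret, time) == code
    })
}

fn main() {
    let secret: &[u8] = b"12345678901234567890"; // RFC 6238 test secret
    // At T=59s the RFC 6238 SHA-1 reference code, truncated to 6 digits, is "287082".
    assert!(totp_matches(secret, "287082", 59));
    println!("code accepted within the drift window");
}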

View File

@@ -226,7 +226,7 @@ fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
     let type_ = TwoFactorType::Duo as i32;
     // If the user doesn't have an entry, disabled
-    let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, &conn) {
+    let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn) {
         Some(t) => t,
         None => return DuoStatus::Disabled(DuoData::global().is_some()),
     };
@@ -247,8 +247,8 @@
 // let (ik, sk, ak, host) = get_duo_keys();
 fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, String, String, String)> {
-    let data = User::find_by_mail(email, &conn)
-        .and_then(|u| get_user_duo_data(&u.uuid, &conn).data())
+    let data = User::find_by_mail(email, conn)
+        .and_then(|u| get_user_duo_data(&u.uuid, conn).data())
         .or_else(DuoData::global)
         .map_res("Can't fetch Duo keys")?;
@@ -343,7 +343,7 @@ fn parse_duo_values(key: &str, val: &str, ikey: &str, prefix: &str, time: i64) -
         err!("Invalid ikey")
     }
-    let expire = match expire.parse() {
+    let expire: i64 = match expire.parse() {
         Ok(e) => e,
         Err(_) => err!("Invalid expire time"),
     };

View File

@@ -56,14 +56,14 @@ fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> Empty
 /// Generate the token, save the data for later verification and send email to user
 pub fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
     let type_ = TwoFactorType::Email as i32;
-    let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, type_, &conn).map_res("Two factor not found")?;
+    let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, type_, conn).map_res("Two factor not found")?;
     let generated_token = crypto::generate_token(CONFIG.email_token_size())?;
     let mut twofactor_data = EmailTokenData::from_json(&twofactor.data)?;
     twofactor_data.set_token(generated_token);
     twofactor.data = twofactor_data.to_json();
-    twofactor.save(&conn)?;
+    twofactor.save(conn)?;
     mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
@@ -80,14 +80,16 @@ fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) ->
         err!("Invalid password");
     }
-    let type_ = TwoFactorType::Email as i32;
-    let enabled = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
-        Some(x) => x.enabled,
-        _ => false,
+    let (enabled, mfa_email) = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn) {
+        Some(x) => {
+            let twofactor_data = EmailTokenData::from_json(&x.data)?;
+            (true, json!(twofactor_data.email))
+        }
+        _ => (false, json!(null)),
     };
     Ok(Json(json!({
-        "Email": user.email,
+        "Email": mfa_email,
         "Enabled": enabled,
         "Object": "twoFactorEmail"
     })))
@@ -181,8 +183,8 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
 /// Validate the email code when used as TwoFactor token mechanism
 pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
-    let mut email_data = EmailTokenData::from_json(&data)?;
-    let mut twofactor = TwoFactor::find_by_user_and_type(&user_uuid, TwoFactorType::Email as i32, &conn)
+    let mut email_data = EmailTokenData::from_json(data)?;
+    let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
         .map_res("Two factor not found")?;
     let issued_token = match &email_data.last_token {
         Some(t) => t,
@@ -195,14 +197,14 @@ pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &
             email_data.reset_token();
         }
         twofactor.data = email_data.to_json();
-        twofactor.save(&conn)?;
+        twofactor.save(conn)?;
         err!("Token is invalid")
     }
     email_data.reset_token();
     twofactor.data = email_data.to_json();
-    twofactor.save(&conn)?;
+    twofactor.save(conn)?;
     let date = NaiveDateTime::from_timestamp(email_data.token_sent, 0);
     let max_time = CONFIG.email_expiration_time() as i64;
@@ -255,7 +257,7 @@ impl EmailTokenData {
     }
     pub fn from_json(string: &str) -> Result<EmailTokenData, Error> {
-        let res: Result<EmailTokenData, crate::serde_json::Error> = serde_json::from_str(&string);
+        let res: Result<EmailTokenData, crate::serde_json::Error> = serde_json::from_str(string);
         match res {
             Ok(x) => Ok(x),
             Err(_) => err!("Could not decode EmailTokenData from string"),
@@ -292,7 +294,7 @@ mod tests {
     fn test_obscure_email_long() {
         let email = "bytes@example.ext";
-        let result = obscure_email(&email);
+        let result = obscure_email(email);
         // Only first two characters should be visible.
         assert_eq!(result, "by***@example.ext");
@@ -302,7 +304,7 @@ mod tests {
     fn test_obscure_email_short() {
         let email = "byt@example.ext";
-        let result = obscure_email(&email);
+        let result = obscure_email(email);
         // If it's smaller than 3 characters it should only show asterisks.
         assert_eq!(result, "***@example.ext");

View File

@@ -7,16 +7,15 @@ use crate::{
     api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
     auth::Headers,
     crypto,
-    db::{
-        models::{TwoFactor, User},
-        DbConn,
-    },
+    db::{models::*, DbConn},
+    mail, CONFIG,
 };
 pub mod authenticator;
 pub mod duo;
 pub mod email;
 pub mod u2f;
+pub mod webauthn;
 pub mod yubikey;
 pub fn routes() -> Vec<Route> {
@@ -26,6 +25,7 @@ pub fn routes() -> Vec<Route> {
     routes.append(&mut duo::routes());
     routes.append(&mut email::routes());
     routes.append(&mut u2f::routes());
+    routes.append(&mut webauthn::routes());
     routes.append(&mut yubikey::routes());
     routes
@@ -128,6 +128,23 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
         twofactor.delete(&conn)?;
     }
+    let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).is_empty();
+    if twofactor_disabled {
+        let policy_type = OrgPolicyType::TwoFactorAuthentication;
+        let org_list = UserOrganization::find_by_user_and_policy(&user.uuid, policy_type, &conn);
+        for user_org in org_list.into_iter() {
+            if user_org.atype < UserOrgType::Admin {
+                if CONFIG.mail_enabled() {
+                    let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).unwrap();
+                    mail::send_2fa_removed_from_org(&user.email, &org.name)?;
+                }
+                user_org.delete(&conn)?;
+            }
+        }
+    }
     Ok(Json(json!({
         "Enabled": false,
         "Type": type_,

View File

@@ -94,13 +94,14 @@ struct RegistrationDef {
 }
 #[derive(Serialize, Deserialize)]
-struct U2FRegistration {
-    id: i32,
-    name: String,
+pub struct U2FRegistration {
+    pub id: i32,
+    pub name: String,
     #[serde(with = "RegistrationDef")]
-    reg: Registration,
-    counter: u32,
+    pub reg: Registration,
+    pub counter: u32,
     compromised: bool,
+    pub migrated: Option<bool>,
 }
 impl U2FRegistration {
@@ -168,6 +169,7 @@ fn activate_u2f(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn)
         reg: registration,
         compromised: false,
         counter: 0,
+        migrated: None,
     };
     let mut regs = get_u2f_registrations(&user.uuid, &conn)?.1;
@@ -246,7 +248,7 @@ fn _create_u2f_challenge(user_uuid: &str, type_: TwoFactorType, conn: &DbConn) -
 }
 fn save_u2f_registrations(user_uuid: &str, regs: &[U2FRegistration], conn: &DbConn) -> EmptyResult {
-    TwoFactor::new(user_uuid.into(), TwoFactorType::U2f, serde_json::to_string(regs)?).save(&conn)
+    TwoFactor::new(user_uuid.into(), TwoFactorType::U2f, serde_json::to_string(regs)?).save(conn)
 }
 fn get_u2f_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<U2FRegistration>), Error> {
@@ -273,10 +275,11 @@ fn get_u2f_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<U2
             reg: old_regs.remove(0),
             compromised: false,
             counter: 0,
+            migrated: None,
         }];
         // Save new format
-        save_u2f_registrations(user_uuid, &new_regs, &conn)?;
+        save_u2f_registrations(user_uuid, &new_regs, conn)?;
         new_regs
     }
@@ -308,12 +311,12 @@ pub fn generate_u2f_login(user_uuid: &str, conn: &DbConn) -> ApiResult<U2fSignRe
pub fn validate_u2f_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
let challenge_type = TwoFactorType::U2fLoginChallenge as i32;
-let tf_challenge = TwoFactor::find_by_user_and_type(user_uuid, challenge_type, &conn);
+let tf_challenge = TwoFactor::find_by_user_and_type(user_uuid, challenge_type, conn);
let challenge = match tf_challenge {
Some(tf_challenge) => {
let challenge: Challenge = serde_json::from_str(&tf_challenge.data)?;
-tf_challenge.delete(&conn)?;
+tf_challenge.delete(conn)?;
challenge
}
None => err!("Can't recover login challenge"),
@@ -329,13 +332,13 @@ pub fn validate_u2f_login(user_uuid: &str, response: &str, conn: &DbConn) -> Emp
match response {
Ok(new_counter) => {
reg.counter = new_counter;
-save_u2f_registrations(user_uuid, &registrations, &conn)?;
+save_u2f_registrations(user_uuid, &registrations, conn)?;
return Ok(());
}
Err(u2f::u2ferror::U2fError::CounterTooLow) => {
reg.compromised = true;
-save_u2f_registrations(user_uuid, &registrations, &conn)?;
+save_u2f_registrations(user_uuid, &registrations, conn)?;
err!("This device might be compromised!");
}


@@ -0,0 +1,389 @@
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState, RegistrationState, Webauthn};
use crate::{
api::{
core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
},
auth::Headers,
db::{
models::{TwoFactor, TwoFactorType},
DbConn,
},
error::Error,
CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![get_webauthn, generate_webauthn_challenge, activate_webauthn, activate_webauthn_put, delete_webauthn,]
}
struct WebauthnConfig {
url: String,
origin: String,
rpid: String,
}
impl WebauthnConfig {
fn load() -> Webauthn<Self> {
let domain = CONFIG.domain();
let domain_origin = CONFIG.domain_origin();
Webauthn::new(Self {
rpid: reqwest::Url::parse(&domain)
.map(|u| u.domain().map(str::to_owned))
.ok()
.flatten()
.unwrap_or_default(),
url: domain,
origin: domain_origin,
})
}
}
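Aside (not part of the diff): a hedged sketch of what load() derives for the relying party ID, mirroring the parsing chain above; the URL is a made-up DOMAIN value.
// Sketch only: scheme, port and path are stripped, leaving the registrable domain.
let rpid = reqwest::Url::parse("https://vault.example.com:8443/path")
.map(|u| u.domain().map(str::to_owned))
.ok()
.flatten()
.unwrap_or_default();
assert_eq!(rpid, "vault.example.com");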
impl webauthn_rs::WebauthnConfig for WebauthnConfig {
fn get_relying_party_name(&self) -> &str {
&self.url
}
fn get_origin(&self) -> &str {
&self.origin
}
fn get_relying_party_id(&self) -> &str {
&self.rpid
}
/// We have WebAuthn configured to discourage user verification:
/// if we leave this enabled, it will cause verification issues when a key sends UV=1.
/// Upstream (the library they use) ignores this when set to discouraged, so we should too.
fn get_require_uv_consistency(&self) -> bool {
false
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct WebauthnRegistration {
pub id: i32,
pub name: String,
pub migrated: bool,
pub credential: Credential,
}
impl WebauthnRegistration {
fn to_json(&self) -> Value {
json!({
"Id": self.id,
"Name": self.name,
"migrated": self.migrated,
})
}
}
#[post("/two-factor/get-webauthn", data = "<data>")]
fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &conn)?;
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": enabled,
"Keys": registrations_json,
"Object": "twoFactorWebAuthn"
})))
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let registrations = get_webauthn_registrations(&headers.user.uuid, &conn)?
.1
.into_iter()
.map(|r| r.credential.cred_id) // We return the credentialIds to the clients to avoid double registering
.collect();
let (challenge, state) = WebauthnConfig::load().generate_challenge_register_options(
headers.user.uuid.as_bytes().to_vec(),
headers.user.email,
headers.user.name,
Some(registrations),
None,
None,
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&conn)?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
challenge_value["errorMessage"] = "".into();
Ok(Json(challenge_value))
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
struct EnableWebauthnData {
Id: NumberOrString, // 1..5
Name: String,
MasterPasswordHash: String,
DeviceResponse: RegisterPublicKeyCredentialCopy,
}
// This is copied from RegisterPublicKeyCredential to change the Response object's casing
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
struct RegisterPublicKeyCredentialCopy {
pub Id: String,
pub RawId: Base64UrlSafeData,
pub Response: AuthenticatorAttestationResponseRawCopy,
pub Type: String,
}
// This is copied from AuthenticatorAttestationResponseRaw to change clientDataJSON to clientDataJson
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct AuthenticatorAttestationResponseRawCopy {
pub AttestationObject: Base64UrlSafeData,
pub ClientDataJson: Base64UrlSafeData,
}
impl From<RegisterPublicKeyCredentialCopy> for RegisterPublicKeyCredential {
fn from(r: RegisterPublicKeyCredentialCopy) -> Self {
Self {
id: r.Id,
raw_id: r.RawId,
response: AuthenticatorAttestationResponseRaw {
attestation_object: r.Response.AttestationObject,
client_data_json: r.Response.ClientDataJson,
},
type_: r.Type,
}
}
}
// This is copied from PublicKeyCredential to change the Response object's casing
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct PublicKeyCredentialCopy {
pub Id: String,
pub RawId: Base64UrlSafeData,
pub Response: AuthenticatorAssertionResponseRawCopy,
pub Extensions: Option<AuthenticationExtensionsClientOutputsCopy>,
pub Type: String,
}
// This is copied from AuthenticatorAssertionResponseRaw to change clientDataJSON to clientDataJson
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct AuthenticatorAssertionResponseRawCopy {
pub AuthenticatorData: Base64UrlSafeData,
pub ClientDataJson: Base64UrlSafeData,
pub Signature: Base64UrlSafeData,
pub UserHandle: Option<Base64UrlSafeData>,
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct AuthenticationExtensionsClientOutputsCopy {
#[serde(default)]
pub Appid: bool,
}
impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
fn from(r: PublicKeyCredentialCopy) -> Self {
Self {
id: r.Id,
raw_id: r.RawId,
response: AuthenticatorAssertionResponseRaw {
authenticator_data: r.Response.AuthenticatorData,
client_data_json: r.Response.ClientDataJson,
signature: r.Response.Signature,
user_handle: r.Response.UserHandle,
},
extensions: r.Extensions.map(|e| AuthenticationExtensionsClientOutputs {
appid: e.Appid,
}),
type_: r.Type,
}
}
}
#[post("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
// Retrieve and delete the saved challenge state
let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
Some(tf) => {
let state: RegistrationState = serde_json::from_str(&tf.data)?;
tf.delete(&conn)?;
state
}
None => err!("Can't recover challenge"),
};
// Verify the credentials with the saved state
let (credential, _data) =
WebauthnConfig::load().register_credential(&data.DeviceResponse.into(), &state, |_| Ok(false))?;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &conn)?.1;
// TODO: Check for repeated IDs
registrations.push(WebauthnRegistration {
id: data.Id.into_i32()?,
name: data.Name,
migrated: false,
credential,
});
// Save the registrations and return them
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?).save(&conn)?;
_generate_recover_code(&mut user, &conn);
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
#[put("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct DeleteU2FData {
Id: NumberOrString,
MasterPasswordHash: String,
}
#[delete("/two-factor/webauthn", data = "<data>")]
fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let id = data.data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn) {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;
let item_pos = match data.iter().position(|r| r.id == id) {
Some(p) => p,
None => err!("Webauthn entry not found"),
};
let removed_item = data.remove(item_pos);
tf.data = serde_json::to_string(&data)?;
tf.save(&conn)?;
drop(tf);
// If entry is migrated from u2f, delete the u2f entry as well
if let Some(mut u2f) = TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &conn) {
use crate::api::core::two_factor::u2f::U2FRegistration;
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&u2f.data) {
Ok(d) => d,
Err(_) => err!("Error parsing U2F data"),
};
data.retain(|r| r.reg.key_handle != removed_item.credential.cred_id);
let new_data_str = serde_json::to_string(&data)?;
u2f.data = new_data_str;
u2f.save(&conn)?;
}
let keys_json: Vec<Value> = data.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
pub fn get_webauthn_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
let type_ = TwoFactorType::Webauthn as i32;
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
Some(tf) => Ok((tf.enabled, serde_json::from_str(&tf.data)?)),
None => Ok((false, Vec::new())), // If no data, return empty list
}
}
pub fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
// Load saved credentials
let creds: Vec<Credential> =
get_webauthn_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.credential).collect();
if creds.is_empty() {
err!("No Webauthn devices registered")
}
// Generate a challenge based on the credentials
let ext = RequestAuthenticationExtensions::builder().appid(format!("{}/app-id.json", &CONFIG.domain())).build();
let (response, state) = WebauthnConfig::load().generate_challenge_authenticate_options(creds, Some(ext))?;
// Save the challenge state for later validation
TwoFactor::new(user_uuid.into(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
.save(conn)?;
// Return challenge to the clients
Ok(Json(serde_json::to_value(response.public_key)?))
}
pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
Some(tf) => {
let state: AuthenticationState = serde_json::from_str(&tf.data)?;
tf.delete(conn)?;
state
}
None => err!("Can't recover login challenge"),
};
let rsp: crate::util::UpCase<PublicKeyCredentialCopy> = serde_json::from_str(response)?;
let rsp: PublicKeyCredential = rsp.data.into();
let mut registrations = get_webauthn_registrations(user_uuid, conn)?.1;
// If the credential we received is migrated from U2F, enable the U2F compatibility
//let use_u2f = registrations.iter().any(|r| r.migrated && r.credential.cred_id == rsp.raw_id.0);
let (cred_id, auth_data) = WebauthnConfig::load().authenticate_credential(&rsp, &state)?;
for reg in &mut registrations {
if &reg.credential.cred_id == cred_id {
reg.credential.counter = auth_data.counter;
TwoFactor::new(user_uuid.to_string(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(conn)?;
return Ok(());
}
}
err!("Credential not present")
}


@@ -3,14 +3,14 @@ use std::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::prelude::*,
net::{IpAddr, ToSocketAddrs},
-sync::RwLock,
+sync::{Arc, RwLock},
time::{Duration, SystemTime},
};
use once_cell::sync::Lazy;
use regex::Regex;
-use reqwest::{blocking::Client, blocking::Response, header, Url};
+use reqwest::{blocking::Client, blocking::Response, header};
-use rocket::{http::ContentType, http::Cookie, response::Content, Route};
+use rocket::{http::ContentType, response::Content, Route};
use crate::{
error::Error,
@@ -25,19 +25,17 @@ pub fn routes() -> Vec<Route> {
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = header::HeaderMap::new();
-default_headers.insert(header::USER_AGENT, header::HeaderValue::from_static("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15"));
-default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en-US,en;q=0.8"));
+default_headers
+.insert(header::USER_AGENT, header::HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
+default_headers
+.insert(header::ACCEPT, header::HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
+default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
-default_headers.insert(
-header::ACCEPT,
-header::HeaderValue::from_static(
-"text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8",
-),
-);
// Reuse the client between requests
get_reqwest_client_builder()
+.cookie_provider(Arc::new(Jar::default()))
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.build()
@@ -80,7 +78,7 @@ fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "_-.";
// If parsing the domain fails using Url, it will not work with reqwest.
-if let Err(parse_error) = Url::parse(format!("https://{}", domain).as_str()) {
+if let Err(parse_error) = url::Url::parse(format!("https://{}", domain).as_str()) {
debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
return false;
} else if domain.is_empty()
@@ -251,8 +249,8 @@ fn is_domain_blacklisted(domain: &str) -> bool {
};
// Use the pre-generated Regex stored in a Lazy HashMap.
-if regex.is_match(&domain) {
-warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
+if regex.is_match(domain) {
+warn!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
is_blacklisted = true;
}
}
@@ -282,7 +280,7 @@ fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
}
// Get the icon, or None in case of error
-match download_icon(&domain) {
+match download_icon(domain) {
Ok((icon, icon_type)) => {
save_icon(&path, &icon);
Some((icon, icon_type.unwrap_or("x-icon").to_string()))
@@ -354,13 +352,57 @@ struct Icon {
impl Icon {
const fn new(priority: u8, href: String) -> Self {
Self {
-href,
priority,
+href,
}
}
}
-fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Vec<Icon>, url: &Url) {
+/// Iterates over the HTML document to find <base href="http://domain.tld">.
+/// When one is found, iteration stops and the found base href is written back via `base_href`.
///
/// # Arguments
/// * `node` - A Parsed HTML document via html5ever::parse_document()
/// * `base_href` - a mutable url::Url which will be overwritten when a base href tag has been found.
///
fn get_base_href(node: &std::rc::Rc<markup5ever_rcdom::Node>, base_href: &mut url::Url) -> bool {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "base" {
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
if attr_name == "href" {
debug!("Found base href: {}", attr_value);
*base_href = match base_href.join(attr_value) {
Ok(href) => href,
_ => base_href.clone(),
};
return true;
}
}
return true;
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
// Check if we got `true` back, and stop iterating if so.
// This means we found a <base> tag and can stop processing the HTML.
if get_base_href(child, base_href) {
return true;
}
}
false
}
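Aside (not part of the diff): a hedged usage sketch for get_base_href(), reusing the html5ever parse pattern from get_icon_url() below; the HTML snippet and URLs are hypothetical.
use html5ever::tendril::TendrilSink;
// Parse a page that declares a <base> tag, then resolve it against the page URL.
let html: &[u8] = b"<html><head><base href=\"https://cdn.example.com/\"></head></html>";
let mut reader = html;
let dom = html5ever::parse_document(markup5ever_rcdom::RcDom::default(), Default::default())
.from_utf8()
.read_from(&mut reader)?;
let mut base_url = url::Url::parse("https://example.com/")?;
get_base_href(&dom.document, &mut base_url);
// base_url is now https://cdn.example.com/, so relative icon hrefs resolve against it.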
fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Vec<Icon>, url: &url::Url) {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
@@ -389,7 +431,7 @@ fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Ve
if has_rel {
if let Some(inner_href) = href {
-if let Ok(full_href) = url.join(&inner_href).map(|h| h.into_string()) {
+if let Ok(full_href) = url.join(inner_href).map(String::from) {
let priority = get_icon_priority(&full_href, sizes);
icons.push(Icon::new(priority, full_href));
}
@@ -406,12 +448,11 @@ fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Ve
struct IconUrlResult {
iconlist: Vec<Icon>,
-cookies: String,
referer: String,
}
-/// Returns a Result/Tuple which holds a Vector IconList and a string which holds the cookies from the last response.
-/// There will always be a result with a string which will contain https://example.com/favicon.ico and an empty string for the cookies.
+/// Returns an IconUrlResult which holds a Vector of Icons and a String which holds the referer.
+/// There will always be two items in the iconlist which hold http(s)://domain.tld/favicon.ico.
/// This does not mean that the location exists, but it is the default location browsers use.
///
/// # Argument
@@ -419,8 +460,8 @@ struct IconUrlResult {
///
/// # Example
/// ```
-/// let (mut iconlist, cookie_str) = get_icon_url("github.com")?;
-/// let (mut iconlist, cookie_str) = get_icon_url("gitlab.com")?;
+/// let icon_result = get_icon_url("github.com")?;
+/// let icon_result = get_icon_url("vaultwarden.discourse.group")?;
/// ```
fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Default URL with secure and insecure schemes
@@ -468,49 +509,30 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Create the iconlist
let mut iconlist: Vec<Icon> = Vec::new();
+let mut referer = String::from("");
-// Create the cookie_str to fill it all the cookies from the response
-// These cookies can be used to request/download the favicon image.
-// Some sites have extra security in place with for example XSRF Tokens.
-let mut cookie_str = "".to_string();
-let mut referer = "".to_string();
if let Ok(content) = resp {
// Extract the URL from the response in case redirects occurred (like @ gitlab.com)
let url = content.url().clone();
-// Get all the cookies and pass it on to the next function.
-// Needed for XSRF Cookies for example (like @ mijn.ing.nl)
-let raw_cookies = content.headers().get_all("set-cookie");
-cookie_str = raw_cookies
-.iter()
-.filter_map(|raw_cookie| raw_cookie.to_str().ok())
-.map(|cookie_str| {
-if let Ok(cookie) = Cookie::parse(cookie_str) {
-format!("{}={}; ", cookie.name(), cookie.value())
-} else {
-String::new()
-}
-})
-.collect::<String>();
// Set the referer to be used on the final request; some sites check this.
// Mostly used to prevent direct linking and for other security reasons.
referer = url.as_str().to_string();
// Add the default favicon.ico to the list with the domain the content responded from.
-iconlist.push(Icon::new(35, url.join("/favicon.ico").unwrap().into_string()));
+iconlist.push(Icon::new(35, String::from(url.join("/favicon.ico").unwrap())));
-// 512KB should be more than enough for the HTML, though as we only really need
-// the HTML header, it could potentially be reduced even further
-let mut limited_reader = content.take(512 * 1024);
+// 384KB should be more than enough for the HTML, though we only really need the HTML header.
+let mut limited_reader = content.take(384 * 1024);
use html5ever::tendril::TendrilSink;
let dom = html5ever::parse_document(markup5ever_rcdom::RcDom::default(), Default::default())
.from_utf8()
.read_from(&mut limited_reader)?;
-get_favicons_node(&dom.document, &mut iconlist, &url);
+let mut base_url: url::Url = url;
+get_base_href(&dom.document, &mut base_url);
+get_favicons_node(&dom.document, &mut iconlist, &base_url);
} else {
// Add the default favicon.ico to the list with just the given domain
iconlist.push(Icon::new(35, format!("{}/favicon.ico", ssldomain)));
@@ -523,29 +545,28 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// There is always an icon in the list, so no need to check if it exists; just return the first one
Ok(IconUrlResult {
iconlist,
-cookies: cookie_str,
referer,
})
}
fn get_page(url: &str) -> Result<Response, Error> {
-get_page_with_cookies(url, "", "")
+get_page_with_referer(url, "")
}
-fn get_page_with_cookies(url: &str, cookie_str: &str, referer: &str) -> Result<Response, Error> {
-if is_domain_blacklisted(Url::parse(url).unwrap().host_str().unwrap_or_default()) {
-err!("Favicon rel linked to a blacklisted domain!");
+fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
+if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()) {
+err!("Favicon resolves to a blacklisted domain or IP!", url);
}
let mut client = CLIENT.get(url);
-if !cookie_str.is_empty() {
-client = client.header("Cookie", cookie_str)
-}
if !referer.is_empty() {
client = client.header("Referer", referer)
}
-client.send()?.error_for_status().map_err(Into::into)
+match client.send() {
+Ok(c) => c.error_for_status().map_err(Into::into),
+Err(e) => err_silent!(format!("{}", e)),
+}
}
/// Returns an Integer with the priority of the icon type to prefer.
@@ -573,7 +594,7 @@ fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
1
} else if width == 64 {
2
-} else if (24..=128).contains(&width) {
+} else if (24..=192).contains(&width) {
3
} else if width == 16 {
4
@@ -629,10 +650,10 @@ fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if is_domain_blacklisted(domain) {
-err!("Domain is blacklisted", domain)
+err_silent!("Domain is blacklisted", domain)
}
-let icon_result = get_icon_url(&domain)?;
+let icon_result = get_icon_url(domain)?;
let mut buffer = Vec::new();
let mut icon_type: Option<&str> = None;
@@ -658,10 +679,10 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
break;
}
}
-_ => warn!("Extracted icon from data:image uri is invalid"),
+_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
-match get_page_with_cookies(&icon.href, &icon_result.cookies, &icon_result.referer) {
+match get_page_with_referer(&icon.href, &icon_result.referer) {
Ok(mut res) => {
res.copy_to(&mut buffer)?;
// Check if the icon type is allowed, else try an icon from the list.
@@ -674,13 +695,13 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
info!("Downloaded icon from {}", icon.href); info!("Downloaded icon from {}", icon.href);
break; break;
} }
_ => warn!("Download failed for {}", icon.href), Err(e) => debug!("{:?}", e),
}; };
} }
} }
if buffer.is_empty() { if buffer.is_empty() {
err!("Empty response downloading icon") err_silent!("Empty response or unable find a valid icon", domain);
} }
Ok((buffer, icon_type)) Ok((buffer, icon_type))
@@ -706,7 +727,54 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
[0, 0, 1, 0, ..] => Some("x-icon"),
[82, 73, 70, 70, ..] => Some("webp"),
[255, 216, 255, ..] => Some("jpeg"),
+[71, 73, 70, 56, ..] => Some("gif"),
[66, 77, ..] => Some("bmp"),
_ => None,
}
}
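Aside (not part of the diff): a hedged sketch of how those magic-byte arms behave; the byte strings are illustrative.
// "GIF8..." matches the newly added [71, 73, 70, 56, ..] arm.
assert_eq!(get_icon_type(b"GIF89a"), Some("gif"));
// A classic .ico file starts with 00 00 01 00.
assert_eq!(get_icon_type(&[0, 0, 1, 0, 1, 0]), Some("x-icon"));
// Unrecognized bytes fall through to None and the icon is skipped.
assert_eq!(get_icon_type(b"hello"), None);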
/// This is an implementation of the default Cookie Jar from Reqwest and reqwest_cookie_store built by pfernie.
/// The default cookie jar used by Reqwest keeps all cookies based upon their Max-Age or Expires, which could be a long time.
/// That could be used for tracking; to prevent this, we cap the lifespan of all cookies at two minutes.
/// A Cookie Jar is needed because some sites force a redirect with cookies to verify whether a request uses cookies or not.
use cookie_store::CookieStore;
#[derive(Default)]
pub struct Jar(RwLock<CookieStore>);
impl reqwest::cookie::CookieStore for Jar {
fn set_cookies(&self, cookie_headers: &mut dyn Iterator<Item = &header::HeaderValue>, url: &url::Url) {
use cookie::{Cookie as RawCookie, ParseError as RawCookieParseError};
use time::Duration;
let mut cookie_store = self.0.write().unwrap();
let cookies = cookie_headers.filter_map(|val| {
std::str::from_utf8(val.as_bytes())
.map_err(RawCookieParseError::from)
.and_then(RawCookie::parse)
.map(|mut c| {
c.set_expires(None);
c.set_max_age(Some(Duration::minutes(2)));
c.into_owned()
})
.ok()
});
cookie_store.store_response_cookies(cookies, url);
}
fn cookies(&self, url: &url::Url) -> Option<header::HeaderValue> {
use bytes::Bytes;
let cookie_store = self.0.read().unwrap();
let s = cookie_store
.get_request_values(url)
.map(|(name, value)| format!("{}={}", name, value))
.collect::<Vec<_>>()
.join("; ");
if s.is_empty() {
return None;
}
header::HeaderValue::from_maybe_shared(Bytes::from(s)).ok()
}
}
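Aside (not part of the diff): a hedged sketch of how this jar is wired into a blocking reqwest client, mirroring the .cookie_provider(...) call in the CLIENT builder above (assumes reqwest's cookies feature is enabled, as it is here).
use std::sync::Arc;
// Any cookies a site sets during icon discovery now live at most two minutes.
let client = reqwest::blocking::Client::builder()
.cookie_provider(Arc::new(Jar::default()))
.build()
.expect("client builds");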


@@ -56,7 +56,7 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
// COMMON
let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
-let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
@@ -134,7 +134,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
let (mut device, new_device) = get_device(&data, &conn, &user);
-let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, &ip, &conn)?;
+let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, &conn)?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name) {
@@ -147,7 +147,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
}
// Common
-let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
device.save(&conn)?;
@@ -185,7 +185,7 @@ fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool)
let mut new_device = false;
// Find device or create new
-let device = match Device::find_by_uuid(&device_id, &conn) {
+let device = match Device::find_by_uuid(&device_id, conn) {
Some(device) => {
// Check if owned device, and recreate if not
if device.user_uuid != user.uuid {
@@ -240,6 +240,7 @@ fn twofactor_auth(
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?
}
Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?,
+Some(TwoFactorType::Webauthn) => _tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::Duo) => {
_tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?
@@ -309,8 +310,13 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
});
}
+Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
+let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn)?;
+result["TwoFactorProviders2"][provider.to_string()] = request.0;
+}
Some(TwoFactorType::Duo) => {
-let email = match User::find_by_uuid(user_uuid, &conn) {
+let email = match User::find_by_uuid(user_uuid, conn) {
Some(u) => u.email,
None => err!("User does not exist"),
};
@@ -324,7 +330,7 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
}
Some(tf_type @ TwoFactorType::YubiKey) => {
-let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, &conn) {
+let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
Some(tf) => tf,
None => err!("No YubiKey devices registered"),
};
@@ -339,14 +345,14 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Some(tf_type @ TwoFactorType::Email) => {
use crate::api::core::two_factor as _tf;
-let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, &conn) {
+let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
Some(tf) => tf,
None => err!("No twofactor email registered"),
};
// Send email immediately if email is the only 2FA option
if providers.len() == 1 {
-_tf::email::send_token(&user_uuid, &conn)?
+_tf::email::send_token(user_uuid, conn)?
}
let email_data = EmailTokenData::from_json(&twofactor.data)?;


@@ -13,6 +13,7 @@ pub use crate::api::{
core::purge_sends,
core::purge_trashed_ciphers,
core::routes as core_routes,
+core::{emergency_notification_reminder_job, emergency_request_timeout_job},
icons::routes as icons_routes,
identity::routes as identity_routes,
notifications::routes as notifications_routes,
@@ -51,10 +52,11 @@ impl NumberOrString {
}
}
-fn into_i32(self) -> ApiResult<i32> {
+#[allow(clippy::wrong_self_convention)]
+fn into_i32(&self) -> ApiResult<i32> {
use std::num::ParseIntError as PIE;
match self {
-NumberOrString::Number(n) => Ok(n),
+NumberOrString::Number(n) => Ok(*n),
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
}


@@ -65,7 +65,7 @@ use chashmap::CHashMap;
use chrono::NaiveDateTime;
use serde_json::from_str;
-use crate::db::models::{Cipher, Folder, User};
+use crate::db::models::{Cipher, Folder, Send, User};
use rmpv::Value;
@@ -332,7 +332,24 @@ impl WebSocketUsers {
);
for uuid in user_uuids {
-self.send_update(&uuid, &data).ok();
+self.send_update(uuid, &data).ok();
}
}
pub fn send_send_update(&self, ut: UpdateType, send: &Send, user_uuids: &[String]) {
let user_uuid = convert_option(send.user_uuid.clone());
let data = create_update(
vec![
("Id".into(), send.uuid.clone().into()),
("UserId".into(), user_uuid),
("RevisionDate".into(), serialize_date(send.revision_date)),
],
ut,
);
for uuid in user_uuids {
self.send_update(uuid, &data).ok();
} }
}
}
@@ -408,12 +425,10 @@ pub fn start_notification_server() -> WebSocketUsers {
if CONFIG.websocket_enabled() {
thread::spawn(move || {
-let settings = ws::Settings {
-max_connections: 500,
-queue_size: 2,
-panic_on_internal: false,
-..Default::default()
-};
+let mut settings = ws::Settings::default();
+settings.max_connections = 500;
+settings.queue_size = 2;
+settings.panic_on_internal = false;
ws::Builder::new()
.with_settings(settings)


@@ -4,13 +4,17 @@ use rocket::{http::ContentType, response::content::Content, response::NamedFile,
use rocket_contrib::json::Json;
use serde_json::Value;
-use crate::{error::Error, util::Cached, CONFIG};
+use crate::{
+error::Error,
+util::{Cached, SafeString},
+CONFIG,
+};
pub fn routes() -> Vec<Route> {
// If adding more routes here, consider also adding them to
// crate::utils::LOGGED_ROUTES to make sure they appear in the log
if CONFIG.web_vault_enabled() {
-routes![web_index, app_id, web_files, attachments, sends, alive, static_files]
+routes![web_index, app_id, web_files, attachments, alive, static_files]
} else {
routes![attachments, alive, static_files]
}
@@ -55,18 +59,15 @@ fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).ok())
}
-#[get("/attachments/<uuid>/<file..>")]
-fn attachments(uuid: String, file: PathBuf) -> Option<NamedFile> {
-NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file)).ok()
-}
+#[get("/attachments/<uuid>/<file_id>")]
+fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
+NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).ok()
+}
-#[get("/sends/<send_id>/<file_id>")]
-fn sends(send_id: String, file_id: String) -> Option<NamedFile> {
-NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok()
-}
+// We use DbConn here to let the alive healthcheck also verify the database connection.
+use crate::db::DbConn;
#[get("/alive")]
-fn alive() -> Json<String> {
+fn alive(_conn: DbConn) -> Json<String> {
use crate::util::format_date;
use chrono::Utc;
@@ -78,9 +79,11 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
match filename.as_ref() {
"mail-github.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
-"shield-white.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/shield-white.png"))),
"error-x.svg" => Ok(Content(ContentType::SVG, include_bytes!("../static/images/error-x.svg"))),
"hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
+"vaultwarden-icon.png" => {
+Ok(Content(ContentType::PNG, include_bytes!("../static/images/vaultwarden-icon.png")))
+}
"bootstrap.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => {
@@ -89,8 +92,8 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
"identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))), "identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))), "datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))), "datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.5.1.slim.js" => { "jquery-3.6.0.slim.js" => {
Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.5.1.slim.js"))) Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.0.slim.js")))
} }
_ => err!(format!("Static file not found: {}", filename)), _ => err!(format!("Static file not found: {}", filename)),
} }


@@ -19,22 +19,36 @@ const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
pub static DEFAULT_VALIDITY: Lazy<Duration> = Lazy::new(|| Duration::hours(2));
static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
static JWT_INVITE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|invite", CONFIG.domain_origin()));
+static JWT_EMERGENCY_ACCESS_INVITE_ISSUER: Lazy<String> =
+Lazy::new(|| format!("{}|emergencyaccessinvite", CONFIG.domain_origin()));
static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFIG.domain_origin()));
static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
+static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.domain_origin()));
-static PRIVATE_RSA_KEY: Lazy<Vec<u8>> = Lazy::new(|| match read_file(&CONFIG.private_rsa_key()) {
-Ok(key) => key,
-Err(e) => panic!("Error loading private RSA Key.\n Error: {}", e),
-});
-static PUBLIC_RSA_KEY: Lazy<Vec<u8>> = Lazy::new(|| match read_file(&CONFIG.public_rsa_key()) {
-Ok(key) => key,
-Err(e) => panic!("Error loading public RSA Key.\n Error: {}", e),
-});
+static PRIVATE_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
+read_file(&CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key.\n{}", e))
+});
+static PRIVATE_RSA_KEY: Lazy<EncodingKey> = Lazy::new(|| {
+EncodingKey::from_rsa_pem(&PRIVATE_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{}", e))
+});
+static PUBLIC_RSA_KEY_VEC: Lazy<Vec<u8>> = Lazy::new(|| {
+read_file(&CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key.\n{}", e))
+});
+static PUBLIC_RSA_KEY: Lazy<DecodingKey> = Lazy::new(|| {
+DecodingKey::from_rsa_pem(&PUBLIC_RSA_KEY_VEC).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{}", e))
+});
+pub fn load_keys() {
+Lazy::force(&PRIVATE_RSA_KEY);
+Lazy::force(&PUBLIC_RSA_KEY);
+}
pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
-match jsonwebtoken::encode(&JWT_HEADER, claims, &EncodingKey::from_rsa_der(&PRIVATE_RSA_KEY)) {
+match jsonwebtoken::encode(&JWT_HEADER, claims, &PRIVATE_RSA_KEY) {
Ok(token) => token,
Err(e) => panic!("Error encoding jwt {}", e),
}
@@ -52,10 +66,7 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
};
let token = token.replace(char::is_whitespace, "");
-jsonwebtoken::decode(&token, &DecodingKey::from_rsa_der(&PUBLIC_RSA_KEY), &validation)
-.map(|d| d.claims)
-.map_res("Error decoding JWT")
+jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation).map(|d| d.claims).map_res("Error decoding JWT")
}
pub fn decode_login(token: &str) -> Result<LoginJwtClaims, Error> {
@@ -66,18 +77,26 @@ pub fn decode_invite(token: &str) -> Result<InviteJwtClaims, Error> {
decode_jwt(token, JWT_INVITE_ISSUER.to_string())
}
+pub fn decode_emergency_access_invite(token: &str) -> Result<EmergencyAccessInviteJwtClaims, Error> {
+decode_jwt(token, JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string())
+}
-pub fn decode_delete(token: &str) -> Result<DeleteJwtClaims, Error> {
+pub fn decode_delete(token: &str) -> Result<BasicJwtClaims, Error> {
decode_jwt(token, JWT_DELETE_ISSUER.to_string())
}
-pub fn decode_verify_email(token: &str) -> Result<VerifyEmailJwtClaims, Error> {
+pub fn decode_verify_email(token: &str) -> Result<BasicJwtClaims, Error> {
decode_jwt(token, JWT_VERIFYEMAIL_ISSUER.to_string())
}
-pub fn decode_admin(token: &str) -> Result<AdminJwtClaims, Error> {
+pub fn decode_admin(token: &str) -> Result<BasicJwtClaims, Error> {
decode_jwt(token, JWT_ADMIN_ISSUER.to_string())
}
+pub fn decode_send(token: &str) -> Result<BasicJwtClaims, Error> {
+decode_jwt(token, JWT_SEND_ISSUER.to_string())
+}
#[derive(Debug, Serialize, Deserialize)]
pub struct LoginJwtClaims {
// Not before
@@ -146,8 +165,46 @@ pub fn generate_invite_claims(
}
}
+// var token = _dataProtector.Protect($"EmergencyAccessInvite {emergencyAccess.Id} {emergencyAccess.Email} {nowMillis}");
#[derive(Debug, Serialize, Deserialize)]
-pub struct DeleteJwtClaims {
+pub struct EmergencyAccessInviteJwtClaims {
// Not before
pub nbf: i64,
// Expiration time
pub exp: i64,
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub email: String,
pub emer_id: Option<String>,
pub grantor_name: Option<String>,
pub grantor_email: Option<String>,
}
pub fn generate_emergency_access_invite_claims(
uuid: String,
email: String,
emer_id: Option<String>,
grantor_name: Option<String>,
grantor_email: Option<String>,
) -> EmergencyAccessInviteJwtClaims {
let time_now = Utc::now().naive_utc();
EmergencyAccessInviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
sub: uuid,
email,
emer_id,
grantor_name,
grantor_email,
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicJwtClaims {
// Not before
pub nbf: i64,
// Expiration time
@@ -158,9 +215,9 @@ pub struct DeleteJwtClaims {
pub sub: String,
}
-pub fn generate_delete_claims(uuid: String) -> DeleteJwtClaims {
+pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
-DeleteJwtClaims {
+BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_DELETE_ISSUER.to_string(),
@@ -168,21 +225,9 @@ pub fn generate_delete_claims(uuid: String) -> DeleteJwtClaims {
}
}
-#[derive(Debug, Serialize, Deserialize)]
-pub struct VerifyEmailJwtClaims {
-// Not before
-pub nbf: i64,
-// Expiration time
-pub exp: i64,
-// Issuer
-pub iss: String,
-// Subject
-pub sub: String,
-}
-pub fn generate_verify_email_claims(uuid: String) -> DeleteJwtClaims {
+pub fn generate_verify_email_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
-DeleteJwtClaims {
+BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
@@ -190,21 +235,9 @@ pub fn generate_verify_email_claims(uuid: String) -> DeleteJwtClaims {
}
}
-#[derive(Debug, Serialize, Deserialize)]
-pub struct AdminJwtClaims {
-// Not before
-pub nbf: i64,
-// Expiration time
-pub exp: i64,
-// Issuer
-pub iss: String,
-// Subject
-pub sub: String,
-}
-pub fn generate_admin_claims() -> AdminJwtClaims {
+pub fn generate_admin_claims() -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
-AdminJwtClaims {
+BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(20)).timestamp(),
iss: JWT_ADMIN_ISSUER.to_string(),
@@ -212,6 +245,16 @@ pub fn generate_admin_claims() -> AdminJwtClaims {
}
}
pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(2)).timestamp(),
iss: JWT_SEND_ISSUER.to_string(),
sub: format!("{}/{}", send_id, file_id),
}
}
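Aside (not part of the diff): a hedged sketch of issuing and checking one of these short-lived Send tokens with the helpers in this file; the IDs are hypothetical.
// The token is bound to the exact send/file pair and expires after two minutes.
let token = encode_jwt(&generate_send_claims("send-uuid", "file-uuid"));
let claims = decode_send(&token).expect("token still inside its 2-minute window");
assert_eq!(claims.sub, "send-uuid/file-uuid");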
//
// Bearer token authentication
//
@@ -326,8 +369,19 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
_ => err_handler!("Error getting current route for stamp exception"),
};
-// Check if both match; if not, this route is not allowed with the current security stamp.
-if stamp_exception.route != current_route {
+// Check if the stamp exception has expired first.
+// Then check if the current route matches any of the allowed routes.
+// After that, check that the stamp in the exception matches the one in the claims.
+if Utc::now().naive_utc().timestamp() > stamp_exception.expire {
+// If the stamp exception has expired, remove it from the database.
+// This prevents checking this stamp exception for new requests.
+let mut user = user;
+user.reset_stamp_exception();
+if let Err(e) = user.save(&conn) {
+error!("Error updating user: {:#?}", e);
+}
+err_handler!("Stamp exception is expired")
+} else if !stamp_exception.routes.contains(&current_route.to_string()) {
err_handler!("Invalid security stamp: Current route and exception route do not match")
} else if stamp_exception.security_stamp != claims.sstamp {
err_handler!("Invalid security stamp for matched stamp exception")


@@ -57,6 +57,8 @@ macro_rules! make_config {
_env: ConfigBuilder,
_usr: ConfigBuilder,
+_overrides: Vec<String>,
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
@@ -79,20 +81,16 @@ macro_rules! make_config {
dotenv::Error::Io(ioerr) => match ioerr.kind() {
std::io::ErrorKind::NotFound => {
println!("[INFO] No .env file found.\n");
-()
},
std::io::ErrorKind::PermissionDenied => {
println!("[WARNING] Permission Denied while trying to read the .env file!\n");
-()
},
_ => {
println!("[WARNING] Reading the .env file failed:\n{:?}\n", ioerr);
-()
}
},
_ => {
println!("[WARNING] Reading the .env file failed:\n{:?}\n", e);
-()
}
}
};
@@ -113,8 +111,7 @@ macro_rules! make_config {
/// Merges the values of both builders into a new builder.
/// If both have the same element, `other` wins.
-fn merge(&self, other: &Self, show_overrides: bool) -> Self {
-let mut overrides = Vec::new();
+fn merge(&self, other: &Self, show_overrides: bool, overrides: &mut Vec<String>) -> Self {
let mut builder = self.clone();
$($(
if let v @Some(_) = &other.$name {
@@ -176,9 +173,9 @@ macro_rules! make_config {
)+)+
pub fn prepare_json(&self) -> serde_json::Value {
-let (def, cfg) = {
+let (def, cfg, overriden) = {
let inner = &self.inner.read().unwrap();
-(inner._env.build(), inner.config.clone())
+(inner._env.build(), inner.config.clone(), inner._overrides.clone())
};
fn _get_form_type(rust_type: &str) -> &'static str {
@@ -210,6 +207,7 @@ macro_rules! make_config {
"default": def.$name, "default": def.$name,
"type": _get_form_type(stringify!($ty)), "type": _get_form_type(stringify!($ty)),
"doc": _get_doc(concat!($($doc),+)), "doc": _get_doc(concat!($($doc),+)),
"overridden": overriden.contains(&stringify!($name).to_uppercase()),
}, )+ }, )+
]}, )+ ]) ]}, )+ ])
} }
@@ -224,6 +222,15 @@ macro_rules! make_config {
stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
)+)+ })
}
pub fn get_overrides(&self) -> Vec<String> {
let overrides = {
let inner = &self.inner.read().unwrap();
inner._overrides.clone()
};
overrides
}
}
};
@@ -326,6 +333,12 @@ make_config! {
/// Trash purge schedule |> Cron schedule of the job that checks for trashed items to delete permanently.
/// Defaults to daily. Set blank to disable this job.
trash_purge_schedule: String, false, def, "0 5 0 * * *".to_string();
/// Emergency notification reminder schedule |> Cron schedule of the job that sends expiration reminders to emergency access grantors.
/// Defaults to hourly. Set blank to disable this job.
emergency_notification_reminder_schedule: String, false, def, "0 5 * * * *".to_string();
/// Emergency request timeout schedule |> Cron schedule of the job that grants emergency access requests that have met the required wait time.
/// Defaults to hourly. Set blank to disable this job.
emergency_request_timeout_schedule: String, false, def, "0 5 * * * *".to_string();
},
/// General settings
@@ -342,12 +355,16 @@ make_config! {
/// Enable web vault
web_vault_enabled: bool, false, def, true;
/// Allow Sends |> Controls whether users are allowed to create Bitwarden Sends.
/// This setting applies globally to all users. To control this on a per-org basis instead, use the "Disable Send" org policy.
sends_allowed: bool, true, def, true;
/// HIBP Api Key |> HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key /// HIBP Api Key |> HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
hibp_api_key: Pass, true, option; hibp_api_key: Pass, true, option;
/// Per-user attachment limit (KB) |> Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more /// Per-user attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per user. When this limit is reached, the user will not be allowed to upload further attachments.
user_attachment_limit: i64, true, option; user_attachment_limit: i64, true, option;
/// Per-organization attachment limit (KB) |> Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more /// Per-organization attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per org. When this limit is reached, org members will not be allowed to upload further attachments for ciphers owned by that org.
org_attachment_limit: i64, true, option; org_attachment_limit: i64, true, option;
/// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item. /// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item.
@@ -374,12 +391,15 @@ make_config! {
     org_creation_users: String, true, def, "".to_string();
     /// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
     invitations_allowed: bool, true, def, true;
+    /// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
+    emergency_access_allowed: bool, true, def, true;
     /// Password iterations |> Number of server-side passwords hashing iterations.
     /// The changes only apply when a user changes their password. Not recommended to lower the value
     password_iterations: i32, true, def, 100_000;
-    /// Show password hints |> Controls if the password hint should be shown directly in the web page.
-    /// Otherwise, if email is disabled, there is no way to see the password hint
-    show_password_hint: bool, true, def, true;
+    /// Show password hint |> Controls whether a password hint should be shown directly in the web page
+    /// if SMTP service is not configured. Not recommended for publicly-accessible instances as this
+    /// provides unauthenticated access to potentially sensitive data.
+    show_password_hint: bool, true, def, false;
     /// Admin page token |> The token used to authenticate in this very same page. Changing it here won't deauthorize the current session
     admin_token: Pass, true, option;
@@ -501,7 +521,7 @@ make_config! {
     /// Server name sent during HELO |> By default this value should be is on the machine's hostname, but might need to be changed in case it trips some anti-spam filters
     helo_name: String, true, option;
     /// Enable SMTP debugging (Know the risks!) |> DANGEROUS: Enabling this will output very detailed SMTP messages. This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
-    smtp_debug: bool, true, def, false;
+    smtp_debug: bool, false, def, false;
     /// Accept Invalid Certs (Know the risks!) |> DANGEROUS: Allow invalid certificates. This option introduces significant vulnerabilities to man-in-the-middle attacks!
     smtp_accept_invalid_certs: bool, true, def, false;
     /// Accept Invalid Hostnames (Know the risks!) |> DANGEROUS: Allow invalid hostnames. This option introduces significant vulnerabilities to man-in-the-middle attacks!
@@ -595,7 +615,7 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
     // Check if the icon blacklist regex is valid
     if let Some(ref r) = cfg.icon_blacklist_regex {
-        let validate_regex = Regex::new(&r);
+        let validate_regex = Regex::new(r);
         match validate_regex {
             Ok(_) => (),
             Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
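Since `ICON_BLACKLIST_REGEX` is user-supplied, the pattern is compiled once during validation so a bad expression fails config loading instead of panicking later. A standalone sketch of that check using the same `regex` crate, with the project's `err!` macro replaced by a plain `Result` for illustration:

use regex::Regex;

/// Compile the user-supplied pattern, surfacing a readable error.
fn validate_icon_blacklist(pattern: &str) -> Result<(), String> {
    Regex::new(pattern)
        .map(|_| ())
        .map_err(|e| format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e))
}

fn main() {
    assert!(validate_icon_blacklist(r"^192\.168\.").is_ok());
    assert!(validate_icon_blacklist("(").is_err()); // unbalanced group
}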
@@ -635,7 +655,8 @@ impl Config {
     let _usr = ConfigBuilder::from_file(&CONFIG_FILE).unwrap_or_default();
     // Create merged config, config file overwrites env
-    let builder = _env.merge(&_usr, true);
+    let mut _overrides = Vec::new();
+    let builder = _env.merge(&_usr, true, &mut _overrides);
     // Fill any missing with defaults
     let config = builder.build();
@@ -647,6 +668,7 @@
     config,
     _env,
     _usr,
+    _overrides,
     }),
 })
 }
@@ -662,9 +684,10 @@ impl Config {
     let config_str = serde_json::to_string_pretty(&builder)?;
     // Prepare the combined config
+    let mut overrides = Vec::new();
     let config = {
         let env = &self.inner.read().unwrap()._env;
-        env.merge(&builder, false).build()
+        env.merge(&builder, false, &mut overrides).build()
     };
     validate_config(&config)?;
@@ -673,6 +696,7 @@
     let mut writer = self.inner.write().unwrap();
     writer.config = config;
     writer._usr = builder;
+    writer._overrides = overrides;
 }
 //Save to file
@@ -686,7 +710,8 @@ impl Config {
 pub fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
     let builder = {
         let usr = &self.inner.read().unwrap()._usr;
-        usr.merge(&other, false)
+        let mut _overrides = Vec::new();
+        usr.merge(&other, false, &mut _overrides)
     };
     self.update_config(builder)
 }
@@ -747,19 +772,17 @@ impl Config {
     let mut writer = self.inner.write().unwrap();
     writer.config = config;
     writer._usr = usr;
+    writer._overrides = Vec::new();
 }
 Ok(())
 }
 pub fn private_rsa_key(&self) -> String {
-    format!("{}.der", CONFIG.rsa_key_filename())
-}
-pub fn private_rsa_key_pem(&self) -> String {
     format!("{}.pem", CONFIG.rsa_key_filename())
 }
 pub fn public_rsa_key(&self) -> String {
-    format!("{}.pub.der", CONFIG.rsa_key_filename())
+    format!("{}.pub.pem", CONFIG.rsa_key_filename())
 }
 pub fn mail_enabled(&self) -> bool {
     let inner = &self.inner.read().unwrap().config;
@@ -832,14 +855,28 @@ where
 }
 // First register default templates here
+reg!("email/email_header");
+reg!("email/email_footer");
+reg!("email/email_footer_text");
 reg!("email/change_email", ".html");
 reg!("email/delete_account", ".html");
 reg!("email/invite_accepted", ".html");
 reg!("email/invite_confirmed", ".html");
+reg!("email/emergency_access_invite_accepted", ".html");
+reg!("email/emergency_access_invite_confirmed", ".html");
+reg!("email/emergency_access_recovery_approved", ".html");
+reg!("email/emergency_access_recovery_initiated", ".html");
+reg!("email/emergency_access_recovery_rejected", ".html");
+reg!("email/emergency_access_recovery_reminder", ".html");
+reg!("email/emergency_access_recovery_timed_out", ".html");
 reg!("email/new_device_logged_in", ".html");
 reg!("email/pw_hint_none", ".html");
 reg!("email/pw_hint_some", ".html");
+reg!("email/send_2fa_removed_from_org", ".html");
+reg!("email/send_single_org_removed_from_org", ".html");
 reg!("email/send_org_invite", ".html");
+reg!("email/send_emergency_access_invite", ".html");
 reg!("email/twofactor_email", ".html");
 reg!("email/verify_email", ".html");
 reg!("email/welcome", ".html");

src/crypto.rs

@@ -3,6 +3,7 @@
 //
 use std::num::NonZeroU32;
+use data_encoding::HEXLOWER;
 use ring::{digest, hmac, pbkdf2};
 use crate::error::Error;
@@ -28,8 +29,6 @@ pub fn verify_password_hash(secret: &[u8], salt: &[u8], previous: &[u8], iterati
 // HMAC
 //
 pub fn hmac_sign(key: &str, data: &str) -> String {
-    use data_encoding::HEXLOWER;
     let key = hmac::Key::new(hmac::HMAC_SHA1_FOR_LEGACY_USE_ONLY, key.as_bytes());
     let signature = hmac::sign(&key, data.as_bytes());
@@ -52,6 +51,20 @@ pub fn get_random(mut array: Vec<u8>) -> Vec<u8> {
     array
 }
+
+pub fn generate_id(num_bytes: usize) -> String {
+    HEXLOWER.encode(&get_random(vec![0; num_bytes]))
+}
+
+pub fn generate_send_id() -> String {
+    // Send IDs are globally scoped, so make them longer to avoid collisions.
+    generate_id(32) // 256 bits
+}
+
+pub fn generate_attachment_id() -> String {
+    // Attachment IDs are scoped to a cipher, so they can be smaller.
+    generate_id(10) // 80 bits
+}
 pub fn generate_token(token_size: u32) -> Result<String, Error> {
     // A u64 can represent all whole numbers up to 19 digits long.
     if token_size > 19 {
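The new ID helpers are simply hex-encoded random bytes, so a 32-byte send ID becomes a 64-character string. A standalone sketch using the same `ring` and `data_encoding` crates this file already imports:

use data_encoding::HEXLOWER;
use ring::rand::{SecureRandom, SystemRandom};

/// Fill `num_bytes` with CSPRNG output and hex-encode the result.
fn generate_id(num_bytes: usize) -> String {
    let mut bytes = vec![0u8; num_bytes];
    SystemRandom::new().fill(&mut bytes).expect("rng failure");
    HEXLOWER.encode(&bytes)
}

fn main() {
    let send_id = generate_id(32);
    assert_eq!(send_id.len(), 64); // 256 bits -> 64 hex characters
    println!("{}", send_id);
}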

src/db/mod.rs

@@ -114,7 +114,7 @@ macro_rules! db_run {
     };
     // Different code for each db
-    ( $conn:ident: $( $($db:ident),+ $body:block )+ ) => {
+    ( $conn:ident: $( $($db:ident),+ $body:block )+ ) => {{
         #[allow(unused)] use diesel::prelude::*;
         match $conn {
             $($(
@@ -128,7 +128,7 @@ macro_rules! db_run {
             $body
         },
         )+)+
-    }
+    }}
     };
     // Same for all dbs
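The extra braces matter: expanding to `{{ ... }}` wraps the match in a block, so a `db_run!` invocation can be used as an expression whose value is the match result. A toy sketch of the same trick (the enum and arm bodies are illustrative, not the real connection types):

#[allow(dead_code)]
enum Conn {
    Sqlite,
    Mysql,
}

macro_rules! db_run {
    // `=> {{ ... }}`: the outer braces delimit the macro body, the inner
    // pair makes the expansion a block expression.
    ($conn:ident: $sqlite:block $mysql:block) => {{
        match $conn {
            Conn::Sqlite => $sqlite,
            Conn::Mysql => $mysql,
        }
    }};
}

fn main() {
    let conn = Conn::Sqlite;
    // Because the expansion is a block, its value can be assigned.
    let label = db_run! { conn: { "sqlite" } { "mysql" } };
    println!("{}", label);
}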
@@ -157,6 +157,24 @@ pub trait FromDb {
     fn from_db(self) -> Self::Output;
 }
+
+impl<T: FromDb> FromDb for Vec<T> {
+    type Output = Vec<T::Output>;
+    #[allow(clippy::wrong_self_convention)]
+    #[inline(always)]
+    fn from_db(self) -> Self::Output {
+        self.into_iter().map(crate::db::FromDb::from_db).collect()
+    }
+}
+
+impl<T: FromDb> FromDb for Option<T> {
+    type Output = Option<T::Output>;
+    #[allow(clippy::wrong_self_convention)]
+    #[inline(always)]
+    fn from_db(self) -> Self::Output {
+        self.map(crate::db::FromDb::from_db)
+    }
+}
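These blanket impls replace the per-model `Vec` and `Option` impls that the `db_object!` macro used to generate (removed further down). A self-contained sketch of the pattern, with a hand-written stand-in for a generated `CipherDb` type:

trait FromDb {
    type Output;
    fn from_db(self) -> Self::Output;
}

struct CipherDb { name: String }
struct Cipher { name: String }

// One concrete impl per model (normally macro-generated)...
impl FromDb for CipherDb {
    type Output = Cipher;
    fn from_db(self) -> Cipher { Cipher { name: self.name } }
}

// ...and two blanket impls cover every Vec<T> and Option<T> for free.
impl<T: FromDb> FromDb for Vec<T> {
    type Output = Vec<T::Output>;
    fn from_db(self) -> Self::Output { self.into_iter().map(FromDb::from_db).collect() }
}

impl<T: FromDb> FromDb for Option<T> {
    type Output = Option<T::Output>;
    fn from_db(self) -> Self::Output { self.map(FromDb::from_db) }
}

fn main() {
    let rows = vec![CipherDb { name: "a".into() }, CipherDb { name: "b".into() }];
    let ciphers: Vec<Cipher> = rows.from_db();
    assert_eq!(ciphers[0].name, "a");
    assert_eq!(ciphers.len(), 2);
}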
 // For each struct eg. Cipher, we create a CipherDb inside a module named __$db_model (where $db is sqlite, mysql or postgresql),
 // to implement the Diesel traits. We also provide methods to convert between them and the basic structs. Later, that module will be auto imported when using db_run!
 #[macro_export]
@@ -197,18 +215,9 @@ macro_rules! db_object {
     impl crate::db::FromDb for [<$name Db>] {
         type Output = super::$name;
+        #[allow(clippy::wrong_self_convention)]
         #[inline(always)] fn from_db(self) -> Self::Output { super::$name { $( $field: self.$field, )+ } }
     }
-    impl crate::db::FromDb for Vec<[<$name Db>]> {
-        type Output = Vec<super::$name>;
-        #[inline(always)] fn from_db(self) -> Self::Output { self.into_iter().map(crate::db::FromDb::from_db).collect() }
-    }
-    impl crate::db::FromDb for Option<[<$name Db>]> {
-        type Output = Option<super::$name>;
-        #[inline(always)] fn from_db(self) -> Self::Output { self.map(crate::db::FromDb::from_db) }
-    }
 }
 };
 }
@@ -269,7 +278,6 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
 // https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
 #[cfg(sqlite)]
 mod sqlite_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/sqlite");
     pub fn run_migrations() -> Result<(), super::Error> {
@@ -306,7 +314,6 @@ mod sqlite_migrations {
 #[cfg(mysql)]
 mod mysql_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/mysql");
     pub fn run_migrations() -> Result<(), super::Error> {
@@ -327,7 +334,6 @@ mod mysql_migrations {
 #[cfg(postgresql)]
 mod postgresql_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/postgresql");
     pub fn run_migrations() -> Result<(), super::Error> {

src/db/models/attachment.rs

@@ -1,3 +1,5 @@
+use std::io::ErrorKind;
+
 use serde_json::Value;
 use super::Cipher;
@@ -12,7 +14,7 @@ db_object! {
 pub struct Attachment {
     pub id: String,
     pub cipher_uuid: String,
-    pub file_name: String,
+    pub file_name: String, // encrypted
     pub file_size: i32,
     pub akey: Option<String>,
 }
@@ -20,13 +22,13 @@ db_object! {
 /// Local methods
 impl Attachment {
-    pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32) -> Self {
+    pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32, akey: Option<String>) -> Self {
         Self {
             id,
             cipher_uuid,
             file_name,
             file_size,
-            akey: None,
+            akey,
         }
     }
@@ -34,18 +36,17 @@ impl Attachment {
     format!("{}/{}/{}", CONFIG.attachments_folder(), self.cipher_uuid, self.id)
 }
+pub fn get_url(&self, host: &str) -> String {
+    format!("{}/attachments/{}/{}", host, self.cipher_uuid, self.id)
+}
 pub fn to_json(&self, host: &str) -> Value {
-    use crate::util::get_display_size;
-    let web_path = format!("{}/attachments/{}/{}", host, self.cipher_uuid, self.id);
-    let display_size = get_display_size(self.file_size);
     json!({
         "Id": self.id,
-        "Url": web_path,
+        "Url": self.get_url(host),
         "FileName": self.file_name,
         "Size": self.file_size.to_string(),
-        "SizeName": display_size,
+        "SizeName": crate::util::get_display_size(self.file_size),
         "Key": self.akey,
         "Object": "attachment"
     })
@@ -91,7 +92,7 @@ impl Attachment {
     }
 }
-pub fn delete(self, conn: &DbConn) -> EmptyResult {
+pub fn delete(&self, conn: &DbConn) -> EmptyResult {
     db_run! { conn: {
         crate::util::retry(
             || diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
@@ -99,14 +100,25 @@ impl Attachment {
         )
         .map_res("Error deleting attachment")?;
-        crate::util::delete_file(&self.get_file_path())?;
+        let file_path = &self.get_file_path();
+        match crate::util::delete_file(file_path) {
+            // Ignore "file not found" errors. This can happen when the
+            // upstream caller has already cleaned up the file as part of
+            // its own error handling.
+            Err(e) if e.kind() == ErrorKind::NotFound => {
+                debug!("File '{}' already deleted.", file_path);
                 Ok(())
+            }
+            Err(e) => Err(e.into()),
+            _ => Ok(()),
+        }
     }}
 }
 pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for attachment in Attachment::find_by_cipher(&cipher_uuid, &conn) {
-        attachment.delete(&conn)?;
+    for attachment in Attachment::find_by_cipher(cipher_uuid, conn) {
+        attachment.delete(conn)?;
     }
     Ok(())
 }
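The new match makes attachment deletion idempotent: a missing file is logged and treated as success rather than bubbling up an error. A standalone sketch of the same `ErrorKind::NotFound` handling against `std::fs` (the path is illustrative):

use std::io::ErrorKind;

/// Delete a file, treating "already gone" as success.
fn delete_file(path: &str) -> std::io::Result<()> {
    match std::fs::remove_file(path) {
        Err(e) if e.kind() == ErrorKind::NotFound => {
            println!("File '{}' already deleted.", path);
            Ok(())
        }
        other => other,
    }
}

fn main() -> std::io::Result<()> {
    // Succeeds even though the file never existed.
    delete_file("/tmp/definitely-missing-attachment")?;
    Ok(())
}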
@@ -131,16 +143,6 @@ impl Attachment {
     }}
 }
-pub fn find_by_ciphers(cipher_uuids: Vec<String>, conn: &DbConn) -> Vec<Self> {
-    db_run! { conn: {
-        attachments::table
-            .filter(attachments::cipher_uuid.eq_any(cipher_uuids))
-            .load::<AttachmentDb>(conn)
-            .expect("Error loading attachments")
-            .from_db()
-    }}
-}
 pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
     db_run! { conn: {
         let result: Option<i64> = attachments::table

src/db/models/cipher.rs

@@ -38,9 +38,16 @@ db_object! {
     pub password_history: Option<String>,
     pub deleted_at: Option<NaiveDateTime>,
+    pub reprompt: Option<i32>,
 }
 }
+
+#[allow(dead_code)]
+pub enum RepromptType {
+    None = 0,
+    Password = 1, // not currently used in server
+}
 /// Local methods
 impl Cipher {
     pub fn new(atype: i32, name: String) -> Self {
@@ -63,6 +70,7 @@ impl Cipher {
     data: String::new(),
     password_history: None,
     deleted_at: None,
+    reprompt: None,
 }
 }
 }
@@ -89,7 +97,7 @@ impl Cipher {
     let password_history_json =
         self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
-    let (read_only, hide_passwords) = match self.get_access_restrictions(&user_uuid, conn) {
+    let (read_only, hide_passwords) = match self.get_access_restrictions(user_uuid, conn) {
         Some((ro, hp)) => (ro, hp),
         None => {
             error!("Cipher ownership assertion failure");
@@ -136,8 +144,9 @@ impl Cipher {
     "Type": self.atype,
     "RevisionDate": format_date(&self.updated_at),
     "DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
-    "FolderId": self.get_folder_uuid(&user_uuid, conn),
-    "Favorite": self.is_favorite(&user_uuid, conn),
+    "FolderId": self.get_folder_uuid(user_uuid, conn),
+    "Favorite": self.is_favorite(user_uuid, conn),
+    "Reprompt": self.reprompt.unwrap_or(RepromptType::None as i32),
     "OrganizationId": self.organization_uuid,
     "Attachments": attachments_json,
     // We have UseTotp set to true by default within the Organization model.
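`Reprompt` is serialized with a fallback, so ciphers created before the migration (where the column is still `NULL`) report `RepromptType::None`. A minimal sketch of that mapping:

#[allow(dead_code)]
enum RepromptType {
    None = 0,
    Password = 1,
}

fn main() {
    // A pre-migration row: the nullable column holds no value.
    let reprompt: Option<i32> = None;
    let json_value = reprompt.unwrap_or(RepromptType::None as i32);
    assert_eq!(json_value, 0); // serialized as RepromptType::None
}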
@@ -184,13 +193,13 @@ impl Cipher {
     let mut user_uuids = Vec::new();
     match self.user_uuid {
         Some(ref user_uuid) => {
-            User::update_uuid_revision(&user_uuid, conn);
+            User::update_uuid_revision(user_uuid, conn);
             user_uuids.push(user_uuid.clone())
         }
         None => {
             // Belongs to Organization, need to update affected users
             if let Some(ref org_uuid) = self.organization_uuid {
-                UserOrganization::find_by_cipher_and_org(&self.uuid, &org_uuid, conn).iter().for_each(|user_org| {
+                UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).iter().for_each(|user_org| {
                     User::update_uuid_revision(&user_org.user_uuid, conn);
                     user_uuids.push(user_org.user_uuid.clone())
                 });
@@ -251,15 +260,15 @@
 }
 pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for cipher in Self::find_by_org(org_uuid, &conn) {
-        cipher.delete(&conn)?;
+    for cipher in Self::find_by_org(org_uuid, conn) {
+        cipher.delete(conn)?;
     }
     Ok(())
 }
 pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for cipher in Self::find_owned_by_user(user_uuid, &conn) {
-        cipher.delete(&conn)?;
+    for cipher in Self::find_owned_by_user(user_uuid, conn) {
+        cipher.delete(conn)?;
     }
     Ok(())
 }
@@ -270,7 +279,7 @@ impl Cipher {
     let now = Utc::now().naive_utc();
     let dt = now - Duration::days(auto_delete_days);
     for cipher in Self::find_deleted_before(&dt, conn) {
-        cipher.delete(&conn).ok();
+        cipher.delete(conn).ok();
     }
 }
 }
@@ -278,7 +287,7 @@ impl Cipher {
 pub fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
     User::update_uuid_revision(user_uuid, conn);
-    match (self.get_folder_uuid(&user_uuid, conn), folder_uuid) {
+    match (self.get_folder_uuid(user_uuid, conn), folder_uuid) {
         // No changes
         (None, None) => Ok(()),
         (Some(ref old), Some(ref new)) if old == new => Ok(()),
@@ -310,7 +319,7 @@ impl Cipher {
 /// Returns whether this cipher is owned by an org in which the user has full access.
 pub fn is_in_full_access_org(&self, user_uuid: &str, conn: &DbConn) -> bool {
     if let Some(ref org_uuid) = self.organization_uuid {
-        if let Some(user_org) = UserOrganization::find_by_user_and_org(&user_uuid, &org_uuid, conn) {
+        if let Some(user_org) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
             return user_org.has_full_access();
         }
     }
@@ -327,7 +336,7 @@ impl Cipher {
     // Check whether this cipher is directly owned by the user, or is in
     // a collection that the user has full access to. If so, there are no
     // access restrictions.
-    if self.is_owned_by_user(&user_uuid) || self.is_in_full_access_org(&user_uuid, &conn) {
+    if self.is_owned_by_user(user_uuid) || self.is_in_full_access_org(user_uuid, conn) {
         return Some((false, false));
     }
@@ -368,14 +377,14 @@ impl Cipher {
 }
 pub fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
-    match self.get_access_restrictions(&user_uuid, &conn) {
+    match self.get_access_restrictions(user_uuid, conn) {
         Some((read_only, _hide_passwords)) => !read_only,
         None => false,
     }
 }
 pub fn is_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
-    self.get_access_restrictions(&user_uuid, &conn).is_some()
+    self.get_access_restrictions(user_uuid, conn).is_some()
 }
 // Returns whether this cipher is a favorite of the specified user.

src/db/models/collection.rs

@@ -109,8 +109,8 @@ impl Collection {
 pub fn delete(self, conn: &DbConn) -> EmptyResult {
     self.update_users_revision(conn);
-    CollectionCipher::delete_all_by_collection(&self.uuid, &conn)?;
-    CollectionUser::delete_all_by_collection(&self.uuid, &conn)?;
+    CollectionCipher::delete_all_by_collection(&self.uuid, conn)?;
+    CollectionUser::delete_all_by_collection(&self.uuid, conn)?;
     db_run! { conn: {
         diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
@@ -120,8 +120,8 @@ impl Collection {
 }
 pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for collection in Self::find_by_organization(org_uuid, &conn) {
-        collection.delete(&conn)?;
+    for collection in Self::find_by_organization(org_uuid, conn) {
+        collection.delete(conn)?;
     }
     Ok(())
 }
@@ -220,7 +220,7 @@ impl Collection {
 }
 pub fn is_writable_by_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
-    match UserOrganization::find_by_user_and_org(&user_uuid, &self.org_uuid, &conn) {
+    match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn) {
         None => false, // Not in Org
         Some(user_org) => {
             if user_org.has_full_access() {
@@ -242,7 +242,7 @@ impl Collection {
 }
 pub fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
-    match UserOrganization::find_by_user_and_org(&user_uuid, &self.org_uuid, &conn) {
+    match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn) {
         None => true, // Not in Org
         Some(user_org) => {
             if user_org.has_full_access() {
@@ -286,7 +286,7 @@ impl CollectionUser {
     hide_passwords: bool,
     conn: &DbConn,
 ) -> EmptyResult {
-    User::update_uuid_revision(&user_uuid, conn);
+    User::update_uuid_revision(user_uuid, conn);
     db_run! { conn:
         sqlite, mysql {
@@ -375,7 +375,7 @@ impl CollectionUser {
 }
 pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
-    CollectionUser::find_by_collection(&collection_uuid, conn).iter().for_each(|collection| {
+    CollectionUser::find_by_collection(collection_uuid, conn).iter().for_each(|collection| {
         User::update_uuid_revision(&collection.user_uuid, conn);
     });
@@ -406,7 +406,7 @@ impl CollectionUser {
 /// Database methods
 impl CollectionCipher {
     pub fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
-        Self::update_users_revision(&collection_uuid, conn);
+        Self::update_users_revision(collection_uuid, conn);
         db_run! { conn:
             sqlite, mysql {
@@ -436,7 +436,7 @@ impl CollectionCipher {
 }
 pub fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
-    Self::update_users_revision(&collection_uuid, conn);
+    Self::update_users_revision(collection_uuid, conn);
     db_run! { conn: {
         diesel::delete(

src/db/models/device.rs

@@ -143,8 +143,8 @@ impl Device {
 }
 pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for device in Self::find_by_user(user_uuid, &conn) {
-        device.delete(&conn)?;
+    for device in Self::find_by_user(user_uuid, conn) {
+        device.delete(conn)?;
     }
     Ok(())
 }

src/db/models/emergency_access.rs (new file)

@@ -0,0 +1,282 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::User;
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "emergency_access"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "grantor_uuid")]
#[primary_key(uuid)]
pub struct EmergencyAccess {
pub uuid: String,
pub grantor_uuid: String,
pub grantee_uuid: Option<String>,
pub email: Option<String>,
pub key_encrypted: Option<String>,
pub atype: i32, //EmergencyAccessType
pub status: i32, //EmergencyAccessStatus
pub wait_time_days: i32,
pub recovery_initiated_at: Option<NaiveDateTime>,
pub last_notification_at: Option<NaiveDateTime>,
pub updated_at: NaiveDateTime,
pub created_at: NaiveDateTime,
}
}
/// Local methods
impl EmergencyAccess {
pub fn new(grantor_uuid: String, email: Option<String>, status: i32, atype: i32, wait_time_days: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
grantor_uuid,
grantee_uuid: None,
email,
status,
atype,
wait_time_days,
recovery_initiated_at: None,
created_at: now,
updated_at: now,
key_encrypted: None,
last_notification_at: None,
}
}
pub fn get_type_as_str(&self) -> &'static str {
if self.atype == EmergencyAccessType::View as i32 {
"View"
} else {
"Takeover"
}
}
pub fn has_type(&self, access_type: EmergencyAccessType) -> bool {
self.atype == access_type as i32
}
pub fn has_status(&self, status: EmergencyAccessStatus) -> bool {
self.status == status as i32
}
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"Object": "emergencyAccess",
})
}
pub fn to_json_grantor_details(&self, conn: &DbConn) -> Value {
let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).expect("Grantor user not found.");
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GrantorId": grantor_user.uuid,
"Email": grantor_user.email,
"Name": grantor_user.name,
"Object": "emergencyAccessGrantorDetails",
})
}
#[allow(clippy::manual_map)]
pub fn to_json_grantee_details(&self, conn: &DbConn) -> Value {
let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
Some(User::find_by_uuid(grantee_uuid, conn).expect("Grantee user not found."))
} else if let Some(email) = self.email.as_deref() {
Some(User::find_by_mail(email, conn).expect("Grantee user not found."))
} else {
None
};
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
"Email": grantee_user.as_ref().map_or("", |u| &u.email),
"Name": grantee_user.as_ref().map_or("", |u| &u.name),
"Object": "emergencyAccessGranteeDetails",
})
}
}
#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
pub enum EmergencyAccessType {
View = 0,
Takeover = 1,
}
impl EmergencyAccessType {
pub fn from_str(s: &str) -> Option<Self> {
match s {
"0" | "View" => Some(EmergencyAccessType::View),
"1" | "Takeover" => Some(EmergencyAccessType::Takeover),
_ => None,
}
}
}
impl PartialEq<i32> for EmergencyAccessType {
fn eq(&self, other: &i32) -> bool {
*other == *self as i32
}
}
impl PartialEq<EmergencyAccessType> for i32 {
fn eq(&self, other: &EmergencyAccessType) -> bool {
*self == *other as i32
}
}
pub enum EmergencyAccessStatus {
Invited = 0,
Accepted = 1,
Confirmed = 2,
RecoveryInitiated = 3,
RecoveryApproved = 4,
}
// region Database methods
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
impl EmergencyAccess {
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn);
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(emergency_access::table)
.values(EmergencyAccessDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(emergency_access::table)
.filter(emergency_access::uuid.eq(&self.uuid))
.set(EmergencyAccessDb::to_db(self))
.execute(conn)
.map_res("Error updating emergency access")
}
Err(e) => Err(e.into()),
}.map_res("Error saving emergency access")
}
postgresql {
let value = EmergencyAccessDb::to_db(self);
diesel::insert_into(emergency_access::table)
.values(&value)
.on_conflict(emergency_access::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving emergency access")
}
}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for ea in Self::find_all_by_grantor_uuid(user_uuid, conn) {
ea.delete(conn)?;
}
for ea in Self::find_all_by_grantee_uuid(user_uuid, conn) {
ea.delete(conn)?;
}
Ok(())
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn);
db_run! { conn: {
diesel::delete(emergency_access::table.filter(emergency_access::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error removing user from emergency access")
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub fn find_by_grantor_uuid_and_grantee_uuid_or_email(
grantor_uuid: &str,
grantee_uuid: &str,
email: &str,
conn: &DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.filter(emergency_access::grantee_uuid.eq(grantee_uuid).or(emergency_access::email.eq(email)))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub fn find_all_recoveries(conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::status.eq(EmergencyAccessStatus::RecoveryInitiated as i32))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub fn find_invited_by_grantee_email(grantee_email: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::email.eq(grantee_email))
.filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
}
// endregion

src/db/models/favorite.rs

@@ -32,10 +32,10 @@ impl Favorite {
 // Sets whether the specified cipher is a favorite of the specified user.
 pub fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, &conn), favorite);
+    let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn), favorite);
     match (old, new) {
         (false, true) => {
-            User::update_uuid_revision(user_uuid, &conn);
+            User::update_uuid_revision(user_uuid, conn);
             db_run! { conn: {
                 diesel::insert_into(favorites::table)
                     .values((
@@ -47,7 +47,7 @@ impl Favorite {
             }}
         }
         (true, false) => {
-            User::update_uuid_revision(user_uuid, &conn);
+            User::update_uuid_revision(user_uuid, conn);
             db_run! { conn: {
                 diesel::delete(
                     favorites::table

src/db/models/folder.rs

@@ -107,7 +107,7 @@ impl Folder {
 pub fn delete(&self, conn: &DbConn) -> EmptyResult {
     User::update_uuid_revision(&self.user_uuid, conn);
-    FolderCipher::delete_all_by_folder(&self.uuid, &conn)?;
+    FolderCipher::delete_all_by_folder(&self.uuid, conn)?;
     db_run! { conn: {
         diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid)))
@@ -117,8 +117,8 @@ impl Folder {
 }
 pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for folder in Self::find_by_user(user_uuid, &conn) {
-        folder.delete(&conn)?;
+    for folder in Self::find_by_user(user_uuid, conn) {
+        folder.delete(conn)?;
     }
     Ok(())
 }

src/db/models/mod.rs

@@ -2,6 +2,7 @@ mod attachment;
 mod cipher;
 mod collection;
 mod device;
+mod emergency_access;
 mod favorite;
 mod folder;
 mod org_policy;
@@ -14,6 +15,7 @@ pub use self::attachment::Attachment;
 pub use self::cipher::Cipher;
 pub use self::collection::{Collection, CollectionCipher, CollectionUser};
 pub use self::device::Device;
+pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
 pub use self::favorite::Favorite;
 pub use self::folder::{Folder, FolderCipher};
 pub use self::org_policy::{OrgPolicy, OrgPolicyType};

src/db/models/org_policy.rs

@@ -1,8 +1,10 @@
+use serde::Deserialize;
 use serde_json::Value;
 use crate::api::EmptyResult;
 use crate::db::DbConn;
 use crate::error::MapResult;
+use crate::util::UpCase;
 use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
@@ -20,15 +22,23 @@ db_object! {
 }
 }
-#[derive(Copy, Clone, num_derive::FromPrimitive)]
+#[derive(Copy, Clone, PartialEq, num_derive::FromPrimitive)]
 pub enum OrgPolicyType {
     TwoFactorAuthentication = 0,
     MasterPassword = 1,
     PasswordGenerator = 2,
-    // SingleOrg = 3, // Not currently supported.
+    SingleOrg = 3,
     // RequireSso = 4, // Not currently supported.
     PersonalOwnership = 5,
     DisableSend = 6,
+    SendOptions = 7,
+}
+
+// https://github.com/bitwarden/server/blob/master/src/Core/Models/Data/SendOptionsPolicyData.cs
+#[derive(Deserialize)]
+#[allow(non_snake_case)]
+pub struct SendOptionsPolicyData {
+    pub DisableHideEmail: bool,
 }
 /// Local methods
@@ -133,7 +143,7 @@ impl OrgPolicy {
     }}
 }
-pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
     db_run! { conn: {
         org_policies::table
             .inner_join(
@@ -174,8 +184,8 @@ impl OrgPolicy {
 /// and the user is not an owner or admin of that org. This is only useful for checking
 /// applicability of policy types that have these particular semantics.
 pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
-    // Returns confirmed users only.
-    for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+    // TODO: Should check confirmed and accepted users
+    for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
         if policy.enabled && policy.has_type(policy_type) {
             let org_uuid = &policy.org_uuid;
             if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
@@ -188,6 +198,29 @@ impl OrgPolicy {
     false
 }
+
+/// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
+/// option of the `Send Options` policy, and the user is not an owner or admin of that org.
+pub fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
+    for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
+        if policy.enabled && policy.has_type(OrgPolicyType::SendOptions) {
+            let org_uuid = &policy.org_uuid;
+            if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
+                if user.atype < UserOrgType::Admin {
+                    match serde_json::from_str::<UpCase<SendOptionsPolicyData>>(&policy.data) {
+                        Ok(opts) => {
+                            if opts.data.DisableHideEmail {
+                                return true;
+                            }
+                        }
+                        _ => error!("Failed to deserialize policy data: {}", policy.data),
+                    }
+                }
+            }
+        }
+    }
+    false
+}
 /*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
     db_run! { conn: {
         diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
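The policy payload arrives as JSON with PascalCase keys, which is why the struct keeps the `DisableHideEmail` casing. A standalone sketch of parsing it with plain `serde`, standing in for the project's `UpCase` wrapper (the JSON shape mirrors bitwarden/server's SendOptionsPolicyData):

use serde::Deserialize;

#[derive(Deserialize)]
#[allow(non_snake_case)]
struct SendOptionsPolicyData {
    DisableHideEmail: bool,
}

fn main() {
    // Example payload as an org admin might store it.
    let data = r#"{"DisableHideEmail": true}"#;
    let opts: SendOptionsPolicyData = serde_json::from_str(data).unwrap();
    assert!(opts.DisableHideEmail);
}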

src/db/models/organization.rs

@@ -2,7 +2,7 @@ use num_traits::FromPrimitive;
 use serde_json::Value;
 use std::cmp::Ordering;
-use super::{CollectionUser, OrgPolicy, User};
+use super::{CollectionUser, OrgPolicy, OrgPolicyType, User};
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -12,6 +12,8 @@ db_object! {
     pub uuid: String,
     pub name: String,
     pub billing_email: String,
+    pub private_key: Option<String>,
+    pub public_key: Option<String>,
 }
 #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -122,12 +124,13 @@ impl PartialOrd<UserOrgType> for i32 {
 /// Local methods
 impl Organization {
-    pub fn new(name: String, billing_email: String) -> Self {
+    pub fn new(name: String, billing_email: String, private_key: Option<String>, public_key: Option<String>) -> Self {
         Self {
             uuid: crate::util::get_uuid(),
             name,
             billing_email,
+            private_key,
+            public_key,
         }
     }
@@ -140,14 +143,16 @@ impl Organization {
     "MaxCollections": 10, // The value doesn't matter, we don't check server-side
     "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
     "Use2fa": true,
-    "UseDirectory": false,
-    "UseEvents": false,
-    "UseGroups": false,
+    "UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
+    "UseEvents": false, // not supported by us
+    "UseGroups": false, // not supported by us
     "UseTotp": true,
     "UsePolicies": true,
     "UseSso": false, // We do not support SSO
     "SelfHost": true,
     "UseApi": false, // not supported by us
+    "HasPublicAndPrivateKeys": self.private_key.is_some() && self.public_key.is_some(),
+    "ResetPasswordEnrolled": false, // not supported by us
     "BusinessName": null,
     "BusinessAddress1": null,
@@ -228,10 +233,10 @@ impl Organization {
 pub fn delete(self, conn: &DbConn) -> EmptyResult {
     use super::{Cipher, Collection};
-    Cipher::delete_all_by_organization(&self.uuid, &conn)?;
-    Collection::delete_all_by_organization(&self.uuid, &conn)?;
-    UserOrganization::delete_all_by_organization(&self.uuid, &conn)?;
-    OrgPolicy::delete_all_by_organization(&self.uuid, &conn)?;
+    Cipher::delete_all_by_organization(&self.uuid, conn)?;
+    Collection::delete_all_by_organization(&self.uuid, conn)?;
+    UserOrganization::delete_all_by_organization(&self.uuid, conn)?;
+    OrgPolicy::delete_all_by_organization(&self.uuid, conn)?;
     db_run! { conn: {
         diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
@@ -269,13 +274,15 @@ impl UserOrganization {
     "UsersGetPremium": true,
     "Use2fa": true,
-    "UseDirectory": false,
-    "UseEvents": false,
-    "UseGroups": false,
+    "UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
+    "UseEvents": false, // not supported by us
+    "UseGroups": false, // not supported by us
     "UseTotp": true,
     "UsePolicies": true,
     "UseApi": false, // not supported by us
     "SelfHost": true,
+    "HasPublicAndPrivateKeys": org.private_key.is_some() && org.public_key.is_some(),
+    "ResetPasswordEnrolled": false, // not supported by us
     "SsoBound": false, // We do not support SSO
     "UseSso": false, // We do not support SSO
     // TODO: Add support for Business Portal
@@ -283,6 +290,8 @@
     // For now they still have that code also in the web-vault, but they will remove it at some point.
     // https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
     "UseBusinessPortal": false, // Disable BusinessPortal Button
+    "ProviderId": null,
+    "ProviderName": null,
     // TODO: Add support for Custom User Roles
     // See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
@@ -293,10 +302,12 @@ impl UserOrganization {
     // "AccessReports": false,
     // "ManageAllCollections": false,
     // "ManageAssignedCollections": false,
+    // "ManageCiphers": false,
     // "ManageGroups": false,
     // "ManagePolicies": false,
+    // "ManageResetPassword": false,
     // "ManageSso": false,
-    // "ManageUsers": false
+    // "ManageUsers": false,
     // },
     "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@@ -402,7 +413,7 @@ impl UserOrganization {
 pub fn delete(self, conn: &DbConn) -> EmptyResult {
     User::update_uuid_revision(&self.user_uuid, conn);
-    CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, &conn)?;
+    CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, conn)?;
     db_run! { conn: {
         diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
@@ -412,22 +423,22 @@ impl UserOrganization {
 }
 pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for user_org in Self::find_by_org(&org_uuid, &conn) {
-        user_org.delete(&conn)?;
+    for user_org in Self::find_by_org(org_uuid, conn) {
+        user_org.delete(conn)?;
     }
     Ok(())
 }
 pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for user_org in Self::find_any_state_by_user(&user_uuid, &conn) {
-        user_org.delete(&conn)?;
+    for user_org in Self::find_any_state_by_user(user_uuid, conn) {
+        user_org.delete(conn)?;
     }
     Ok(())
 }
 pub fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
     if let Some(user) = super::User::find_by_mail(email, conn) {
-        if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, &conn) {
+        if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, conn) {
             return Some(user_org);
         }
     }
@@ -466,7 +477,7 @@ impl UserOrganization {
     }}
 }
-pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
     db_run! { conn: {
         users_organizations::table
             .filter(users_organizations::user_uuid.eq(user_uuid))
@@ -535,6 +546,25 @@ impl UserOrganization {
     }}
 }
+
+pub fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> Vec<Self> {
+    db_run! { conn: {
+        users_organizations::table
+            .inner_join(
+                org_policies::table.on(
+                    org_policies::org_uuid.eq(users_organizations::org_uuid)
+                        .and(users_organizations::user_uuid.eq(user_uuid))
+                        .and(org_policies::atype.eq(policy_type as i32))
+                        .and(org_policies::enabled.eq(true)))
+            )
+            .filter(
+                users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+            )
+            .select(users_organizations::all_columns)
+            .load::<UserOrganizationDb>(conn)
+            .unwrap_or_default().from_db()
+    }}
+}
 pub fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
     db_run! { conn: {
         users_organizations::table

src/db/models/send.rs

@@ -36,6 +36,7 @@ db_object! {
     pub deletion_date: NaiveDateTime,
     pub disabled: bool,
+    pub hide_email: Option<bool>,
 }
 }
@@ -73,6 +74,7 @@ impl Send {
     deletion_date,
     disabled: false,
+    hide_email: None,
 }
 }
@@ -101,6 +103,22 @@
     }
 }
+
+pub fn creator_identifier(&self, conn: &DbConn) -> Option<String> {
+    if let Some(hide_email) = self.hide_email {
+        if hide_email {
+            return None;
+        }
+    }
+
+    if let Some(user_uuid) = &self.user_uuid {
+        if let Some(user) = User::find_by_uuid(user_uuid, conn) {
+            return Some(user.email);
+        }
+    }
+
+    None
+}
 pub fn to_json(&self) -> Value {
     use crate::util::format_date;
     use data_encoding::BASE64URL_NOPAD;
@@ -123,6 +141,7 @@
     "AccessCount": self.access_count,
     "Password": self.password_hash.as_deref().map(|h| BASE64URL_NOPAD.encode(h)),
     "Disabled": self.disabled,
+    "HideEmail": self.hide_email,
     "RevisionDate": format_date(&self.revision_date),
     "ExpirationDate": self.expiration_date.as_ref().map(format_date),
@@ -131,7 +150,7 @@ impl Send {
     })
 }
-pub fn to_json_access(&self) -> Value {
+pub fn to_json_access(&self, conn: &DbConn) -> Value {
     use crate::util::format_date;
     let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
@@ -145,6 +164,7 @@ impl Send {
     "File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
     "ExpirationDate": self.expiration_date.as_ref().map(format_date),
+    "CreatorIdentifier": self.creator_identifier(conn),
     "Object": "send-access",
     })
 }
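`creator_identifier` only reveals the creator's email when `hide_email` does not explicitly hide it. A minimal sketch of that decision table as a free function (the signature is illustrative, not the real method):

fn creator_identifier(hide_email: Option<bool>, email: Option<&str>) -> Option<String> {
    // An explicit opt-out hides the address even when it is known.
    if hide_email == Some(true) {
        return None;
    }
    email.map(str::to_owned)
}

fn main() {
    assert_eq!(creator_identifier(Some(true), Some("a@b.c")), None);
    assert_eq!(creator_identifier(Some(false), Some("a@b.c")), Some("a@b.c".into()));
    assert_eq!(creator_identifier(None, Some("a@b.c")), Some("a@b.c".into()));
    assert_eq!(creator_identifier(None, None), None); // org-owned, no user
}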
@@ -207,25 +227,28 @@
 /// Purge all sends that are past their deletion date.
 pub fn purge(conn: &DbConn) {
-    for send in Self::find_by_past_deletion_date(&conn) {
-        send.delete(&conn).ok();
+    for send in Self::find_by_past_deletion_date(conn) {
+        send.delete(conn).ok();
     }
 }
-pub fn update_users_revision(&self, conn: &DbConn) {
+pub fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
+    let mut user_uuids = Vec::new();
     match &self.user_uuid {
         Some(user_uuid) => {
-            User::update_uuid_revision(&user_uuid, conn);
+            User::update_uuid_revision(user_uuid, conn);
+            user_uuids.push(user_uuid.clone())
         }
         None => {
             // Belongs to Organization, not implemented
         }
-    }
+    };
+    user_uuids
 }
 pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-    for send in Self::find_by_user(user_uuid, &conn) {
-        send.delete(&conn)?;
+    for send in Self::find_by_user(user_uuid, conn) {
+        send.delete(conn)?;
     }
     Ok(())
 }

src/db/models/two_factor.rs

@@ -31,11 +31,14 @@ pub enum TwoFactorType {
     U2f = 4,
     Remember = 5,
     OrganizationDuo = 6,
+    Webauthn = 7,
     // These are implementation details
     U2fRegisterChallenge = 1000,
     U2fLoginChallenge = 1001,
     EmailVerificationChallenge = 1002,
+    WebauthnRegisterChallenge = 1003,
+    WebauthnLoginChallenge = 1004,
 }
 /// Local methods
@@ -146,4 +149,73 @@ impl TwoFactor {
.map_res("Error deleting twofactors") .map_res("Error deleting twofactors")
}} }}
} }
pub fn migrate_u2f_to_webauthn(conn: &DbConn) -> EmptyResult {
let u2f_factors = db_run! { conn: {
twofactor::table
.filter(twofactor::atype.eq(TwoFactorType::U2f as i32))
.load::<TwoFactorDb>(conn)
.expect("Error loading twofactor")
.from_db()
}};
use crate::api::core::two_factor::u2f::U2FRegistration;
use crate::api::core::two_factor::webauthn::{get_webauthn_registrations, WebauthnRegistration};
use std::convert::TryInto;
use webauthn_rs::proto::*;
for mut u2f in u2f_factors {
let mut regs: Vec<U2FRegistration> = serde_json::from_str(&u2f.data)?;
// If there are no registrations or they are migrated (we do the migration in batch so we can consider them all migrated when the first one is)
if regs.is_empty() || regs[0].migrated == Some(true) {
continue;
}
let (_, mut webauthn_regs) = get_webauthn_registrations(&u2f.user_uuid, conn)?;
// If the user already has webauthn registrations saved, don't overwrite them
if !webauthn_regs.is_empty() {
continue;
}
for reg in &mut regs {
let x: [u8; 32] = reg.reg.pub_key[1..33].try_into().unwrap();
let y: [u8; 32] = reg.reg.pub_key[33..65].try_into().unwrap();
let key = COSEKey {
type_: COSEAlgorithm::ES256,
key: COSEKeyType::EC_EC2(COSEEC2Key {
curve: ECDSACurve::SECP256R1,
x,
y,
}),
};
let new_reg = WebauthnRegistration {
id: reg.id,
migrated: true,
name: reg.name.clone(),
credential: Credential {
counter: reg.counter,
verified: false,
cred: key,
cred_id: reg.reg.key_handle.clone(),
registration_policy: UserVerificationPolicy::Discouraged,
},
};
webauthn_regs.push(new_reg);
reg.migrated = Some(true);
}
u2f.data = serde_json::to_string(&regs)?;
u2f.save(conn)?;
TwoFactor::new(u2f.user_uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&webauthn_regs)?)
.save(conn)?;
}
Ok(())
}
} }
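
The [1..33] and [33..65] slices in the migration rely on the U2F public-key encoding: an uncompressed SEC1 P-256 point of exactly 65 bytes, a 0x04 tag byte followed by the 32-byte x and y coordinates. A standalone sketch of that split (the helper name is illustrative):

    use std::convert::TryInto;

    // Split an uncompressed SEC1 P-256 point (0x04 || x || y) into its
    // coordinates, the same layout migrate_u2f_to_webauthn assumes.
    fn split_uncompressed_p256(pub_key: &[u8]) -> Option<([u8; 32], [u8; 32])> {
        if pub_key.len() != 65 || pub_key[0] != 0x04 {
            return None; // not an uncompressed point
        }
        let x: [u8; 32] = pub_key[1..33].try_into().ok()?;
        let y: [u8; 32] = pub_key[33..65].try_into().ok()?;
        Some((x, y))
    }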

View File

@@ -1,4 +1,4 @@
-use chrono::{NaiveDateTime, Utc};
+use chrono::{Duration, NaiveDateTime, Utc};
 use serde_json::Value;
 use crate::crypto;
@@ -63,8 +63,9 @@ enum UserStatus {
 #[derive(Serialize, Deserialize)]
 pub struct UserStampException {
-    pub route: String,
+    pub routes: Vec<String>,
     pub security_stamp: String,
+    pub expire: i64,
 }
 /// Local methods
@@ -72,9 +73,9 @@ impl User {
     pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
     pub const CLIENT_KDF_ITER_DEFAULT: i32 = 100_000;
-    pub fn new(mail: String) -> Self {
+    pub fn new(email: String) -> Self {
         let now = Utc::now().naive_utc();
-        let email = mail.to_lowercase();
+        let email = email.to_lowercase();
         Self {
             uuid: crate::util::get_uuid(),
@@ -135,9 +136,11 @@ impl User {
     /// # Arguments
     ///
     /// * `password` - A str which contains a hashed version of the users master password.
-    /// * `allow_next_route` - A Option<&str> with the function name of the next allowed (rocket) route.
+    /// * `allow_next_route` - A Option<Vec<String>> with the function names of the next allowed (rocket) routes.
+    ///                        These routes are able to use the previous stamp id for the next 2 minutes.
+    ///                        After these 2 minutes this stamp will expire.
     ///
-    pub fn set_password(&mut self, password: &str, allow_next_route: Option<&str>) {
+    pub fn set_password(&mut self, password: &str, allow_next_route: Option<Vec<String>>) {
         self.password_hash = crypto::hash_password(password.as_bytes(), &self.salt, self.password_iterations as u32);
         if let Some(route) = allow_next_route {
@@ -154,30 +157,26 @@ impl User {
     /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
     ///
     /// # Arguments
-    /// * `route_exception` - A str with the function name of the next allowed (rocket) route.
+    /// * `route_exception` - A Vec<String> with the function names of the next allowed (rocket) routes.
+    ///                       These routes are able to use the previous stamp id for the next 2 minutes.
+    ///                       After these 2 minutes this stamp will expire.
     ///
-    /// ### Future
-    /// In the future it could be posible that we need more of these exception routes.
-    /// In that case we could use an Vec<UserStampException> and add multiple exceptions.
-    pub fn set_stamp_exception(&mut self, route_exception: &str) {
+    pub fn set_stamp_exception(&mut self, route_exception: Vec<String>) {
         let stamp_exception = UserStampException {
-            route: route_exception.to_string(),
+            routes: route_exception,
             security_stamp: self.security_stamp.to_string(),
+            expire: (Utc::now().naive_utc() + Duration::minutes(2)).timestamp(),
         };
         self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
     }
     /// Resets the stamp_exception to prevent re-use of the previous security-stamp
-    ///
-    /// ### Future
-    /// In the future it could be posible that we need more of these exception routes.
-    /// In that case we could use an Vec<UserStampException> and add multiple exceptions.
     pub fn reset_stamp_exception(&mut self) {
         self.stamp_exception = None;
     }
 }
-use super::{Cipher, Device, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
+use super::{Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
 use crate::db::DbConn;
 use crate::api::EmptyResult;
@@ -186,8 +185,8 @@ use crate::error::MapResult;
 /// Database methods
 impl User {
     pub fn to_json(&self, conn: &DbConn) -> Value {
-        let orgs = UserOrganization::find_by_user(&self.uuid, conn);
-        let orgs_json: Vec<Value> = orgs.iter().map(|c| c.to_json(&conn)).collect();
+        let orgs = UserOrganization::find_confirmed_by_user(&self.uuid, conn);
+        let orgs_json: Vec<Value> = orgs.iter().map(|c| c.to_json(conn)).collect();
         let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).is_empty();
         // TODO: Might want to save the status field in the DB
@@ -211,7 +210,10 @@ impl User {
             "PrivateKey": self.private_key,
             "SecurityStamp": self.security_stamp,
             "Organizations": orgs_json,
-            "Object": "profile"
+            "Providers": [],
+            "ProviderOrganizations": [],
+            "ForcePasswordReset": false,
+            "Object": "profile",
         })
     }
@@ -254,7 +256,7 @@ impl User {
     }
     pub fn delete(self, conn: &DbConn) -> EmptyResult {
-        for user_org in UserOrganization::find_by_user(&self.uuid, conn) {
+        for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn) {
             if user_org.atype == UserOrgType::Owner {
                 let owner_type = UserOrgType::Owner as i32;
                 if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).len() <= 1 {
@@ -264,6 +266,7 @@ impl User {
         }
         Send::delete_all_by_user(&self.uuid, conn)?;
+        EmergencyAccess::delete_all_by_user(&self.uuid, conn)?;
         UserOrganization::delete_all_by_user(&self.uuid, conn)?;
         Cipher::delete_all_by_user(&self.uuid, conn)?;
         Favorite::delete_all_by_user(&self.uuid, conn)?;
@@ -347,7 +350,8 @@ impl User {
 }
 impl Invitation {
-    pub const fn new(email: String) -> Self {
+    pub fn new(email: String) -> Self {
+        let email = email.to_lowercase();
         Self {
             email,
         }
@@ -398,8 +402,8 @@ impl Invitation {
     }
     pub fn take(mail: &str, conn: &DbConn) -> bool {
-        match Self::find_by_mail(mail, &conn) {
-            Some(invitation) => invitation.delete(&conn).is_ok(),
+        match Self::find_by_mail(mail, conn) {
+            Some(invitation) => invitation.delete(conn).is_ok(),
             None => false,
         }
     }
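
With routes now a Vec<String> and an explicit expire timestamp, a caller honoring a stamp exception has three things to verify: the requested route is listed, the stored stamp still matches, and the two-minute window has not passed. A hedged sketch of such a check (the function is hypothetical; the struct mirrors the one above):

    use chrono::Utc;

    pub struct UserStampException {
        pub routes: Vec<String>,
        pub security_stamp: String,
        pub expire: i64,
    }

    fn stamp_exception_allows(exc: &UserStampException, route: &str, stamp: &str) -> bool {
        exc.routes.iter().any(|r| r == route)              // route must be allowed
            && exc.security_stamp == stamp                 // stamp must still match
            && Utc::now().naive_utc().timestamp() < exc.expire // and not be expired
    }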

View File

@@ -22,6 +22,7 @@ table! {
         data -> Text,
         password_history -> Nullable<Text>,
         deleted_at -> Nullable<Datetime>,
+        reprompt -> Nullable<Integer>,
     }
 }
@@ -99,6 +100,8 @@ table! {
         uuid -> Text,
         name -> Text,
         billing_email -> Text,
+        private_key -> Nullable<Text>,
+        public_key -> Nullable<Text>,
     }
 }
@@ -122,6 +125,7 @@ table! {
         expiration_date -> Nullable<Datetime>,
         deletion_date -> Datetime,
         disabled -> Bool,
+        hide_email -> Nullable<Bool>,
     }
 }
@@ -188,6 +192,23 @@ table! {
     }
 }
+table! {
+    emergency_access (uuid) {
+        uuid -> Text,
+        grantor_uuid -> Text,
+        grantee_uuid -> Nullable<Text>,
+        email -> Nullable<Text>,
+        key_encrypted -> Nullable<Text>,
+        atype -> Integer,
+        status -> Integer,
+        wait_time_days -> Integer,
+        recovery_initiated_at -> Nullable<Timestamp>,
+        last_notification_at -> Nullable<Timestamp>,
+        updated_at -> Timestamp,
+        created_at -> Timestamp,
+    }
+}
 joinable!(attachments -> ciphers (cipher_uuid));
 joinable!(ciphers -> organizations (organization_uuid));
 joinable!(ciphers -> users (user_uuid));
@@ -206,6 +227,7 @@ joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
 joinable!(users_organizations -> organizations (org_uuid));
 joinable!(users_organizations -> users (user_uuid));
+joinable!(emergency_access -> users (grantor_uuid));
 allow_tables_to_appear_in_same_query!(
     attachments,
@@ -223,4 +245,5 @@ allow_tables_to_appear_in_same_query!(
     users,
     users_collections,
     users_organizations,
+    emergency_access,
 );

View File

@@ -22,6 +22,7 @@ table! {
         data -> Text,
         password_history -> Nullable<Text>,
         deleted_at -> Nullable<Timestamp>,
+        reprompt -> Nullable<Integer>,
     }
 }
@@ -99,6 +100,8 @@ table! {
         uuid -> Text,
         name -> Text,
         billing_email -> Text,
+        private_key -> Nullable<Text>,
+        public_key -> Nullable<Text>,
     }
 }
@@ -122,6 +125,7 @@ table! {
         expiration_date -> Nullable<Timestamp>,
         deletion_date -> Timestamp,
         disabled -> Bool,
+        hide_email -> Nullable<Bool>,
     }
 }
@@ -188,6 +192,23 @@ table! {
     }
 }
+table! {
+    emergency_access (uuid) {
+        uuid -> Text,
+        grantor_uuid -> Text,
+        grantee_uuid -> Nullable<Text>,
+        email -> Nullable<Text>,
+        key_encrypted -> Nullable<Text>,
+        atype -> Integer,
+        status -> Integer,
+        wait_time_days -> Integer,
+        recovery_initiated_at -> Nullable<Timestamp>,
+        last_notification_at -> Nullable<Timestamp>,
+        updated_at -> Timestamp,
+        created_at -> Timestamp,
+    }
+}
 joinable!(attachments -> ciphers (cipher_uuid));
 joinable!(ciphers -> organizations (organization_uuid));
 joinable!(ciphers -> users (user_uuid));
@@ -206,6 +227,7 @@ joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
 joinable!(users_organizations -> organizations (org_uuid));
 joinable!(users_organizations -> users (user_uuid));
+joinable!(emergency_access -> users (grantor_uuid));
 allow_tables_to_appear_in_same_query!(
     attachments,
@@ -223,4 +245,5 @@ allow_tables_to_appear_in_same_query!(
     users,
     users_collections,
     users_organizations,
+    emergency_access,
 );

View File

@@ -22,6 +22,7 @@ table! {
         data -> Text,
         password_history -> Nullable<Text>,
         deleted_at -> Nullable<Timestamp>,
+        reprompt -> Nullable<Integer>,
     }
 }
@@ -99,6 +100,8 @@ table! {
         uuid -> Text,
         name -> Text,
         billing_email -> Text,
+        private_key -> Nullable<Text>,
+        public_key -> Nullable<Text>,
     }
 }
@@ -122,6 +125,7 @@ table! {
         expiration_date -> Nullable<Timestamp>,
         deletion_date -> Timestamp,
         disabled -> Bool,
+        hide_email -> Nullable<Bool>,
     }
 }
@@ -188,6 +192,23 @@ table! {
     }
 }
+table! {
+    emergency_access (uuid) {
+        uuid -> Text,
+        grantor_uuid -> Text,
+        grantee_uuid -> Nullable<Text>,
+        email -> Nullable<Text>,
+        key_encrypted -> Nullable<Text>,
+        atype -> Integer,
+        status -> Integer,
+        wait_time_days -> Integer,
+        recovery_initiated_at -> Nullable<Timestamp>,
+        last_notification_at -> Nullable<Timestamp>,
+        updated_at -> Timestamp,
+        created_at -> Timestamp,
+    }
+}
 joinable!(attachments -> ciphers (cipher_uuid));
 joinable!(ciphers -> organizations (organization_uuid));
 joinable!(ciphers -> users (user_uuid));
@@ -206,6 +227,7 @@ joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
 joinable!(users_organizations -> organizations (org_uuid));
 joinable!(users_organizations -> users (user_uuid));
+joinable!(emergency_access -> users (grantor_uuid));
 allow_tables_to_appear_in_same_query!(
     attachments,
@@ -223,4 +245,5 @@ allow_tables_to_appear_in_same_query!(
     users,
     users_collections,
     users_organizations,
+    emergency_access,
 );
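
The emergency_access table plus its joinable! and allow_tables_to_appear_in_same_query! entries (identical across the three backend schemas above, apart from Datetime vs Timestamp) are what allow Diesel to join it against users on grantor_uuid. An illustrative query under those declarations, with column types read off the schema; the connection type and surrounding module are assumptions:

    use diesel::prelude::*;

    // Load (uuid, status) pairs for entries whose grantor user still exists;
    // the join key is grantor_uuid, per the joinable! declaration.
    fn emergency_access_with_grantor(conn: &SqliteConnection) -> QueryResult<Vec<(String, i32)>> {
        emergency_access::table
            .inner_join(users::table)
            .select((emergency_access::uuid, emergency_access::status))
            .load(conn)
    }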

View File

@@ -39,20 +39,19 @@ use diesel::ConnectionError as DieselConErr;
 use diesel_migrations::RunMigrationsError as DieselMigErr;
 use handlebars::RenderError as HbErr;
 use jsonwebtoken::errors::Error as JwtErr;
+use lettre::address::AddressError as AddrErr;
+use lettre::error::Error as LettreErr;
+use lettre::transport::smtp::Error as SmtpErr;
+use openssl::error::ErrorStack as SSLErr;
 use regex::Error as RegexErr;
 use reqwest::Error as ReqErr;
 use serde_json::{Error as SerdeErr, Value};
 use std::io::Error as IoErr;
 use std::time::SystemTimeError as TimeErr;
 use u2f::u2ferror::U2fError as U2fErr;
+use webauthn_rs::error::WebauthnError as WebauthnErr;
 use yubico::yubicoerror::YubicoError as YubiErr;
-use lettre::address::AddressError as AddrErr;
-use lettre::error::Error as LettreErr;
-use lettre::message::mime::FromStrError as FromStrErr;
-use lettre::transport::smtp::Error as SmtpErr;
 #[derive(Serialize)]
 pub struct Empty {}
@@ -63,31 +62,32 @@ pub struct Empty {}
 // The second one contains the function used to obtain the response sent to the client
 make_error! {
     // Just an empty error
-    EmptyError(Empty): _no_source, _serialize,
+    Empty(Empty): _no_source, _serialize,
     // Used to represent err! calls
-    SimpleError(String): _no_source, _api_error,
+    Simple(String): _no_source, _api_error,
     // Used for special return values, like 2FA errors
-    JsonError(Value): _no_source, _serialize,
-    DbError(DieselErr): _has_source, _api_error,
-    R2d2Error(R2d2Err): _has_source, _api_error,
-    U2fError(U2fErr): _has_source, _api_error,
-    SerdeError(SerdeErr): _has_source, _api_error,
-    JWtError(JwtErr): _has_source, _api_error,
-    TemplError(HbErr): _has_source, _api_error,
+    Json(Value): _no_source, _serialize,
+    Db(DieselErr): _has_source, _api_error,
+    R2d2(R2d2Err): _has_source, _api_error,
+    U2f(U2fErr): _has_source, _api_error,
+    Serde(SerdeErr): _has_source, _api_error,
+    JWt(JwtErr): _has_source, _api_error,
+    Handlebars(HbErr): _has_source, _api_error,
     //WsError(ws::Error): _has_source, _api_error,
-    IoError(IoErr): _has_source, _api_error,
-    TimeError(TimeErr): _has_source, _api_error,
-    ReqError(ReqErr): _has_source, _api_error,
-    RegexError(RegexErr): _has_source, _api_error,
-    YubiError(YubiErr): _has_source, _api_error,
-    LettreError(LettreErr): _has_source, _api_error,
-    AddressError(AddrErr): _has_source, _api_error,
-    SmtpError(SmtpErr): _has_source, _api_error,
-    FromStrError(FromStrErr): _has_source, _api_error,
-    DieselConError(DieselConErr): _has_source, _api_error,
-    DieselMigError(DieselMigErr): _has_source, _api_error,
+    Io(IoErr): _has_source, _api_error,
+    Time(TimeErr): _has_source, _api_error,
+    Req(ReqErr): _has_source, _api_error,
+    Regex(RegexErr): _has_source, _api_error,
+    Yubico(YubiErr): _has_source, _api_error,
+    Lettre(LettreErr): _has_source, _api_error,
+    Address(AddrErr): _has_source, _api_error,
+    Smtp(SmtpErr): _has_source, _api_error,
+    OpenSSL(SSLErr): _has_source, _api_error,
+    DieselCon(DieselConErr): _has_source, _api_error,
+    DieselMig(DieselMigErr): _has_source, _api_error,
+    Webauthn(WebauthnErr): _has_source, _api_error,
 }
 impl std::fmt::Debug for Error {
@@ -95,15 +95,15 @@ impl std::fmt::Debug for Error {
         match self.source() {
             Some(e) => write!(f, "{}.\n[CAUSE] {:#?}", self.message, e),
             None => match self.error {
-                ErrorKind::EmptyError(_) => Ok(()),
-                ErrorKind::SimpleError(ref s) => {
+                ErrorKind::Empty(_) => Ok(()),
+                ErrorKind::Simple(ref s) => {
                     if &self.message == s {
                         write!(f, "{}", self.message)
                     } else {
                         write!(f, "{}. {}", self.message, s)
                     }
                 }
-                ErrorKind::JsonError(_) => write!(f, "{}", self.message),
+                ErrorKind::Json(_) => write!(f, "{}", self.message),
                 _ => unreachable!(),
             },
         }
@@ -166,7 +166,7 @@ fn _serialize(e: &impl serde::Serialize, _msg: &str) -> String {
 fn _api_error(_: &impl std::any::Any, msg: &str) -> String {
     let json = json!({
-        "Message": "",
+        "Message": msg,
         "error": "",
         "error_description": "",
         "ValidationErrors": {"": [ msg ]},
@@ -174,6 +174,9 @@ fn _api_error(_: &impl std::any::Any, msg: &str) -> String {
             "Message": msg,
             "Object": "error"
         },
+        "ExceptionMessage": null,
+        "ExceptionStackTrace": null,
+        "InnerExceptionMessage": null,
         "Object": "error"
     });
     _serialize(&json, "")
@@ -191,8 +194,8 @@ use rocket::response::{self, Responder, Response};
 impl<'r> Responder<'r> for Error {
     fn respond_to(self, _: &Request) -> response::Result<'r> {
         match self.error {
-            ErrorKind::EmptyError(_) => {} // Don't print the error in this situation
-            ErrorKind::SimpleError(_) => {} // Don't print the error in this situation
+            ErrorKind::Empty(_) => {} // Don't print the error in this situation
+            ErrorKind::Simple(_) => {} // Don't print the error in this situation
             _ => error!(target: "error", "{:#?}", self),
         };
@@ -217,13 +220,22 @@ macro_rules! err {
     }};
 }
+macro_rules! err_silent {
+    ($msg:expr) => {{
+        return Err(crate::error::Error::new($msg, $msg));
+    }};
+    ($usr_msg:expr, $log_value:expr) => {{
+        return Err(crate::error::Error::new($usr_msg, $log_value));
+    }};
+}
 #[macro_export]
 macro_rules! err_code {
-    ($msg:expr, $err_code: literal) => {{
+    ($msg:expr, $err_code: expr) => {{
         error!("{}", $msg);
         return Err(crate::error::Error::new($msg, $msg).with_code($err_code));
     }};
-    ($usr_msg:expr, $log_value:expr, $err_code: literal) => {{
+    ($usr_msg:expr, $log_value:expr, $err_code: expr) => {{
         error!("{}. {}", $usr_msg, $log_value);
         return Err(crate::error::Error::new($usr_msg, $log_value).with_code($err_code));
     }};
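
The new err_silent! macro has the same early-return shape as err!, but skips the error! log call, which suits expected failures (for example, probing requests) that would otherwise flood the log. An illustrative use, with the function and message invented for the example:

    fn check_token(token: &str) -> Result<(), crate::error::Error> {
        if token.is_empty() {
            // Returns Err immediately, without writing to the log.
            err_silent!("Invalid or expired token")
        }
        Ok(())
    }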

View File

@@ -13,7 +13,10 @@ use lettre::{
 use crate::{
     api::EmptyResult,
-    auth::{encode_jwt, generate_delete_claims, generate_invite_claims, generate_verify_email_claims},
+    auth::{
+        encode_jwt, generate_delete_claims, generate_emergency_access_invite_claims, generate_invite_claims,
+        generate_verify_email_claims,
+    },
     error::Error,
     CONFIG,
 };
@@ -27,7 +30,7 @@ fn mailer() -> SmtpTransport {
         .timeout(Some(Duration::from_secs(CONFIG.smtp_timeout())));
     // Determine security
-    let smtp_client = if CONFIG.smtp_ssl() {
+    let smtp_client = if CONFIG.smtp_ssl() || CONFIG.smtp_explicit_tls() {
         let mut tls_parameters = TlsParameters::builder(host);
         if CONFIG.smtp_accept_invalid_hostnames() {
             tls_parameters = tls_parameters.dangerous_accept_invalid_hostnames(true);
@@ -99,9 +102,8 @@ fn get_template(template_name: &str, data: &serde_json::Value) -> Result<(String
         None => err!("Template doesn't contain subject"),
     };
-    use newline_converter::unix2dos;
     let body = match text_split.next() {
-        Some(s) => unix2dos(s.trim()).to_string(),
+        Some(s) => s.trim().to_string(),
         None => err!("Template doesn't contain body"),
     };
@@ -181,6 +183,30 @@ pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text)
 }
+pub fn send_2fa_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/send_2fa_removed_from_org",
+        json!({
+            "url": CONFIG.domain(),
+            "org_name": org_name,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_single_org_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/send_single_org_removed_from_org",
+        json!({
+            "url": CONFIG.domain(),
+            "org_name": org_name,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
 pub fn send_invite(
     address: &str,
     uuid: &str,
@@ -213,6 +239,136 @@ pub fn send_invite(
     send_email(address, &subject, body_html, body_text)
 }
+pub fn send_emergency_access_invite(
+    address: &str,
+    uuid: &str,
+    emer_id: Option<String>,
+    grantor_name: Option<String>,
+    grantor_email: Option<String>,
+) -> EmptyResult {
+    let claims = generate_emergency_access_invite_claims(
+        uuid.to_string(),
+        String::from(address),
+        emer_id.clone(),
+        grantor_name.clone(),
+        grantor_email,
+    );
+    let invite_token = encode_jwt(&claims);
+    let (subject, body_html, body_text) = get_text(
+        "email/send_emergency_access_invite",
+        json!({
+            "url": CONFIG.domain(),
+            "emer_id": emer_id.unwrap_or_else(|| "_".to_string()),
+            "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
+            "grantor_name": grantor_name,
+            "token": invite_token,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_invite_accepted(address: &str, grantee_email: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_accepted",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_email": grantee_email,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_invite_confirmed(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_confirmed",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_recovery_approved(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_approved",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_recovery_initiated(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    wait_time_days: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_initiated",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "wait_time_days": wait_time_days,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_recovery_reminder(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    days_left: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_reminder",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "days_left": days_left,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_recovery_rejected(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_rejected",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
+pub fn send_emergency_access_recovery_timed_out(address: &str, grantee_name: &str, atype: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_timed_out",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text)
+}
 pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
     let (subject, body_html, body_text) = get_text(
         "email/invite_accepted",
@@ -307,13 +463,13 @@ fn send_email(address: &str, subject: &str, body_html: String, body_text: String
     let html = SinglePart::builder()
         // We force Base64 encoding because in the past we had issues with different encodings.
         .header(header::ContentTransferEncoding::Base64)
-        .header(header::ContentType("text/html; charset=utf-8".parse()?))
+        .header(header::ContentType::TEXT_HTML)
         .body(body_html);
     let text = SinglePart::builder()
         // We force Base64 encoding because in the past we had issues with different encodings.
         .header(header::ContentTransferEncoding::Base64)
-        .header(header::ContentType("text/plain; charset=utf-8".parse()?))
+        .header(header::ContentType::TEXT_PLAIN)
         .body(body_text);
     let smtp_from = &CONFIG.smtp_from();
@@ -329,15 +485,28 @@ fn send_email(address: &str, subject: &str, body_html: String, body_text: String
         // Match some common errors and make them more user friendly
         Err(e) => {
             if e.is_client() {
-                err!(format!("SMTP Client error: {}", e));
+                debug!("SMTP Client error: {:#?}", e);
+                err!(format!("SMTP Client error: {}", e.to_string()));
             } else if e.is_transient() {
-                err!(format!("SMTP 4xx error: {:?}", e));
+                debug!("SMTP 4xx error: {:#?}", e);
+                err!(format!("SMTP 4xx error: {}", e.to_string()));
             } else if e.is_permanent() {
-                err!(format!("SMTP 5xx error: {:?}", e));
+                debug!("SMTP 5xx error: {:#?}", e);
+                let mut msg = e.to_string();
+                // Add a special check for 535 to add a more descriptive message
+                if msg.contains("(535)") {
+                    msg = format!("{} - Authentication credentials invalid", msg);
+                }
+                err!(format!("SMTP 5xx error: {}", msg));
             } else if e.is_timeout() {
-                err!(format!("SMTP timeout error: {:?}", e));
+                debug!("SMTP timeout error: {:#?}", e);
+                err!(format!("SMTP timeout error: {}", e.to_string()));
+            } else if e.is_tls() {
+                debug!("SMTP Encryption error: {:#?}", e);
+                err!(format!("SMTP Encryption error: {}", e.to_string()));
             } else {
-                Err(e.into())
+                debug!("SMTP {:#?}", e);
+                err!(format!("SMTP {}", e.to_string()));
             }
         }
     }
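
The reworked error arm in send_email reduces to a classification over lettre's smtp::Error predicates, with SMTP reply code 535 (rejected credentials) given a clearer message. The same mapping as a pure function, for illustration only:

    // Mirrors the match arms above: one user-facing string per error class.
    fn classify_smtp_error(e: &lettre::transport::smtp::Error) -> String {
        if e.is_client() {
            format!("SMTP Client error: {}", e)
        } else if e.is_transient() {
            format!("SMTP 4xx error: {}", e)
        } else if e.is_permanent() {
            let mut msg = e.to_string();
            // Reply code 535 means the server refused the credentials.
            if msg.contains("(535)") {
                msg = format!("{} - Authentication credentials invalid", msg);
            }
            format!("SMTP 5xx error: {}", msg)
        } else if e.is_timeout() {
            format!("SMTP timeout error: {}", e)
        } else if e.is_tls() {
            format!("SMTP Encryption error: {}", e)
        } else {
            format!("SMTP {}", e)
        }
    }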

View File

@@ -17,15 +17,7 @@ extern crate diesel;
 extern crate diesel_migrations;
 use job_scheduler::{Job, JobScheduler};
-use std::{
-    fs::create_dir_all,
-    panic,
-    path::Path,
-    process::{exit, Command},
-    str::FromStr,
-    thread,
-    time::Duration,
-};
+use std::{fs::create_dir_all, panic, path::Path, process::exit, str::FromStr, thread, time::Duration};
 #[macro_use]
 mod error;
@@ -53,13 +45,18 @@ fn main() {
     let extra_debug = matches!(level, LF::Trace | LF::Debug);
     check_data_folder();
-    check_rsa_keys();
+    check_rsa_keys().unwrap_or_else(|_| {
+        error!("Error creating keys, exiting...");
+        exit(1);
+    });
     check_web_vault();
     create_icon_cache_folder();
     let pool = create_db_pool();
     schedule_jobs(pool.clone());
+    crate::db::models::TwoFactor::migrate_u2f_to_webauthn(&pool.get().unwrap()).unwrap();
     launch_rocket(pool, extra_debug); // Blocks until program termination.
 }
@@ -100,7 +97,7 @@ fn launch_info() {
     println!("| This is an *unofficial* Bitwarden implementation, DO NOT use the |");
     println!("| official channels to report bugs/features, regardless of client. |");
     println!("| Send usage/configuration questions or feature requests to: |");
-    println!("| https://bitwardenrs.discourse.group/ |");
+    println!("| https://vaultwarden.discourse.group/ |");
     println!("| Report suspected bugs/issues in the software itself at: |");
     println!("| https://github.com/dani-garcia/vaultwarden/issues/new |");
     println!("\\--------------------------------------------------------------------/\n");
@@ -122,6 +119,9 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
         // Never show html5ever and hyper::proto logs, too noisy
         .level_for("html5ever", log::LevelFilter::Off)
         .level_for("hyper::proto", log::LevelFilter::Off)
+        .level_for("hyper::client", log::LevelFilter::Off)
+        // Prevent cookie_store logs
+        .level_for("cookie_store", log::LevelFilter::Off)
         .chain(std::io::stdout());
     // Enable smtp debug logging only specifically for smtp when need.
@@ -244,52 +244,29 @@ fn check_data_folder() {
     }
 }
-fn check_rsa_keys() {
+fn check_rsa_keys() -> Result<(), crate::error::Error> {
     // If the RSA keys don't exist, try to create them
-    if !util::file_exists(&CONFIG.private_rsa_key()) || !util::file_exists(&CONFIG.public_rsa_key()) {
-        info!("JWT keys don't exist, checking if OpenSSL is available...");
-        Command::new("openssl").arg("version").status().unwrap_or_else(|_| {
-            info!(
-                "Can't create keys because OpenSSL is not available, make sure it's installed and available on the PATH"
-            );
-            exit(1);
-        });
-        info!("OpenSSL detected, creating keys...");
-        let key = CONFIG.rsa_key_filename();
-        let pem = format!("{}.pem", key);
-        let priv_der = format!("{}.der", key);
-        let pub_der = format!("{}.pub.der", key);
-        let mut success = Command::new("openssl")
-            .args(&["genrsa", "-out", &pem])
-            .status()
-            .expect("Failed to create private pem file")
-            .success();
-        success &= Command::new("openssl")
-            .args(&["rsa", "-in", &pem, "-outform", "DER", "-out", &priv_der])
-            .status()
-            .expect("Failed to create private der file")
-            .success();
-        success &= Command::new("openssl")
-            .args(&["rsa", "-in", &priv_der, "-inform", "DER"])
-            .args(&["-RSAPublicKey_out", "-outform", "DER", "-out", &pub_der])
-            .status()
-            .expect("Failed to create public der file")
-            .success();
-        if success {
-            info!("Keys created correctly.");
-        } else {
-            error!("Error creating keys, exiting...");
-            exit(1);
-        }
-    }
+    let priv_path = CONFIG.private_rsa_key();
+    let pub_path = CONFIG.public_rsa_key();
+    if !util::file_exists(&priv_path) {
+        let rsa_key = openssl::rsa::Rsa::generate(2048)?;
+        let priv_key = rsa_key.private_key_to_pem()?;
+        crate::util::write_file(&priv_path, &priv_key)?;
+        info!("Private key created correctly.");
+    }
+    if !util::file_exists(&pub_path) {
+        let rsa_key = openssl::rsa::Rsa::private_key_from_pem(&util::read_file(&priv_path)?)?;
+        let pub_key = rsa_key.public_key_to_pem()?;
+        crate::util::write_file(&pub_path, &pub_key)?;
+        info!("Public key created correctly.");
+    }
+    auth::load_keys();
+    Ok(())
 }
 fn check_web_vault() {
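
check_rsa_keys now generates the JWT signing keys in-process through the openssl crate instead of shelling out to the openssl binary, so a missing binary can no longer abort startup, and failures surface as a Result the caller handles. The generation step in isolation, using only calls that appear above (the helper name is illustrative):

    // Generate a 2048-bit RSA pair and return both halves as PEM bytes.
    fn generate_rsa_pair() -> Result<(Vec<u8>, Vec<u8>), openssl::error::ErrorStack> {
        let rsa_key = openssl::rsa::Rsa::generate(2048)?;
        Ok((rsa_key.private_key_to_pem()?, rsa_key.public_key_to_pem()?))
    }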
@@ -368,11 +345,32 @@ fn schedule_jobs(pool: db::DbPool) {
         }));
     }
+    // Grant emergency access requests that have met the required wait time.
+    // This job should run before the emergency access reminders job to avoid
+    // sending reminders for requests that are about to be granted anyway.
+    if !CONFIG.emergency_request_timeout_schedule().is_empty() {
+        sched.add(Job::new(CONFIG.emergency_request_timeout_schedule().parse().unwrap(), || {
+            api::emergency_request_timeout_job(pool.clone());
+        }));
+    }
+    // Send reminders to emergency access grantors that there are pending
+    // emergency access requests.
+    if !CONFIG.emergency_notification_reminder_schedule().is_empty() {
+        sched.add(Job::new(CONFIG.emergency_notification_reminder_schedule().parse().unwrap(), || {
+            api::emergency_notification_reminder_job(pool.clone());
+        }));
+    }
     // Periodically check for jobs to run. We probably won't need any
     // jobs that run more often than once a minute, so a default poll
     // interval of 30 seconds should be sufficient. Users who want to
     // schedule jobs to run more frequently for some reason can reduce
     // the poll interval accordingly.
+    //
+    // Note that the scheduler checks jobs in the order in which they
+    // were added, so if two jobs are both eligible to run at a given
+    // tick, the one that was added earlier will run first.
     loop {
         sched.tick();
         thread::sleep(Duration::from_millis(CONFIG.job_poll_interval_ms()));
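
Both new jobs follow the existing pattern: a cron-style schedule string from the config is parsed into a Schedule and registered with a closure over the pool, and registration order matters because same-tick jobs run in insertion order. A minimal sketch with a hard-coded schedule (the schedule string and job body are placeholders):

    use job_scheduler::{Job, JobScheduler};
    use std::{thread, time::Duration};

    fn demo_scheduler() {
        let mut sched = JobScheduler::new();
        // Fields: sec min hour day-of-month month day-of-week;
        // "0 0 3 * * *" fires daily at 03:00:00.
        sched.add(Job::new("0 0 3 * * *".parse().unwrap(), || {
            // e.g. api::emergency_request_timeout_job(pool.clone());
        }));
        loop {
            sched.tick();
            thread::sleep(Duration::from_millis(30_000));
        }
    }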

View File

@@ -198,7 +198,9 @@
     "amazon.in",
     "amazon.it",
     "amazon.nl",
+    "amazon.pl",
     "amazon.sa",
+    "amazon.se",
     "amazon.sg"
 ],
 "Excluded": false

Binary file changed and not shown (image: 9.2 KiB before, 6.9 KiB after).

Some files were not shown because too many files have changed in this diff.