Compare commits

...

208 Commits

Author SHA1 Message Date
Daniel García
0c1f0bad17 Merge branch 'BlackDex-update-rust-version-dockerfile' into main 2022-05-21 19:17:29 +02:00
Daniel García
72cf59fa54 Merge branch 'update-rust-version-dockerfile' of https://github.com/BlackDex/vaultwarden into BlackDex-update-rust-version-dockerfile 2022-05-21 19:17:24 +02:00
Daniel García
527bc1f625 Merge branch 'BlackDex-fix-upload-limits-and-logging' into main 2022-05-21 19:17:15 +02:00
BlackDex
2168d09421 Update Rust version in Dockerfile
Updated Rust from v1.60 to v1.61 for building the images.
Also pinned the Rust version for the Alpine build images to prevent
those images from being built with a newer version when released.
2022-05-21 17:46:14 +02:00
BlackDex
1c266031d7 Fix upload limits and disable color logs
The limits for uploading files were too small compared to the maximum
file size allowed by the Bitwarden clients, including the web-vault.
Changed both `data-form` (used for Send) and `file` (used for
attachments) to 525 MB, the same size we already check ourselves.

Also changed the `json` limit to 20 MB; this should allow very large
imports of 4000/5000+ items, depending on whether there are large notes.

Also disabled Rocket's colored output, since those colors were also
sent to the log files and syslog. This seems to have changed somewhere in
Rocket 0.5rc and may need a closer look.
2022-05-21 17:28:29 +02:00
Daniel García
b636d20c64 Update web vault to v2.28.1 2022-05-11 22:19:22 +02:00
Daniel García
2a9ca88c2a Dependency updates 2022-05-11 22:03:07 +02:00
Daniel García
b9c434addb Merge branch 'jjlin-db-conn-init' into main 2022-05-11 21:36:11 +02:00
Daniel García
451ad47327 Merge branch 'db-conn-init' of https://github.com/jjlin/vaultwarden into jjlin-db-conn-init 2022-05-11 21:36:00 +02:00
Daniel García
7f61dd5fe3 Merge branch 'BlackDex-sql-optimizations' into main 2022-05-11 21:33:43 +02:00
BlackDex
3ca85028ea Improve sync speed and updated dep. versions
Improved sync speed by resolving the N+1 query issues.
Solves #1402 and #1453

With this change there is just one query done to retrieve all the
important data, and matching is done in code/in memory.

With a very large database the sync time went down by roughly a factor of three.

Also updated misc crates and GitHub Actions versions.
2022-05-06 17:01:02 +02:00
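A minimal sketch of the N+1 fix described above, with hypothetical stand-ins for Vaultwarden's models: fetch all related rows in one query, then match them in memory instead of issuing one query per item.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an attachment row; the real models differ.
struct Attachment {
    cipher_uuid: String,
    file_name: String,
}

// Group the result of a single bulk query by the owning cipher's UUID,
// so per-cipher lookups become in-memory work instead of extra database queries.
fn group_by_cipher(attachments: Vec<Attachment>) -> HashMap<String, Vec<Attachment>> {
    let mut by_cipher: HashMap<String, Vec<Attachment>> = HashMap::new();
    for att in attachments {
        by_cipher.entry(att.cipher_uuid.clone()).or_default().push(att);
    }
    by_cipher
}

fn main() {
    let attachments = vec![
        Attachment { cipher_uuid: "c1".into(), file_name: "a.txt".into() },
        Attachment { cipher_uuid: "c1".into(), file_name: "b.txt".into() },
    ];
    let by_cipher = group_by_cipher(attachments);
    // One O(1) lookup per cipher instead of an extra round trip to the database.
    println!("c1 has {} attachment(s), first: {}", by_cipher["c1"].len(), by_cipher["c1"][0].file_name);
}
```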
Jeremy Lin
542a73cc6e Switch to a single config option for database connection init
The main pro is less config options, while the main con is less clarity in
what the defaults are for the various database types.
2022-04-29 00:26:49 -07:00
Jeremy Lin
78d07e2fda Add default connection-scoped pragmas for SQLite
`PRAGMA busy_timeout = 5000` tells SQLite to keep trying for up to 5000 ms
when there is lock contention, rather than aborting immediately. This should
hopefully prevent the vast majority of "database is locked" panics observed
since the async transition.

`PRAGMA synchronous = NORMAL` trades better performance for a small potential
loss in durability (the default is `FULL`). The SQLite docs recommend `NORMAL`
as "a good choice for most applications running in WAL mode".
2022-04-26 17:55:19 -07:00
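A small sketch of the per-backend connection-init defaults described above, mirroring the documented values; the enum and function names are illustrative, not Vaultwarden's actual code.

```rust
// Illustrative backend enum; Vaultwarden's real configuration code differs.
enum DbBackend {
    Sqlite,
    Mysql,
    Postgresql,
}

// Default connection-init statements, run whenever a new connection is created.
fn default_conn_init(backend: &DbBackend) -> &'static str {
    match backend {
        // Keep retrying for up to 5000 ms on lock contention, and trade a little
        // durability for performance (NORMAL is recommended in WAL mode).
        DbBackend::Sqlite => "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;",
        DbBackend::Mysql => "",
        DbBackend::Postgresql => "",
    }
}

fn main() {
    println!("{}", default_conn_init(&DbBackend::Sqlite));
}
```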
Jeremy Lin
b617ffd2af Add support for database connection init statements
This is probably mainly useful for running connection-scoped pragma statements.
2022-04-26 17:50:20 -07:00
Daniel García
3abf173d89 Merge pull request #2433 from jjlin/meta-apis
Add `/api/{alive,now,version}` endpoints
2022-04-24 18:36:08 +02:00
Jeremy Lin
df8aeb10e8 Add /api/{alive,now,version} endpoints
The added endpoints work the same as in their upstream implementations.

Upstream also implements `/api/ip`. This seems to include the server's public
IP address (the one that should be hidden behind Cloudflare), which doesn't
seem like a great idea.
2022-04-23 23:47:49 -07:00
Daniel García
26ad06df7c Update web vault to 2.28.0 and dependencies 2022-04-23 18:18:15 +02:00
Daniel García
37fff3ef4a Merge pull request #2400 from jjlin/global-domains
Sync global_domains.json
2022-03-30 22:02:44 +02:00
Jeremy Lin
28c5e63bf5 Sync global_domains.json to bitwarden/server@3521ccb (Just Eat Takeaway.com) 2022-03-29 11:41:43 -07:00
Daniel García
a07c213b3e Merge pull request #2398 from BlackDex/remove-u2f
Remove u2f implementation
2022-03-27 18:43:09 +02:00
Daniel García
ed72741f48 Merge pull request #2397 from BlackDex/fix-mimalloc-build
Fix building mimalloc on armv6
2022-03-27 18:42:59 +02:00
BlackDex
fb0c23b71f Remove u2f implementation
For a while now WebAuthn has replaced u2f.
And since web-vault v2.27.0 the connector files for u2f have been removed.
Also, on the official Bitwarden server the `/two-factor/get-u2f` endpoint results in a 404.

- Removed all u2f code except the migration code from u2f to WebAuthn
2022-03-27 17:25:04 +02:00
BlackDex
d98f95f536 Fix building mimalloc on armv6
The armv6 builds need a specific location for the libatomic.a file.
This commit fixes that by adding a RUSTFLAGS argument for it.

Also removed the `link-arg=-s`, since this is now already done via the release profile,
and removed the CFLAGS for armv7; this is already fixed by default in the blackdex/rust-musl images.
2022-03-27 14:45:50 +02:00
Daniel García
6643e83b61 Disable mimalloc in arm for now 2022-03-26 20:11:46 +01:00
Daniel García
7b742009a1 Update web vault to 2.27.0 and dependencies 2022-03-26 16:35:54 +01:00
Daniel García
649e2b48f3 Merge branch 'Wonderfall-x-xss-protection' into main 2022-03-26 16:18:48 +01:00
Daniel García
81f0c2b0e8 Merge branch 'x-xss-protection' of https://github.com/Wonderfall/vaultwarden into Wonderfall-x-xss-protection 2022-03-26 16:18:34 +01:00
Daniel García
80d8aa7239 Merge branch 'BlackDex-misc-updates-202203' into main 2022-03-26 16:18:24 +01:00
Wonderfall
27d4b713f6 disable legacy X-XSS-Protection feature
Obsolete in every modern browser, unsafe, and replaced by CSP
2022-03-21 15:29:01 +01:00
BlackDex
b0faaf2527 Several updates and fixes
- Removed all `thread::sleep` calls and use `tokio::time::sleep` now.
  This solves an issue with updating to Bullseye ( Resolves #1998 )
- Updated all Debian images to Bullseye
- Added MiMalloc feature and enabled it by default for Alpine based images
  This increases performance for the Alpine images because the default
  memory allocator for MUSL based binaries isn't that fast
- Updated `dotenv` to `dotenvy`, a maintained and updated fork
- Fixed an issue with a newer jslib (not fully released yet)
  That version uses a different endpoint for `prelogin` ( Resolves #2378 )
2022-03-20 18:51:24 +01:00
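A minimal sketch of the blocking-vs-async sleep swap from the first bullet in the commit above, assuming the tokio crate with its macros and time features enabled:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // std::thread::sleep(Duration::from_millis(100)); // would block a whole runtime worker thread

    // tokio::time::sleep yields to the scheduler instead, so other tasks keep running.
    tokio::time::sleep(Duration::from_millis(100)).await;
    println!("slept without blocking the runtime");
}
```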
Daniel García
8d06d9c111 Merge pull request #2354 from BlackDex/multi-account-login
Update login API code and update crates to fix CVE
2022-03-13 15:46:49 +01:00
BlackDex
c4d565b15b Update login API code
- Updated jsonwebtoken to latest version
- Trim `username` received from the login form ( Fixes #2348 )
- Make uuid and user_uuid a combined primary key for the devices table ( Fixes #2295 )
- Updated crates including regex which contains a CVE ( https://blog.rust-lang.org/2022/03/08/cve-2022-24713.html )
2022-03-12 18:45:45 +01:00
Daniel García
06f8e69c70 Update web vault to 2.26.1 2022-02-27 22:21:36 +01:00
Daniel García
7db52374cd Merge branch 'BlackDex-async-updates' into async 2022-02-27 21:51:19 +01:00
Daniel García
843f205f6f Merge branch 'async-updates' of https://github.com/BlackDex/vaultwarden into BlackDex-async-updates 2022-02-27 21:50:33 +01:00
Daniel García
2ff51ae77e formatting 2022-02-27 21:37:24 +01:00
Daniel García
2b75d81a8b Ignore unused field 2022-02-27 21:37:24 +01:00
Daniel García
cad0dcbed1 await the mutex in db_run and use block_in_place for it's contents 2022-02-27 21:37:24 +01:00
BlackDex
19b8388950 Upd Dockerfiles, crates. Fixed rust 2018 idioms
- Updated crates
- Fixed Dockerfiles to build using the rust stable version
- Enabled warnings for rust 2018 idioms and fixed them.
2022-02-27 21:37:23 +01:00
BlackDex
87e08b9e50 Async/Awaited all db methods
This is a rather large PR which updates the async branch to have all the
database methods as an async fn.

Some iter/map logic needed to be changed to a stream::iter().then(), but
besides that most changes were just adding async/await where needed.
2022-02-27 21:37:23 +01:00
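A small sketch of the iter/map to stream::iter().then() rewrite mentioned above, assuming the futures and tokio crates; the async function here is just a placeholder for a now-async database method.

```rust
use futures::stream::{self, StreamExt};

// Placeholder for a database method that became an `async fn`.
async fn load_item(id: u32) -> String {
    format!("item-{id}")
}

#[tokio::main]
async fn main() {
    let ids = vec![1, 2, 3];

    // Previously this was a plain ids.iter().map(...).collect() with a sync function.
    // With an async fn, drive the calls through a stream instead:
    let items: Vec<String> = stream::iter(ids).then(load_item).collect().await;

    assert_eq!(items, vec!["item-1", "item-2", "item-3"]);
}
```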
Daniel García
0b7d6bf6df Update to rocket 0.5 and made code async, missing updating all db calls, that are currently blocking 2022-02-27 21:36:31 +01:00
Daniel García
89fe05b6cc Merge branch 'taylorwmj-main' into main 2022-02-27 21:22:24 +01:00
Daniel García
d73d74e78f Merge branch 'main' of https://github.com/taylorwmj/vaultwarden into taylorwmj-main 2022-02-27 21:22:15 +01:00
Daniel García
9a682b7a45 Merge branch 'TinfoilSubmarine-custom-env-path' into main 2022-02-27 21:21:46 +01:00
Daniel García
94201ca133 Merge branch 'custom-env-path' of https://github.com/TinfoilSubmarine/vaultwarden into TinfoilSubmarine-custom-env-path 2022-02-27 21:21:38 +01:00
Daniel García
99f9e7252a Merge branch 'jaen-add-ip-to-send-unauthorized-message' into main 2022-02-27 21:19:21 +01:00
BlackDex
42136a7097 Favicon, SMTP and misc updates
Favicon:
- Replaced HTML tokenizer, much faster now.
- Caching the domain blacklist function.
- Almost all functions are async now.
- Fixed bug on minimizing data to parse
- Changed maximum icon download size to 5MB to match Bitwarden
- Added `apple-touch-icon.png` as a second fallback besides `favicon.ico`

SMTP:
- Deprecated SMTP_SSL and SMTP_EXPLICIT_TLS, replaced with SMTP_SECURITY

Misc:
- Fixed issue when `resolv.conf` contains errors and trust-dns panics (Fixes #2283)
- Updated JavaScript and CSS files for the admin interface
- Fixed an issue with the /admin interface which did not clear the login cookie correctly
- Prevent websocket notifications during org import, this caused a lot of traffic, and slowed down the import.
  This is also the same as Bitwarden which does not trigger this refresh via websockets.

Rust:
- Updated to use v1.59
- Use the new `strip` option and enabled it to strip `debuginfo`
- Enabled `lto` with `thin`
- Removed the strip RUN from the alpine armv7, this is now done automatically
2022-02-26 13:56:42 +01:00
taylorwmj
9bb4c38bf9 Added autofocus to pw field on admin login page 2022-02-22 20:44:29 -06:00
BlackDex
5f01db69ff Update async to prepare for main merge
- Changed nightly to stable in Dockerfile and Workflow
- Updated Dockerfile to use stable and updated ENV's
- Removed 0.0.0.0 as the default address; it now uses ROCKET_ADDRESS or the default
- Updated GitHub Workflow actions to the latest versions
- Updated Hadolint version
- Re-ordered the Cargo.toml file a bit and grouped libs which are linked
- Updated some libs
- Updated .dockerignore file
2022-02-22 20:00:33 +01:00
Joel Beckmeyer
c59a7f4a8c document ENV_FILE variable usage 2022-02-16 14:42:12 -05:00
Joel Beckmeyer
8295688bed Add support for custom .env file path 2022-02-16 09:25:37 -05:00
Tomek Mańko
9713a3a555 Add IP address to missing/invalid password message for Sends 2022-02-13 13:13:42 +01:00
Daniel García
d781981bbd formatting 2022-01-30 22:26:19 +01:00
Daniel García
5125fdb882 Ignore unused field 2022-01-30 22:26:19 +01:00
Daniel García
fd9693b961 await the mutex in db_run and use block_in_place for it's contents 2022-01-30 22:26:19 +01:00
BlackDex
f38926d666 Upd Dockerfiles, crates. Fixed rust 2018 idioms
- Updated crates
- Fixed Dockerfiles to build using the rust stable version
- Enabled warnings for rust 2018 idioms and fixed them.
2022-01-30 22:26:18 +01:00
BlackDex
775d07e9a0 Async/Awaited all db methods
This is a rather large PR which updates the async branch to have all the
database methods as an async fn.

Some iter/map logic needed to be changed to a stream::iter().then(), but
besides that most changes were just adding async/await where needed.
2022-01-30 22:26:18 +01:00
Daniel García
2d5f172e77 Update to rocket 0.5 and made code async, missing updating all db calls, that are currently blocking 2022-01-30 22:25:54 +01:00
Daniel García
08f0de7b46 Dependency updates 2022-01-30 22:24:42 +01:00
Daniel García
45122bed9e Update web vault to v2.25.1b 2022-01-30 21:33:13 +01:00
Daniel García
0876d4a5fd Merge pull request #2257 from jjlin/email-token
Increase length limit for email token generation
2022-01-29 00:05:38 +01:00
Jeremy Lin
7d552dbdc8 Increase length limit for email token generation
The current limit of 19 is an artifact of the implementation, which can be
easily rewritten in terms of a more general string generation function.
The new limit is 255 (max value of a `u8`); using a larger type would
probably be overkill.
2022-01-24 01:17:00 -08:00
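A minimal sketch of such a general numeric-token generator, assuming the rand crate; names here are illustrative, not the actual implementation.

```rust
use rand::Rng;

// Generate a numeric token of `token_size` digits; with a `u8` size the
// natural upper bound is 255 digits.
fn generate_email_token(token_size: u8) -> String {
    let mut rng = rand::thread_rng();
    (0..token_size)
        .map(|_| char::from(b'0' + rng.gen_range(0..10u8)))
        .collect()
}

fn main() {
    let token = generate_email_token(6);
    assert_eq!(token.len(), 6);
    println!("{token}");
}
```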
Daniel García
9a60eb04c2 Update web vault to 2.25.1 and rust base images to match rust-toolchain; the official rust images don't have nightly builds, so just use the latest 2022-01-24 00:15:25 +01:00
Daniel García
1b99da91fb Merge branch 'dscottboggs-fix/CVE-2022-21658' into main 2022-01-24 00:03:52 +01:00
Daniel García
a64a400c9c Merge branch 'fix/CVE-2022-21658' of https://github.com/dscottboggs/vaultwarden into dscottboggs-fix/CVE-2022-21658 2022-01-24 00:03:46 +01:00
D. Scott Boggs
85c0aa1619 Bump rust version to mitigate CVE-2022-21658 2022-01-23 17:51:36 -05:00
Daniel García
19e78e3509 Merge branch 'jjlin-api-key' into main 2022-01-23 23:50:42 +01:00
Daniel García
bf6330374c Merge branch 'api-key' of https://github.com/jjlin/vaultwarden into jjlin-api-key 2022-01-23 23:50:34 +01:00
Daniel García
e639d9063b Merge branch 'iamdoubz-iamdoubz-feature-to-permissions-policy-patch' into main 2022-01-23 23:44:17 +01:00
Daniel García
4a88e7ec78 Merge branch 'iamdoubz-feature-to-permissions-policy-patch' of https://github.com/iamdoubz/vaultwarden into iamdoubz-iamdoubz-feature-to-permissions-policy-patch 2022-01-23 23:44:08 +01:00
Daniel García
65dad5a9d1 Merge branch 'jjlin-icons' into main 2022-01-23 23:43:30 +01:00
Daniel García
ba9ad14fbb Merge branch 'icons' of https://github.com/jjlin/vaultwarden into jjlin-icons 2022-01-23 23:43:24 +01:00
Daniel García
62c7a4d491 Merge branch 'BlackDex-fix-emergency-invite-register' into main 2022-01-23 23:42:42 +01:00
Daniel García
14e3dcad8e Merge branch 'fix-emergency-invite-register' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-emergency-invite-register 2022-01-23 23:42:35 +01:00
Daniel García
f4a9645b54 Remove references to "bwrs" #2195
Squashed commit of the following:

commit 1bdf1c7954e0731c95703d10118f3874ab5155d3
Merge: 8ba6e61 7257251
Author: Daniel García <dani-garcia@users.noreply.github.com>
Date:   Sun Jan 23 23:40:17 2022 +0100

    Merge branch 'remove-bwrs' of https://github.com/RealOrangeOne/vaultwarden into RealOrangeOne-remove-bwrs

commit 7257251ecf
Author: Jake Howard <git@theorangeone.net>
Date:   Thu Jan 6 17:48:18 2022 +0000

    Use `or_else` to save potentially unnecessary function call

commit 40ae81dd3c
Author: Jake Howard <git@theorangeone.net>
Date:   Wed Jan 5 21:18:24 2022 +0000

    Move $BWRS_VERSION fallback into build.rs

commit 743ef74b30
Author: Jake Howard <git@theorangeone.net>
Date:   Sat Jan 1 23:08:27 2022 +0000

    Revert "Add feature to enable use of `Option::or` in const context"

    This reverts commit fe8e043b8a.

    We want to run on stable soon, where these features are not supported

commit a1f0da638c
Author: Jake Howard <git@theorangeone.net>
Date:   Sat Jan 1 13:04:47 2022 +0000

    Rename web vault version file

    https://github.com/dani-garcia/bw_web_builds/pull/58

commit fe8e043b8a
Author: Jake Howard <git@theorangeone.net>
Date:   Sat Jan 1 12:56:44 2022 +0000

    Add feature to enable use of `Option::or` in const context

commit 687435c8b2
Author: Jake Howard <git@theorangeone.net>
Date:   Sat Jan 1 12:27:28 2022 +0000

    Continue to allow using `$BWRS_VERSION`

commit 8e2f708e50
Author: Jake Howard <git@theorangeone.net>
Date:   Fri Dec 31 11:41:34 2021 +0000

    Remove references to "bwrs"

    The only remaining one is getting the version of the web vault, which requires coordinating with the web vault patching.
2022-01-23 23:40:59 +01:00
Jeremy Lin
8f7900759f Fix scope and refresh_token for API key logins
API key logins use a scope of `api`, not `api offline_access`. Since
`offline_access` is not requested, no `refresh_token` is returned either.
2022-01-21 23:10:15 -08:00
Jeremy Lin
69ee4a70b4 Add support for API keys
This is mainly useful for CLI-based login automation.
2022-01-21 23:10:11 -08:00
iamdoubz
e4e16ed50f Upgrade Feature-Policy to Permissions-Policy
Convert old, soon to be defunct, Feature-Policy with its replacement Permissions-Policy
2022-01-12 10:36:50 -06:00
Jeremy Lin
a16c656770 Add support for legacy HTTP 301/302 redirects for external icons
At least on Android, it seems the Bitwarden mobile client responds to
HTTP 307, but not to HTTP 308 for some reason.
2022-01-08 23:40:35 -08:00
BlackDex
76b7de15de Fix emergency access invites for new users
If a new user gets invited, it should be checked whether the user was invited via
emergency access; if so, allow that user to register.
2022-01-07 18:55:48 +01:00
Daniel García
8ba6e61fd5 Merge pull request #2197 from BlackDex/issue-2196
Fix issue with Bitwarden CLI.
2022-01-02 23:47:40 +01:00
Daniel García
a30a1c9703 Merge pull request #2194 from BlackDex/issue-2154
Fixed issue #2154
2022-01-02 23:46:32 +01:00
Daniel García
76687e2df7 Merge pull request #2188 from jjlin/icons
Add config option to set the HTTP redirect code for external icons
2022-01-02 23:46:23 +01:00
BlackDex
bf5aefd129 Fix issue with Bitwarden CLI.
The CLI seems to send a String instead of an Integer for the maximum access count.
It now accepts both types and converts it to an i32 in all cases.

Fixes #2196
2021-12-31 15:59:58 +01:00
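A small sketch of accepting either a JSON number or a numeric string for one field, assuming serde (with derive) and serde_json; the type name is illustrative.

```rust
use serde::Deserialize;

// Accepts both `7` and `"7"` from the client.
#[derive(Deserialize)]
#[serde(untagged)]
enum NumberOrString {
    Number(i32),
    String(String),
}

impl NumberOrString {
    fn into_i32(self) -> Option<i32> {
        match self {
            NumberOrString::Number(n) => Some(n),
            NumberOrString::String(s) => s.parse().ok(),
        }
    }
}

fn main() {
    let from_int: NumberOrString = serde_json::from_str("7").unwrap();
    let from_str: NumberOrString = serde_json::from_str("\"7\"").unwrap();
    assert_eq!(from_int.into_i32(), Some(7));
    assert_eq!(from_str.into_i32(), Some(7));
}
```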
BlackDex
1fa178d1d3 Fixed issue #2154
For emergency access invitations we need to check if invites are
allowed, not if sign-ups are allowed.
2021-12-31 11:53:21 +01:00
Jeremy Lin
b7eedbcddc Add config option to set the HTTP redirect code for external icons
The default code is 307 (temporary) to make it easier to test different icon
services, but once a service has been decided on, users should ideally switch
to using permanent redirects for cacheability.
2021-12-30 23:06:52 -08:00
Daniel García
920371929b Merge pull request #2182 from RealOrangeOne/cache-expires-header
Set `Expires` header when caching responses
2021-12-30 01:38:39 +01:00
Jake Howard
6ddbe84bde Remove unnecessary return 2021-12-29 16:29:42 +00:00
Jake Howard
690d0ed1bb Add our own HTTP date formatter 2021-12-29 16:21:28 +00:00
Jake Howard
248e7dabc2 Collapse field name definition
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2021-12-28 21:54:09 +00:00
Jake Howard
4584cfe3c1 Additionally set expires header when caching responses
Browsers are rather smart, but also dumb. This uses the `Expires` header
alongside `cache-control` to better prompt the browser to actually
cache.

Unfortunately, Firefox still tries to "race" its own cache in an
attempt to respond to requests faster, so it still ends up making a bunch
of requests which could have been cached. There doesn't appear to be any way
around this.
2021-12-28 16:24:47 +00:00
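A minimal sketch of formatting an `Expires` value as an HTTP-date (RFC 7231 IMF-fixdate), assuming the chrono crate; the 10-minute offset is just an example.

```rust
use chrono::{Duration, Utc};

// Build an RFC 7231 IMF-fixdate string, e.g. "Tue, 28 Dec 2021 16:24:47 GMT".
fn expires_header(seconds_from_now: i64) -> String {
    let expires = Utc::now() + Duration::seconds(seconds_from_now);
    expires.format("%a, %d %b %Y %H:%M:%S GMT").to_string()
}

fn main() {
    println!("Expires: {}", expires_header(600)); // 10 minutes from now
}
```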
Daniel García
cc646b1519 Merge branch 'BlackDex-multi-db-dockers' into main 2021-12-27 21:55:36 +01:00
Daniel García
e501dc6d0e Merge branch 'multi-db-dockers' of https://github.com/BlackDex/vaultwarden into BlackDex-multi-db-dockers 2021-12-27 21:55:28 +01:00
Daniel García
85ac9783f0 Merge branch 'ratelimit' into main 2021-12-27 21:55:15 +01:00
BlackDex
5b430f22bc Support all DB's for Alpine and Debian
- Using my own rust-musl build containers we now support all database
types for both Debian and Alpine.
- Added new Alpine containers for armv6 and arm64/aarch64
- The Debian builds can also be done without the dpkg magic stuff, probably
due to some fixes in Rust regarding linking (or maybe OpenSSL or Diesel); in
any case, it works now without hacking dpkg and apt.
- Updated toolchain and crates
2021-12-26 21:59:28 +01:00
Daniel García
d4eb21c2d9 Better document the new rate limiting 2021-12-25 01:12:09 +01:00
Daniel García
6bf8a9d93d Merge pull request #2171 from jjlin/global-domains
Sync global_domains.json
2021-12-25 00:55:50 +01:00
Jeremy Lin
605419ae1b Sync global_domains.json to bitwarden/server@5a8f334 (TransferWise) 2021-12-24 13:25:16 -08:00
Daniel García
b89ffb2731 Merge pull request #2170 from BlackDex/issue-2136
Small changes to icon log messages.
2021-12-24 20:40:30 +01:00
Daniel García
bda123b1eb Merge pull request #2169 from BlackDex/issue-2151
Fixed #2151
2021-12-24 20:40:07 +01:00
BlackDex
2c94ea075c Small changes to icon log messages.
As requested in #2136, some small changes on the type of log messages
and wording used.

Resolves #2136
2021-12-24 18:24:25 +01:00
BlackDex
4bd8eae07e Fixed #2151 2021-12-24 17:59:12 +01:00
Daniel García
5529264c3f Basic ratelimit for user login (including 2FA) and admin login 2021-12-22 21:48:49 +01:00
Daniel García
0a5df06e77 Merge pull request #2158 from jjlin/icons
Add support for external icon services
2021-12-22 15:46:30 +01:00
Jeremy Lin
2f9ac61a4e Add support for external icon services
If an external icon service is configured, icon requests return an HTTP
redirect to the corresponding icon at the external service.

An external service may be useful for various reasons, such as if:

* The Vaultwarden instance has no external network connectivity.
* The Vaultwarden instance has trouble handling large bursts of icon requests.
* There are concerns that an attacker may probe the instance to try to detect
  whether icons for certain sites have been cached, which would suggest that
  the instance contains entries for those sites.
* The external icon service does a better job of providing icons than the
  built-in fetcher.
2021-12-20 01:34:31 -08:00
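A tiny sketch of expanding an icon-service URL template containing exactly one `{}` placeholder into a redirect target, as described in the config notes further down; the function name is illustrative.

```rust
// Replace the single `{}` placeholder with the requested domain.
fn external_icon_url(template: &str, domain: &str) -> String {
    template.replacen("{}", domain, 1)
}

fn main() {
    let url = external_icon_url("https://icon.example.com/domain/{}", "github.com");
    assert_eq!(url, "https://icon.example.com/domain/github.com");
    // The server would then answer the icon request with an HTTP redirect to `url`.
}
```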
Daniel García
840cf8740a Merge pull request #2156 from jjlin/global-domains
Sync global_domains.json
2021-12-19 17:39:45 +01:00
Jeremy Lin
d8869adf52 Sync global_domains.json to bitwarden/server@224bfb6 (Wells Fargo) 2021-12-18 16:19:05 -08:00
Jeremy Lin
a631fc0077 Sync global_domains.json to bitwarden/server@2f518fb (Ubisoft) 2021-12-18 16:19:05 -08:00
Daniel García
9e4d372213 Update web vault to 2.25.0 2021-12-13 00:02:13 +01:00
Daniel García
d0bf0ab237 Merge pull request #2125 from BlackDex/trust-dns
Enabled trust-dns and some updates.
2021-12-08 00:26:31 +01:00
BlackDex
e327583aa5 Enabled trust-dns and some updates.
- Enabled the trust-dns feature, which seems to help a bit when DNS is
causing long timeouts. Though in the blocking version it is less visible
than on the async branch.
- Updated crates
- Removed some redundant code
- Updated javascript/css libraries

Resolves #2118
Resolves #2119
2021-12-01 19:01:55 +01:00
Daniel García
ead2f02cbd Merge pull request #2084 from BlackDex/minimize-macro-recursion
Macro recursion decrease and other optimizations
2021-11-06 22:00:37 +01:00
BlackDex
c453528dc1 Macro recursion decrease and other optimizations
- Decreased `recursion_limit` from 512 to 87
  Mainly done by optimizing the config macros.
  This fixes an issue with rust-analyzer, which doesn't go beyond 128
- Removed the Regex for masking sensitive values and replaced it with a map()
  This is much faster than using a Regex.
- Refactored the get_support_json macros
- All items above also lowered the binary size and possibly compile time
- Removed `_conn: DbConn` from several functions; these caused unnecessary database connections for functions that didn't use it at all
- Decreased json response for `/plans`
- Updated libraries and where needed some code changes
  This also fixes some rare issues with SMTP https://github.com/lettre/lettre/issues/678
- Using Rust 2021 instead of 2018
- Updated rust nightly
2021-11-06 17:44:53 +01:00
Daniel García
6ae48aa8c2 Merge pull request #2080 from jjlin/fix-postgres-migration
Fix PostgreSQL migration
2021-11-01 14:33:47 +01:00
Daniel García
88643fd9d5 Merge pull request #2078 from jjlin/fix-ea-reject
Fix missing encrypted key after emergency access reject
2021-11-01 14:33:39 +01:00
Daniel García
73e0002219 Merge pull request #2073 from jjlin/fix-access-logic
Fix conflict resolution logic for `read_only` and `hide_passwords` flags
2021-11-01 14:33:29 +01:00
Jeremy Lin
c49ee47de0 Fix PostgreSQL migration
The PostgreSQL migration should have used `TIMESTAMP` rather than `DATETIME`.
2021-10-31 17:50:00 -07:00
Jeremy Lin
14408396bb Fix missing encrypted key after emergency access reject
Rejecting an emergency access request should transition the grantor/grantee
relationship back into the `Confirmed` state, and the grantor's encrypted key
should remain in escrow rather than being cleared, or else future emergency
access requests from that grantee will fail.
2021-10-31 02:14:18 -07:00
Jeremy Lin
6cbb724069 Fix conflict resolution logic for read_only and hide_passwords flags
For one of these flags to be in effect for a cipher, upstream requires all of
(rather than any of) the collections the cipher is in to have that flag set.

Also, some of the logic for loading access restrictions was wrong. I think
that only malicious clients that also had knowledge of the UUIDs of ciphers
they didn't have access to would have been able to take advantage of that.
2021-10-29 13:47:56 -07:00
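A small sketch of the "all of, rather than any of" rule described above; the struct is a stand-in for the per-collection access rows, not the real schema.

```rust
// Stand-in for a cipher's access flags in one collection.
struct CollectionAccess {
    read_only: bool,
    hide_passwords: bool,
}

// A flag is only in effect when every collection the cipher is in sets it.
fn effective_access(accesses: &[CollectionAccess]) -> (bool, bool) {
    let read_only = !accesses.is_empty() && accesses.iter().all(|a| a.read_only);
    let hide_passwords = !accesses.is_empty() && accesses.iter().all(|a| a.hide_passwords);
    (read_only, hide_passwords)
}

fn main() {
    let accesses = [
        CollectionAccess { read_only: true, hide_passwords: true },
        CollectionAccess { read_only: false, hide_passwords: true },
    ];
    // One collection grants write access, so the cipher is not read-only,
    // but every collection hides passwords, so that flag stays in effect.
    assert_eq!(effective_access(&accesses), (false, true));
}
```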
Daniel García
a2316ca091 Merge pull request #2067 from jjlin/incomplete-2fa
Add email notifications for incomplete 2FA logins
2021-10-28 18:00:03 +02:00
Jeremy Lin
c476e19796 Add email notifications for incomplete 2FA logins
An incomplete 2FA login is one where the correct master password was provided,
but the 2FA token or action required to complete the login was not provided
within the configured time limit. This potentially indicates that the user's
master password has been compromised, but the login was blocked by 2FA.

Be aware that the 2FA step can usually still be completed after the email
notification has already been sent out, which could be confusing. Therefore,
the incomplete 2FA time limit should be long enough that this situation would
be unlikely. This feature can also be disabled entirely if desired.
2021-10-28 00:19:43 -07:00
Daniel García
9f393cfd9d Formatting 2021-10-27 23:00:26 +02:00
Daniel García
450c4d4d97 Update web vault to 2.24.1 2021-10-27 22:46:12 +02:00
Daniel García
75e62abed0 Move database_max_conns 2021-10-24 22:22:28 +02:00
Daniel García
97f9eb1320 Update dependencies 2021-10-24 21:50:26 +02:00
Daniel García
53cc8a65af Add doc comments to the functions in Config, and remove some unneeded pubs 2021-10-23 20:47:05 +02:00
Daniel García
f94ac6ca61 Merge pull request #2044 from jjlin/emergency-access-cleanup
Emergency Access cleanup
2021-10-19 20:14:29 +02:00
Jeremy Lin
cee3fd5ba2 Emergency Access cleanup
This commit contains mostly superficial user-facing cleanup, to be followed up
with more extensive cleanup and fixes in the API implementation.
2021-10-19 02:22:44 -07:00
Daniel García
016fe2269e Update dependencies 2021-10-18 22:14:29 +02:00
Daniel García
03c0a5e405 Update web vault image to v2.23.0c 2021-10-18 22:06:35 +02:00
Daniel García
cbbed79036 Merge branch 'domdomegg-domdomegg/2fa-check-accepted' into main 2021-10-18 21:13:57 +02:00
Daniel García
4af81ec50e Merge branch 'domdomegg/2fa-check-accepted' of https://github.com/domdomegg/vaultwarden into domdomegg-domdomegg/2fa-check-accepted 2021-10-18 21:13:50 +02:00
Daniel García
a5ba67fef2 Merge branch 'BlackDex-alive-db-check' into main 2021-10-18 21:13:29 +02:00
Adam Jones
4cebe1fff4 cargo fmt 2021-10-09 15:42:06 +01:00
Adam Jones
a984dbbdf3 2FA org policy: do not enforce on invited (not accepted) users 2021-10-09 13:54:30 +01:00
BlackDex
881524bd54 Added DbConn to /alive healthcheck
During a small discussion on Matrix it seemed logical to have the /alive
endpoint also check if the database connection still works.

The reason for this was regarding a certificate which failed/expired
while vaultwarden and the database were still up-and-running, but
suddenly vaultwarden couldn't connect anymore.

With this `DbConn` added to `/alive`, it will be more accurate, because
if vaultwarden can't reach the database, it isn't alive.
2021-10-09 14:16:27 +02:00
Daniel García
44da9e6ca7 Merge branch 'BlackDex-update-openssl-amd64-alpine' into main 2021-10-08 22:29:19 +02:00
Daniel García
4c0c8f7432 Merge branch 'update-openssl-amd64-alpine' of https://github.com/BlackDex/vaultwarden into BlackDex-update-openssl-amd64-alpine 2021-10-08 22:29:13 +02:00
Daniel García
f67854c59c Merge branch 'BlackDex-mail-errors' into main 2021-10-08 22:28:54 +02:00
Daniel García
a1c1b9ab3b Merge branch 'mail-errors' of https://github.com/BlackDex/vaultwarden into BlackDex-mail-errors 2021-10-08 22:28:46 +02:00
Daniel García
395979e834 Merge branch 'domdomegg-domdomegg/single-organization-policy' into main 2021-10-08 22:27:31 +02:00
BlackDex
fce6cb5865 Update OpenSSL via an updated clux build image.
Recently the Let's Encrypt DST certificate expired.
Older versions of OpenSSL like v1.0.x have issues using this certificate.

Recently clux has updated his image to support OpenSSL v1.1.1[a-z].
This solves issues with those certificates.

This issue was discussed on Matrix.
2021-10-08 16:46:29 +02:00
BlackDex
338756550a Fix error reporting in admin and some small fixes
- Fixed a bug in JavaScript which caused no messages to be shown to the
user in case of an error sent by the server.
- Changed mail error handling for better error messages
- Changed user/org actions from `<a>` links to buttons; this should prevent
strange issues in case of JavaScript problems when the page does re-load.
- Added Alpine and Debian info for the running docker image

During the mail error testing I encountered a bug which caused lettre to
panic. This panic only happens on debug builds and not release builds,
so no need to update anything on that part. This bug is also already
fixed. See https://github.com/lettre/lettre/issues/678 and https://github.com/lettre/lettre/pull/679

Resolves #2021
Could also fix the issue reported in #2022; at least there is no longer a hash `#` in
the URL.
2021-10-08 00:01:24 +02:00
Adam Jones
d014eede9a feature: Support single organization policy
This adds back-end support for the [single organization policy](https://bitwarden.com/help/article/policies/#single-organization).
2021-10-02 19:30:19 +02:00
Daniel García
9930a0d752 Merge pull request #2001 from BlackDex/issue-1998
Revert Debian images back to Buster.
2021-09-27 19:59:59 +02:00
BlackDex
9928a5404b Revert Debian images back to Buster.
This fixes #1998: after some checking, it seems Bullseye has some
issues with the glibc sleep call, which returns a SIGILL.

The glibc on Buster doesn't seem to have this issue, so revert back for
now until a fix has been released.
2021-09-27 17:35:49 +02:00
Daniel García
a6e0ddcdf1 Merge branch 'domdomegg-domdomegg/support-no-data-org-policies' into main 2021-09-26 23:21:30 +02:00
Daniel García
acab70ed89 Merge branch 'domdomegg/support-no-data-org-policies' of https://github.com/domdomegg/vaultwarden into domdomegg-domdomegg/support-no-data-org-policies 2021-09-26 23:21:24 +02:00
Daniel García
c0d149060f Merge branch 'BlackDex-icon-download-update' into main 2021-09-26 23:20:55 +02:00
Daniel García
344f00d9c9 Merge branch 'icon-download-update' of https://github.com/BlackDex/vaultwarden into BlackDex-icon-download-update 2021-09-26 23:20:44 +02:00
Daniel García
b26afb970a Merge branch 'Makishima-patch-1' into main 2021-09-26 23:19:36 +02:00
Nikolay
34ed5ce4b3 Update README.md
Fixed 'Bitwarden clients' link, it should lead to https://bitwarden.com/download/ and not to https://bitwarden.com/#download/
2021-09-25 13:10:06 +03:00
BlackDex
9375d5b8c2 Updated icon downloading
- Unicode websites could break (www.post.japanpost.jp for example).
  The regex would fail because it was missing the unicode-perl feature.
- Be less verbose in logging with icon downloads
- Removed duplicate info/error messages
- Added err_silent! macro to help with the less verbose error/info messages.
2021-09-24 18:27:52 +02:00
Adam Jones
e3678b4b56 fix: Support no-data enterprise policies
Boolean-toggle enterprise policies (like 'Two-Step Login' and 'Personal Ownership') don't provide a data attribute in the new version of the web client. This updates the backend to expect these to be optional.

Web change introduced in https://github.com/bitwarden/web/pull/1147 which added 2cbe023a38/src/app/organizations/policies/base-policy.component.ts (L48-L50)
2021-09-24 17:20:44 +02:00
Daniel García
b4c95fb4ac Hide some warnings for unused struct fields 2021-09-22 21:39:31 +02:00
Daniel García
0bb33e04bb Update dependencies and set cargo resolver to version 2 ahead of 2021 edition 2021-09-22 20:26:48 +02:00
Daniel García
4d33e24099 Update web vault to 2.23.0 2021-09-22 20:26:17 +02:00
Daniel García
2cdce04662 Merge branch 'thelittlefireman-emergency_feature' into main 2021-09-19 23:54:28 +02:00
Daniel García
756d108f6a Merge branch 'emergency_feature' of https://github.com/thelittlefireman/bitwarden_rs into thelittlefireman-emergency_feature 2021-09-19 23:54:19 +02:00
thelittlefireman
ca20b3d80c [PATCH] Some fixes to the Emergency Access PR
- Changed the date of the migration folders to be from this date.
- Removed a lot of is_email_domain_allowed checks.
  This check only needs to be done during the invite itself; otherwise
everything else will fail, even if a user has an account created via the
/admin interface, which bypasses that specific check! Also, the check was
in the wrong place anyway, since it would only skip sending an e-mail
but would still have allowed a disallowed domain to be used while
e-mail sending was disabled. The remaining check always works, even if
sending e-mails is disabled.
- Added an extra allowed route during password/key-rotation change which
updates/checks the public-key afterwards.
- A small change with some `Some` and `None` orders.
- Changed the new invite object to only generate the UTC time once, since
otherwise the values could differ by a second, and we only
need to call it once.

by black.dex@gmail.com

Signed-off-by: thelittlefireman <thelittlefireman@users.noreply.github.com>
2021-09-17 01:25:47 +02:00
thelittlefireman
4ab9362971 Add Emergency contact feature
Signed-off-by: thelittlefireman <thelittlefireman@users.noreply.github.com>
2021-09-17 01:25:44 +02:00
Daniel García
4e8828e41a Merge branch 'BlackDex-admin-interface' into main 2021-09-16 21:36:31 +02:00
Daniel García
f8d1cfad2a Merge branch 'admin-interface' of https://github.com/BlackDex/vaultwarden into BlackDex-admin-interface 2021-09-16 21:36:25 +02:00
BlackDex
b0a411b733 Update some JS Libraries and fix small issues
- Updated JS Libraries
- Downgraded bootstrap.css to v5.0.2 which works with Bootstrap-Native.
- Fixed issue with settings being able to open/collapse on some systems.
- Added .js and .css to the exclude list for the end-of-file-fixer pre-commit
2021-09-18 19:49:44 +02:00
Daniel García
81741647f3 Merge branch 'BlackDex-bulk-org-actions' into main 2021-09-16 21:36:15 +02:00
BlackDex
f36bd72a7f Add Organization bulk actions support
For user management within the organization view you are able to select
multiple users to re-invite, confirm or delete them.

These actions were not working; this PR fixes that by adding support for
these endpoints. This will make it easier to confirm and delete multiple
users at once instead of having to do this one by one.
2021-09-18 14:22:14 +02:00
Mathijs van Veluw
8c10de3edd Merge pull request #1978 from benarmstead/main
Update dependencies
2021-09-16 19:28:05 +02:00
Ben Armstead
0ab10a7c43 Merge pull request #1 from benarmstead/update
Add updates in cargo.toml too
2021-09-16 16:02:50 +01:00
Ben Armstead
a1a5e00ff5 Update dependencies in Cargo.lock 2021-09-16 15:59:24 +01:00
Ben Armstead
8af4b593fa Update dependencies in cargo.toml 2021-09-16 15:58:49 +01:00
Ben Armstead
9bef2c120c Update dependencies 2021-09-16 14:21:06 +01:00
Daniel García
f7d99c43b5 Merge pull request #1973 from BlackDex/optimize-release
Optimize release workflow.
2021-09-13 20:39:48 +02:00
BlackDex
ca0fd7a31b Optimize release workflow.
- Split Debian and Alpine into separate build matrices
  This starts building both Debian and Alpine based images at the same time
- Make use of Docker BuildKit, which also improves speed.
- Use BuildKit caching for Rust Cargo across docker images.
  This prevents downloading the same crates multiple times.
- Use GitHub Actions services to start a docker registry; starting it
via the build script sometimes caused issues.
- Updated the Build workflow to use Ubuntu 20.04, which is closer to
the Bullseye Debian release regarding package versions.
2021-09-13 14:42:15 +02:00
Daniel García
9e1550af8e Merge branch 'BlackDex-issue-1963' into main 2021-09-09 20:30:38 +02:00
Daniel García
a99c9715f6 Merge branch 'issue-1963' of https://github.com/BlackDex/vaultwarden into BlackDex-issue-1963 2021-09-09 20:30:29 +02:00
Daniel García
1a888b5355 Merge branch 'jjlin-personal-ownership-imports' into main 2021-09-09 20:30:13 +02:00
BlackDex
10d5c7738a Fix issue when using uppercase chars in emails
When SMTP is disabled and new users are invited, either via the admin
interface or into an organization, with uppercase letters in their email
address, those users would not be able to register, since the checks
that were done are case-sensitive and never matched.

This PR fixes that issue by ensuring everything is lowercase.
Fixes #1963
2021-09-09 13:52:39 +02:00
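A tiny sketch of the case-insensitive matching implied above: normalize both sides before comparing; the function name is illustrative.

```rust
// Compare an invited address with a registering address case-insensitively.
fn invite_matches(invited: &str, registering: &str) -> bool {
    invited.trim().to_lowercase() == registering.trim().to_lowercase()
}

fn main() {
    assert!(invite_matches("User@Example.COM", "user@example.com"));
}
```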
Jeremy Lin
80f23e6d78 Enforce Personal Ownership policy on imports
Upstream PR: https://github.com/bitwarden/server/pull/1565
2021-09-08 23:26:15 -07:00
Daniel García
d5ed2ce6df Merge branch 'jjlin-webauthn-origin' into main 2021-09-06 17:17:04 +02:00
Daniel García
5e649f0d0d Merge branch 'webauthn-origin' of https://github.com/jjlin/vaultwarden into jjlin-webauthn-origin 2021-09-06 17:16:56 +02:00
Daniel García
612c0e9478 Merge branch 'jjlin-bullseye' into main 2021-09-06 17:16:36 +02:00
Daniel García
0d2b3bfb99 Merge pull request #1945 from BlackDex/github-actions-release
Build Docker Hub images via Github Actions
2021-09-08 20:54:41 +02:00
Daniel García
c934838ace Merge branch 'bullseye' of https://github.com/jjlin/vaultwarden into jjlin-bullseye 2021-09-06 17:16:28 +02:00
Jeremy Lin
4350e9d241 Update Debian base images to bullseye 2021-09-04 11:46:15 -07:00
Jeremy Lin
0cdc0cb147 Fix incorrect WebAuthn origin
This mainly affects users running Vaultwarden under a subpath.

Refs:

* https://github.com/kanidm/webauthn-rs/blob/b2cbb34/src/core.rs#L941-L948
* https://github.com/kanidm/webauthn-rs/blob/b2cbb34/src/core.rs#L316
* https://w3c.github.io/webauthn/#dictionary-client-data
2021-08-29 15:53:25 -07:00
BlackDex
20535065d7 Build Docker Hub images via Github Actions
Since Docker Hub stopped Autobuild, we need to switch to something else.
This will trigger building of images on GitHub Actions and push them
to Docker Hub.

You only need to add 3 secrets before you merge this PR to have it working directly.

- DOCKERHUB_USERNAME : The username of the account you are going to push the builds to
- DOCKERHUB_TOKEN : The token needed to login and push builds
- DOCKERHUB_REPO : The repo name in the following form `index.docker.io/<user>/<repo>`
  So for vaultwarden that would be `index.docker.io/vaultwarden/server`

Also some small modifications to the other workflows.
2021-08-28 17:29:13 +02:00
Daniel García
a23f4a704b Merge branch 'fabianthdev-fix/sends_notifications' into main 2021-08-22 22:17:00 +02:00
Daniel García
93f2f74767 Merge branch 'fix/sends_notifications' of https://github.com/fabianthdev/vaultwarden into fabianthdev-fix/sends_notifications 2021-08-22 22:16:50 +02:00
Daniel García
37ca202247 Merge branch 'mrckndt-fix-timezone-alpine-container' into main 2021-08-22 22:14:46 +02:00
Daniel García
37525b1e7e Merge branch 'fix-timezone-alpine-container' of https://github.com/mrckndt/vaultwarden into mrckndt-fix-timezone-alpine-container 2021-08-22 22:14:38 +02:00
Daniel García
d594b5a266 Merge branch 'jjlin-fix-attachment-sharing' into main 2021-08-22 22:14:14 +02:00
Daniel García
41add45e67 Merge branch 'fix-attachment-sharing' of https://github.com/jjlin/vaultwarden into jjlin-fix-attachment-sharing 2021-08-22 22:14:07 +02:00
Daniel García
08b168a0a1 Merge branch 'BlackDex-fix-1878' into main 2021-08-22 22:12:59 +02:00
Daniel García
978ef2bc8b Merge branch 'fix-1878' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-1878 2021-08-22 22:12:52 +02:00
BlackDex
881d1f4334 Fix wrong display of MFA email.
There was some wrong logic regarding the display of which email is
configured to be used for the email MFA. This is now fixed.

Resolves #1878
2021-08-19 09:25:34 +02:00
Jeremy Lin
56b4f46d7d Fix limitation on sharing ciphers with attachments
This check is several years old, so maybe there was a valid reason
for having it before, but it's not correct anymore.
2021-08-16 22:23:33 -07:00
Marco
f6bd8b3462 Adding tzdata to container
To be able to set a timezone inside a container with the env variable TZ,
the tzdata package is needed. Otherwise only UTC will be set.
2021-08-06 13:39:33 +02:00
Fabian Thies
1f0f64d961 Sort the imports in notifications.rs alphabetically 2021-08-04 16:56:43 +02:00
Fabian Thies
42ba817a4c Fix errors that occurred in the nightly build 2021-08-04 13:25:41 +02:00
Fabian Thies
dd98fe860b Send create, update and delete notifications for Sends in the correct format.
Add endpoints to get all sends or a specific send by its uuid.
2021-08-03 17:39:38 +02:00
Daniel García
1fe9f101be Merge branch 'jjlin-fix-org-attachment-uploads' into main 2021-07-25 19:08:44 +02:00
Daniel García
c68fbb41d2 Merge branch 'fix-org-attachment-uploads' of https://github.com/jjlin/vaultwarden into jjlin-fix-org-attachment-uploads 2021-07-25 19:08:38 +02:00
Jeremy Lin
91e80657e4 Fix error with adding file attachment from org vault view 2021-08-18 20:54:36 -07:00
Daniel García
2db30f918e Merge branch 'BlackDex-fix-sync-desktop-client' into main 2021-07-25 19:07:59 +02:00
Daniel García
cfceac3909 Merge branch 'fix-sync-desktop-client' of https://github.com/BlackDex/vaultwarden into BlackDex-fix-sync-desktop-client 2021-07-25 19:07:51 +02:00
BlackDex
58b046fd10 Fix syncing with Bitwarden Desktop v1.28.0
Syncing with the latest desktop client (v1.28.0) fails because it expects some JSON key/value pairs to be there.

This PR adds those key/value pairs.

Resolves #1924
2021-08-21 10:36:08 +02:00
Daniel García
227779256c Merge branch 'BlackDex-dep-update' into main 2021-07-25 19:07:04 +02:00
BlackDex
89b5f7c98d Dependency updates
Updated several dependencies and switched to a different TOTP library.

- Replaced oath with totp-lite
  oath hasn't been updated in a long while and some dependencies could not be updated any more
  It now also handles a leading 0 correctly, as the previous library returned an int instead of a str, which stripped a leading 0
- Updated rust to the current latest nightly (including build image)
- Updated bootstrap css and js
- Updated hadolint to latest version
- Updated default rust image from v1.53 to v1.54
- Updated new nightly build/clippy messages
2021-08-22 13:46:48 +02:00
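A one-function sketch of why the string-vs-int distinction above matters: a TOTP code must keep its leading zeros, which an integer representation drops.

```rust
// Zero-pad the raw TOTP value to the expected number of digits.
fn format_totp(code: u32, digits: usize) -> String {
    format!("{:0width$}", code, width = digits)
}

fn main() {
    // As an integer, 42 would be shown as "42" and fail validation of a 6-digit code.
    assert_eq!(format_totp(42, 6), "000042");
}
```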
138 changed files with 20005 additions and 12238 deletions

.dockerignore

@@ -3,13 +3,18 @@ target
# Data folder
data
# Misc
.env
.env.template
.gitattributes
.gitignore
rustfmt.toml
# IDE files
.vscode
.idea
.editorconfig
*.iml
# Documentation
@@ -19,9 +24,17 @@ data
*.yml
*.yaml
# Docker folders
# Docker
hooks
tools
Dockerfile
.dockerignore
docker/**
!docker/healthcheck.sh
!docker/start.sh
# Web vault
web-vault
web-vault
# Vaultwarden Resources
resources

.env.template

@@ -3,6 +3,11 @@
##
## Be aware that most of these settings will be overridden if they were changed
## in the admin interface. Those overrides are stored within DATA_FOLDER/config.json .
##
## By default, vaultwarden expects for this file to be named ".env" and located
## in the current working directory. If this is not the case, the environment
## variable ENV_FILE can be set to the location of this file prior to starting
## vaultwarden.
## Main data folder
# DATA_FOLDER=data
@@ -24,6 +29,15 @@
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Database connection initialization
## Allows SQL statements to be run whenever a new database connection is created.
## This is mainly useful for connection-scoped pragmas.
## If empty, a database-specific default is used:
## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
## - MySQL: ""
## - PostgreSQL: ""
# DATABASE_CONN_INIT=""
## Individual folders, these override %DATA_FOLDER%
# RSA_KEY_FILENAME=data/rsa_key
# ICON_CACHE_FOLDER=data/icon_cache
@@ -61,6 +75,10 @@
## To control this on a per-org basis instead, use the "Disable Send" org policy.
# SENDS_ALLOWED=true
## Controls whether users can enable emergency access to their accounts.
## This setting applies globally to all users.
# EMERGENCY_ACCESS_ALLOWED=true
## Job scheduler settings
##
## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
@@ -77,6 +95,18 @@
## Cron schedule of the job that checks for trashed items to delete permanently.
## Defaults to daily (5 minutes after midnight). Set blank to disable this job.
# TRASH_PURGE_SCHEDULE="0 5 0 * * *"
##
## Cron schedule of the job that checks for incomplete 2FA logins.
## Defaults to once every minute. Set blank to disable this job.
# INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
##
## Cron schedule of the job that sends expiration reminders to emergency access grantors.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
##
## Cron schedule of the job that grants emergency access requests that have met the required wait time.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"
## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true
@@ -113,10 +143,32 @@
## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
# DB_CONNECTION_RETRIES=15
## Icon service
## The predefined icon services are: internal, bitwarden, duckduckgo, google.
## To specify a custom icon service, set a URL template with exactly one instance of `{}`,
## which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
##
## `internal` refers to Vaultwarden's built-in icon fetching implementation.
## If an external service is set, an icon request to Vaultwarden will return an HTTP
## redirect to the corresponding icon at the external service. An external service may
## be useful if your Vaultwarden instance has no external network connectivity, or if
## you are concerned that someone may probe your instance to try to detect whether icons
## for certain sites have been cached.
# ICON_SERVICE=internal
## Icon redirect code
## The HTTP status code to use for redirects to an external icon service.
## The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
## Temporary redirects are useful while testing different icon services, but once a service
## has been decided on, consider using permanent redirects for cacheability. The legacy codes
## are currently better supported by the Bitwarden clients.
# ICON_REDIRECT_CODE=302
## Disable icon downloading
## Set to true to disable icon downloading, this would still serve icons from $ICON_CACHE_FOLDER,
## but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
## otherwise it will delete them and they won't be downloaded again.
## Set to true to disable icon downloading in the internal icon service.
## This still serves existing icons from $ICON_CACHE_FOLDER, without generating any external
## network requests. $ICON_CACHE_TTL must also be set to 0; otherwise, the existing icons
## will be deleted eventually, but won't be downloaded again.
# DISABLE_ICON_DOWNLOAD=false
## Icon download timeout
@@ -147,7 +199,7 @@
# EMAIL_EXPIRATION_TIME=600
## Email token size
## Number of digits in an email token (min: 6, max: 19).
## Number of digits in an email 2FA token (min: 6, max: 255).
## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
# EMAIL_TOKEN_SIZE=6
@@ -208,6 +260,13 @@
## This setting applies globally, so make sure to inform all users of any changes to this setting.
# TRASH_AUTO_DELETE_DAYS=
## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
## resulting in an email notification. An incomplete 2FA login is one where the correct
## master password was provided but the required 2FA step was not completed, which
## potentially indicates a master password compromise. Set to 0 to disable this check.
## This setting applies globally to all users.
# INCOMPLETE_2FA_TIME_LIMIT=3
## Controls the PBBKDF password iterations to apply on the server
## The change only applies when the password is changed
# PASSWORD_ITERATIONS=100000
@@ -231,6 +290,17 @@
## Multiple values must be separated with a whitespace.
# ALLOWED_IFRAME_ANCESTORS=
## Number of seconds, on average, between login requests from the same IP address before rate limiting kicks in.
# LOGIN_RATELIMIT_SECONDS=60
## Allow a burst of requests of up to this size, while maintaining the average indicated by `LOGIN_RATELIMIT_SECONDS`.
## Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2.
# LOGIN_RATELIMIT_MAX_BURST=10
## Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in.
# ADMIN_RATELIMIT_SECONDS=300
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
## Yubico (Yubikey) Settings
## Set your Client ID and Secret Key for Yubikey OTP
## You can generate it here: https://upgrade.yubico.com/getapikey/
@@ -275,9 +345,8 @@
# SMTP_HOST=smtp.domain.tld
# SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Vaultwarden
# SMTP_SECURITY=starttls # ("starttls", "force_tls", "off") Enable a secure connection. Default is "starttls" (Explicit - ports 587 or 25), "force_tls" (Implicit - port 465) or "off", no encryption (port 25)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default.
# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here.
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15

.github/workflows/build.yml

@@ -2,36 +2,23 @@ name: Build
on:
push:
paths-ignore:
- "*.md"
- "*.txt"
- ".dockerignore"
- ".env.template"
- ".gitattributes"
- ".gitignore"
- "azure-pipelines.yml"
- "docker/**"
- "hooks/**"
- "tools/**"
- ".github/FUNDING.yml"
- ".github/ISSUE_TEMPLATE/**"
- ".github/security-contact.gif"
paths:
- ".github/workflows/build.yml"
- "src/**"
- "migrations/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain"
pull_request:
# Ignore when there are only changes done too one of these paths
paths-ignore:
- "*.md"
- "*.txt"
- ".dockerignore"
- ".env.template"
- ".gitattributes"
- ".gitignore"
- "azure-pipelines.yml"
- "docker/**"
- "hooks/**"
- "tools/**"
- ".github/FUNDING.yml"
- ".github/ISSUE_TEMPLATE/**"
- ".github/security-contact.gif"
paths:
- ".github/workflows/build.yml"
- "src/**"
- "migrations/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain"
jobs:
build:
@@ -43,31 +30,23 @@ jobs:
fail-fast: false
matrix:
channel:
- nightly
# - stable
- stable
target-triple:
- x86_64-unknown-linux-gnu
# - x86_64-unknown-linux-musl
include:
- target-triple: x86_64-unknown-linux-gnu
host-triple: x86_64-unknown-linux-gnu
features: [sqlite,mysql,postgresql] # Remember to update the `cargo test` to match the amount of features
channel: nightly
os: ubuntu-18.04
features: [sqlite,mysql,postgresql,enable_mimalloc] # Remember to update the `cargo test` to match the amount of features
channel: stable
os: ubuntu-20.04
ext: ""
# - target-triple: x86_64-unknown-linux-gnu
# host-triple: x86_64-unknown-linux-gnu
# features: "sqlite,mysql,postgresql"
# channel: stable
# os: ubuntu-18.04
# ext: ""
name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
runs-on: ${{ matrix.os }}
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
# End Checkout the repo
@@ -86,13 +65,13 @@ jobs:
# Enable Rust Caching
- uses: Swatinem/rust-cache@v1
- uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
# End Enable Rust Caching
# Uses the rust-toolchain file to determine version
- name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
uses: actions-rs/toolchain@v1
uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
with:
profile: minimal
target: ${{ matrix.target-triple }}
@@ -103,28 +82,28 @@ jobs:
# Run cargo tests (In release mode to speed up future builds)
# First test all features together, afterwards test them separately.
- name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
# Test single features
# 0: sqlite
- name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[0] != '' }}
# 1: mysql
- name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
if: ${{ matrix.features[1] != '' }}
# 2: postgresql
- name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: test
args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
@@ -134,7 +113,7 @@ jobs:
# Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
- name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: clippy
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
@@ -143,7 +122,7 @@ jobs:
# Run cargo fmt
- name: '`cargo fmt`'
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: fmt
args: --all -- --check
@@ -152,7 +131,7 @@ jobs:
# Build the binary
- name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
uses: actions-rs/cargo@v1
uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # v1.0.3
with:
command: build
args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
@@ -161,21 +140,8 @@ jobs:
# Upload artifact to Github Actions
- name: Upload artifact
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8 # v3.1.0
with:
name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
# End Upload artifact to Github Actions
## This is not used at the moment
## We could start using this when we can build static binaries
# Upload to github actions release
# - name: Release
# uses: Shopify/upload-to-release@1
# if: startsWith(github.ref, 'refs/tags/')
# with:
# name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
# path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
# repo-token: ${{ secrets.GITHUB_TOKEN }}
# End Upload to github actions release
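The theme of this workflow diff is supply-chain hardening: every third-party action that was pinned to a floating tag such as `actions-rs/cargo@v1` is now pinned to a full commit SHA, with the original tag kept as a trailing comment. A hedged way to look up the commit behind a release tag before pinning it (any clone URL works; for annotated tags the extra `^{}` line points at the underlying commit):

$ git ls-remote --tags https://github.com/actions-rs/cargo 'v1.0.3*'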

View File

@@ -2,11 +2,10 @@ name: Hadolint
on:
push:
# Ignore when there are only changes done to one of these paths
paths:
- "docker/**"
pull_request:
# Ignore when there are only changes done to one of these paths
paths:
- "docker/**"
@@ -17,18 +16,18 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
# End Checkout the repo
# Download hadolint
# Download hadolint - https://github.com/hadolint/hadolint/releases
- name: Download hadolint
shell: bash
run: |
sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
sudo chmod +x /usr/local/bin/hadolint
env:
HADOLINT_VERSION: 2.5.0
HADOLINT_VERSION: 2.10.0
# End Download hadolint
# Test Dockerfiles
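Besides bumping the pinned hadolint release from 2.5.0 to 2.10.0, nothing changes in how the linter is invoked. For a quick local check before pushing, the same binary can be pointed at a generated Dockerfile by hand; a minimal sketch, assuming the amd64 variant lives at this path:

$ hadolint docker/amd64/Dockerfile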

.github/workflows/release.yml (vendored, new file, 119 lines)
View File

@@ -0,0 +1,119 @@
name: Release
on:
push:
paths:
- ".github/workflows/release.yml"
- "src/**"
- "migrations/**"
- "hooks/**"
- "docker/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain"
branches: # Only on paths above
- main
tags: # Always, regardless of paths above
- '*'
jobs:
# https://github.com/marketplace/actions/skip-duplicate-actions
# Some checks to determine if we need to continue with building a new docker image.
# We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
skip_check:
runs-on: ubuntu-latest
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- name: Skip Duplicates Actions
id: skip_check
uses: fkirc/skip-duplicate-actions@9d116fa7e55f295019cfab7e3ab72b478bcf7fdd # v4.0.0
with:
cancel_others: 'true'
# Only run this when not creating a tag
if: ${{ startsWith(github.ref, 'refs/heads/') }}
docker-build:
runs-on: ubuntu-latest
needs: skip_check
# Start a local docker registry to be used to generate multi-arch images.
services:
registry:
image: registry:2
ports:
- 5000:5000
env:
DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speed up building!
# DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
SOURCE_COMMIT: ${{ github.sha }}
SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
strategy:
matrix:
base_image: ["debian","alpine"]
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # v3.0.2
with:
fetch-depth: 0
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@49ed152c8eca782a232dede0303416e8f356c37b # v2.0.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Determine Docker Tag
- name: Init Variables
id: vars
shell: bash
run: |
# Check which main tag we are going to build determined by github.ref
if [[ "${{ github.ref }}" == refs/tags/* ]]; then
echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
echo "set-output name=DOCKER_TAG::testing"
echo "::set-output name=DOCKER_TAG::testing"
fi
# End Determine Docker Tag
- name: Build Debian based images
shell: bash
env:
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/build
if: ${{ matrix.base_image == 'debian' }}
- name: Push Debian based images
shell: bash
env:
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
run: |
./hooks/push
if: ${{ matrix.base_image == 'debian' }}
- name: Build Alpine based images
shell: bash
env:
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/build
if: ${{ matrix.base_image == 'alpine' }}
- name: Push Alpine based images
shell: bash
env:
DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
run: |
./hooks/push
if: ${{ matrix.base_image == 'alpine' }}
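A note on the Init Variables step above: in each branch the first `echo` has no `::` prefix, so it only prints the chosen tag to the job log, while the second line is interpreted by the runner as the `set-output` workflow command and is what later steps read back as `steps.vars.outputs.DOCKER_TAG`. A minimal illustration of the command form (the value here is just an example):

$ echo "::set-output name=DOCKER_TAG::testing"   # readable later as steps.vars.outputs.DOCKER_TAG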

View File

@@ -1,12 +1,13 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.0.1
rev: v4.2.0
hooks:
- id: check-yaml
- id: check-json
- id: check-toml
- id: end-of-file-fixer
exclude: "(.*js$|.*css$)"
- id: check-case-conflict
- id: check-merge-conflict
- id: detect-private-key
@@ -24,7 +25,7 @@ repos:
description: Test the package for errors.
entry: cargo test
language: system
args: ["--features", "sqlite,mysql,postgresql", "--"]
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--"]
types: [rust]
pass_filenames: false
- id: cargo-clippy
@@ -32,6 +33,6 @@ repos:
description: Lint Rust sources
entry: cargo clippy
language: system
args: ["--features", "sqlite,mysql,postgresql", "--", "-D", "warnings"]
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
types: [rust]
pass_filenames: false
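The pre-commit hooks now exercise the same feature set as CI by adding `enable_mimalloc` to the cargo test and clippy arguments. Running the full hook chain locally is a single command (assuming pre-commit is installed):

$ pre-commit run --all-files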

Cargo.lock (generated, 2577 lines changed)

File diff suppressed because it is too large.

View File

@@ -2,7 +2,9 @@
name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2018"
edition = "2021"
rust-version = "1.60"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
readme = "README.md"
@@ -11,6 +13,7 @@ publish = false
build = "build.rs"
[features]
# default = ["sqlite"]
# Empty to keep compatibility, prefer to set USE_SYSLOG=true
enable_syslog = []
mysql = ["diesel/mysql", "diesel_migrations/mysql"]
@@ -18,137 +21,142 @@ postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"]
# Enable to use a vendored and statically linked openssl
vendored_openssl = ["openssl/vendored"]
# Enable MiMalloc memory allocator to replace the default malloc
# This can improve performance for Alpine builds
enable_mimalloc = ["mimalloc"]
# Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support
unstable = []
[target."cfg(not(windows))".dependencies]
syslog = "4.0.1"
# Logging
syslog = "6.0.1" # Needs to be v4 until fern is updated
[dependencies]
# Web framework for nightly with a focus on ease-of-use, expressibility, and speed.
rocket = { version = "=0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "=0.5.0-dev"
# Logging
log = "0.4.17"
fern = { version = "0.6.1", features = ["syslog-6"] }
tracing = { version = "0.1.34", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
# HTTP client
reqwest = { version = "0.11.4", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies"] }
backtrace = "0.3.65" # Logging panics to logfile instead of stderr only
# Used for custom short lived cookie jar
cookie = "0.15.1"
cookie_store = "0.15.0"
bytes = "1.0.1"
url = "2.2.2"
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.1", default-features = false }
# multipart/form-data support
multipart = { version = "0.18.0", features = ["server"], default-features = false }
# Lazy initialization
once_cell = "1.10.0"
# WebSockets library
ws = { version = "0.11.0", package = "parity-ws" }
# Numerical libraries
num-traits = "0.2.15"
num-derive = "0.3.3"
# MessagePack library
rmpv = "0.4.7"
# Web framework
rocket = { version = "0.5.0-rc.2", features = ["tls", "json"], default-features = false }
# Concurrent hashmap implementation
chashmap = "2.2.2"
# WebSockets libraries
ws = { version = "0.11.1", package = "parity-ws" }
rmpv = "1.0.0" # MessagePack library
chashmap = "2.2.2" # Concurrent hashmap implementation
# Async futures
futures = "0.3.21"
tokio = { version = "1.18.2", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.126", features = ["derive"] }
serde_json = "1.0.64"
# Logging
log = "0.4.14"
fern = { version = "0.6.0", features = ["syslog-4"] }
serde = { version = "1.0.137", features = ["derive"] }
serde_json = "1.0.81"
# A safe, extensible ORM and Query builder
diesel = { version = "1.4.7", features = [ "chrono", "r2d2"] }
diesel = { version = "1.4.8", features = ["chrono", "r2d2"] }
diesel_migrations = "1.4.0"
# Bundled SQLite
libsqlite3-sys = { version = "0.22.2", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.8.4"
rand = "0.8.5"
ring = "0.16.20"
# UUID generation
uuid = { version = "0.8.2", features = ["v4"] }
uuid = { version = "1.0.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.19", features = ["serde"] }
chrono-tz = "0.5.3"
time = "0.2.27"
chrono = { version = "0.4.19", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.6.1"
time = "0.3.9"
# Job scheduler
job_scheduler = "1.2.1"
# TOTP library
oath = "0.10.2"
# Data encoding library
# Data encoding library Hex/Base32/Base64
data-encoding = "2.3.2"
# JWT library
jsonwebtoken = "7.2.0"
jsonwebtoken = "8.1.0"
# U2F library
u2f = "0.2.0"
webauthn-rs = "=0.3.0-alpha.9"
# TOTP library
totp-lite = "1.0.3"
# Yubico Library
yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
# A `dotenv` implementation for Rust
dotenv = { version = "0.15.0", default-features = false }
# WebAuthn libraries
webauthn-rs = "0.3.2"
# Lazy initialization
once_cell = "1.8.0"
# Numerical libraries
num-traits = "0.2.14"
num-derive = "0.3.3"
# Handling of URL's for WebAuthn
url = "2.2.2"
# Email libraries
tracing = { version = "0.1.26", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
lettre = { version = "0.10.0-rc.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
idna = "0.2.3" # Punycode conversion
lettre = { version = "0.10.0-rc.6", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
percent-encoding = "2.1.0" # URL encoding library used for URL's in the emails
# Template library
handlebars = { version = "4.1.0", features = ["dir_source"] }
handlebars = { version = "4.2.2", features = ["dir_source"] }
# HTTP client
reqwest = { version = "0.11.10", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
# For favicon extraction from main website
html5ever = "0.25.1"
markup5ever_rcdom = "0.1.0"
regex = { version = "1.5.4", features = ["std", "perf"], default-features = false }
data-url = "0.1.0"
html5gum = "0.4.0"
regex = { version = "1.5.5", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.1.1"
bytes = "1.1.0"
cached = "0.34.0"
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.0"
cookie_store = "0.16.0"
# Used by U2F, JWT and Postgres
openssl = "0.10.35"
# URL encoding library
percent-encoding = "2.1.0"
# Punycode conversion
idna = "0.2.3"
openssl = "0.10.40"
# CLI argument parsing
pico-args = "0.4.2"
# Logging panics to logfile instead stderr only
backtrace = "0.3.60"
# Macro ident concatenation
paste = "1.0.5"
paste = "1.0.7"
governor = "0.4.2"
# Capture CTRL+C
ctrlc = { version = "3.2.2", features = ["termination"] }
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.29", features = ["secure"], default-features = false, optional = true }
[patch.crates-io]
# Use newest ring
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
# For favicon extraction from main website
data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = 'eb7330b5296c0d43816d1346211b74182bb4ae37' }
# The maintainer of the `job_scheduler` crate doesn't seem to have responded
# to any issues or PRs for almost a year (as of April 2021). This hopefully
# temporary fork updates Cargo.toml to use more up-to-date dependencies.
# In particular, `cron` has since implemented parsing of some common syntax
# that wasn't previously supported (https://github.com/zslayton/cron/pull/64).
job_scheduler = { git = 'https://github.com/jjlin/job_scheduler', rev = 'ee023418dbba2bfe1e30a5fd7d937f9e33739806' }
# 2022-05-04: Forked/Updated the job_scheduler again to use the latest dependencies and some fixes.
job_scheduler = { git = 'https://github.com/BlackDex/job_scheduler', rev = '9100fc596a083fd9c0b560f8f11f108e0a19d07e' }
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations
[profile.release]
strip = "debuginfo"
lto = "thin"
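The features section above is purely additive: each database backend, `vendored_openssl`, and the new `enable_mimalloc` allocator are opt-in, so a local build has to name what it wants. A sketch of a full-featured release build matching what CI and the Docker images exercise (pick only the backends you actually need):

$ cargo build --release --features sqlite,mysql,postgresql,enable_mimalloc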

View File

@@ -1,4 +1,4 @@
### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.

View File

@@ -1,2 +0,0 @@
[global.limits]
json = 10485760 # 10 MiB

View File

@@ -15,11 +15,14 @@ fn main() {
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
if let Ok(version) = env::var("BWRS_VERSION") {
println!("cargo:rustc-env=BWRS_VERSION={}", version);
// Support $BWRS_VERSION for legacy compatibility, but default to $VW_VERSION.
// If neither exist, read from git.
let maybe_vaultwarden_version =
env::var("VW_VERSION").or_else(|_| env::var("BWRS_VERSION")).or_else(|_| version_from_git_info());
if let Ok(version) = maybe_vaultwarden_version {
println!("cargo:rustc-env=VW_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
} else {
read_git_info().ok();
}
}
@@ -33,7 +36,13 @@ fn run(args: &[&str]) -> Result<String, std::io::Error> {
}
/// This method reads info from Git, namely tags, branch, and revision
fn read_git_info() -> Result<(), std::io::Error> {
/// To access these values, use:
/// - env!("GIT_EXACT_TAG")
/// - env!("GIT_LAST_TAG")
/// - env!("GIT_BRANCH")
/// - env!("GIT_REV")
/// - env!("VW_VERSION")
fn version_from_git_info() -> Result<String, std::io::Error> {
// The exact tag for the current commit; this can be empty when
// the current commit doesn't have an associated tag
let exact_tag = run(&["git", "describe", "--abbrev=0", "--tags", "--exact-match"]).ok();
@@ -56,23 +65,11 @@ fn read_git_info() -> Result<(), std::io::Error> {
println!("cargo:rustc-env=GIT_REV={}", rev_short);
// Combined version
let version = if let Some(exact) = exact_tag {
exact
if let Some(exact) = exact_tag {
Ok(exact)
} else if &branch != "main" && &branch != "master" {
format!("{}-{} ({})", last_tag, rev_short, branch)
Ok(format!("{}-{} ({})", last_tag, rev_short, branch))
} else {
format!("{}-{}", last_tag, rev_short)
};
println!("cargo:rustc-env=BWRS_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
// To access these values, use:
// env!("GIT_EXACT_TAG")
// env!("GIT_LAST_TAG")
// env!("GIT_BRANCH")
// env!("GIT_REV")
// env!("BWRS_VERSION")
Ok(())
Ok(format!("{}-{}", last_tag, rev_short))
}
}
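The rewritten build script prefers `$VW_VERSION`, falls back to `$BWRS_VERSION` for legacy setups, and only shells out to git when neither is set, exporting the result as the `VW_VERSION` and `CARGO_PKG_VERSION` compile-time env vars. That makes it possible to stamp a version when building outside a git checkout; a hedged example with an illustrative version string:

$ VW_VERSION=1.25.0 cargo build --release --features sqlite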

View File

@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1
# The cross-built images have the build arch (`amd64`) embedded in the image
# manifest, rather than the target arch. For example:
#

View File

@@ -1,31 +1,41 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.53" %}
{% set build_stage_base_image = "rust:1.61-bullseye" %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2021-06-24" %}
{% set runtime_stage_base_image = "alpine:3.14" %}
{% set build_stage_base_image = "blackdex/rust-musl:x86_64-musl-stable-1.61.0" %}
{% set runtime_stage_base_image = "alpine:3.15" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.14" %}
{% set build_stage_base_image = "blackdex/rust-musl:armv7-musleabihf-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.15" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% elif "armv6" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:arm-musleabi-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/rpi-alpine:3.15" %}
{% set package_arch_target = "arm-unknown-linux-musleabi" %}
{% elif "arm64" in target_file %}
{% set build_stage_base_image = "blackdex/rust-musl:aarch64-musl-stable-1.61.0" %}
{% set runtime_stage_base_image = "balenalib/aarch64-alpine:3.15" %}
{% set package_arch_target = "aarch64-unknown-linux-musl" %}
{% endif %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %}
{% set runtime_stage_base_image = "debian:bullseye-slim" %}
{% elif "arm64" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:bullseye" %}
{% set package_arch_name = "arm64" %}
{% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:bullseye" %}
{% set package_arch_name = "armel" %}
{% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:bullseye" %}
{% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
{% set package_cross_compiler = "arm-linux-gnueabihf" %}
@@ -40,12 +50,17 @@
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
{% if "buildx" in target_file %}
{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
{% else %}
{% set mount_rust_cache = "" %}
{% endif %}
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_version = "2.21.1" %}
{% set vault_image_digest = "sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5" %}
{% set vault_version = "2.28.1" %}
{% set vault_image_digest = "sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
@@ -68,66 +83,68 @@ FROM vaultwarden/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
{% set features = "sqlite,postgresql" %}
{% else %}
# Alpine-based ARM (musl) only supports sqlite during compile time.
# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
ARG DB=sqlite,vendored_openssl
{% set features = "sqlite" %}
{% endif %}
{% else %}
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
{% set features = "sqlite,mysql,postgresql" %}
{% endif %}
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
{# {% if "alpine" not in target_file and "buildx" in target_file %}
# Debian based Buildx builds can use some special apt caching to speed up building.
# By default Debian based images have some rules to keep docker builds clean, we need to remove this.
# See: https://hub.docker.com/r/docker/dockerfile
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
{% endif %} #}
# Create CARGO_HOME folder and don't download rust docs
RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
{% if "alpine" in target_file %}
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
{% if "armv7" in target_file %}
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
{% if "armv6" in target_file %}
# To be able to build the armv6 image with mimalloc we need to explicitly specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/{{ package_arch_target }}/lib/libatomic.a'
{% endif %}
{% elif "arm" in target_file %}
#
# Install required build libs for {{ package_arch_name }} architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture {{ package_arch_name }} \
# hadolint ignore=DL3059
RUN dpkg --add-architecture {{ package_arch_name }} \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev{{ package_arch_prefix }} \
libc6-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libpq-dev \
libpq-dev{{ package_arch_prefix }} \
libmariadb3{{ package_arch_prefix }} \
libmariadb-dev{{ package_arch_prefix }} \
libmariadb-dev-compat{{ package_arch_prefix }} \
gcc-{{ package_cross_compiler }} \
&& mkdir -p ~/.cargo \
&& echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
&& echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
#
# Make sure cargo has the right target config
&& echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "{{ package_cross_compiler }}-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> "${CARGO_HOME}/config"
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif -%}
# Set arm specific environment values
ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}" \
OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% if "amd64" in target_file and "alpine" not in target_file %}
{% elif "amd64" in target_file %}
# Install DB packages
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev{{ package_arch_prefix }} \
libpq-dev{{ package_arch_prefix }} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
@@ -140,37 +157,22 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
{% if "alpine" not in target_file %}
{% if "arm" in target_file %}
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% endif -%}
{% endif %}
{% if package_arch_target is defined %}
RUN rustup target add {{ package_arch_target }}
RUN {{ mount_rust_cache -}} rustup target add {{ package_arch_target }}
{% endif %}
# Configure the DB ARG as late as possible to not invalidate the cached layers above
{% if "alpine" in target_file %}
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% else %}
ARG DB=sqlite,mysql,postgresql
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release{{ package_arch_target_param }} \
RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} \
&& find . -not -path "./target*" -delete
# Copies the complete project
@@ -182,26 +184,22 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
{% if "armv7" in target_file %}
# hadolint ignore=DL3059
RUN musl-strip target/{{ package_arch_target }}/release/vaultwarden
{% endif %}
{% endif %}
RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM {{ runtime_stage_base_image }}
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
{% if "alpine" in runtime_stage_base_image %}
ENV SSL_CERT_DIR=/etc/ssl/certs
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
{%- if "alpine" in runtime_stage_base_image %} \
SSL_CERT_DIR=/etc/ssl/certs
{% endif %}
{% if "amd64" not in target_file %}
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
@@ -212,14 +210,9 @@ RUN mkdir /data \
{% if "alpine" in runtime_stage_base_image %}
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
{% if "mysql" in features %}
mariadb-connector-c \
{% endif %}
{% if "postgresql" in features %}
postgresql-libs \
{% endif %}
ca-certificates
{% else %}
&& apt-get update && apt-get install -y \
@@ -230,6 +223,7 @@ RUN mkdir /data \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
@@ -245,7 +239,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
{% if package_arch_target is defined %}
COPY --from=build /app/target/{{ package_arch_target }}/release/vaultwarden .
@@ -259,5 +252,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -7,3 +7,9 @@ all: $(OBJECTS)
%/Dockerfile.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
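These Makefile rules back the instruction repeated at the top of every generated Dockerfile: edit `Dockerfile.j2`, then regenerate the per-arch files, including the new `Dockerfile.buildx` variants. A hedged invocation, assuming the Makefile lives in the docker/ directory next to the template:

$ make -C docker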

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,33 +16,41 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.53 as build
FROM rust:1.61-bullseye as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
@@ -53,6 +63,9 @@ COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -68,16 +81,17 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
FROM debian:bullseye-slim
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# Create data folder and Install needed libraries
@@ -90,6 +104,7 @@ RUN mkdir /data \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
@@ -100,7 +115,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .
@@ -110,5 +124,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,30 +16,34 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM clux/muslrust:nightly-2021-06-24 as build
FROM blackdex/rust-musl:x86_64-musl-stable-1.61.0 as build
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -50,6 +56,10 @@ COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -65,26 +75,28 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.14
FROM alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
postgresql-libs \
ca-certificates
@@ -95,7 +107,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
@@ -105,5 +116,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,132 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Install DB packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:bullseye-slim
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
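What distinguishes this buildx variant is the `--mount=type=cache` flags on the cargo steps: the registry and git caches under /root/.cargo survive between builds instead of being re-downloaded, which requires BuildKit. A hedged build command, assuming the file is checked in as docker/amd64/Dockerfile.buildx:

$ DOCKER_BUILDKIT=1 docker build -f docker/amd64/Dockerfile.buildx -t vaultwarden:local .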

View File

@@ -0,0 +1,124 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:x86_64-musl-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add x86_64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,50 +16,61 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.53 as build
FROM rust:1.61-bullseye as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
# hadolint ignore=DL3059
RUN dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -68,27 +81,11 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -104,16 +101,17 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster
FROM balenalib/aarch64-debian:bullseye
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
@@ -128,6 +126,7 @@ RUN mkdir /data \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
@@ -140,7 +139,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
@@ -150,5 +148,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,128 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add aarch64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,156 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for arm64 architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev:arm64 \
libmariadb3:arm64 \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64 \
gcc-aarch64-linux-gnu \
#
# Make sure cargo has the right target config
&& echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
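#
# For reference (illustration only, assuming the CARGO_HOME set above): after
# this RUN step the generated ${CARGO_HOME}/config simply contains the TOML
# target table below, which cargo also accepts under the newer name `config.toml`:
#   [target.aarch64-unknown-linux-gnu]
#   linker = "aarch64-linux-gnu-gcc"
#   rustflags = ["-L/usr/lib/aarch64-linux-gnu"]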
# Set arm specific environment values
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \
OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-gnu
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:bullseye
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,128 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:aarch64-musl-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-musl
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,50 +16,61 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.53 as build
FROM rust:1.61-bullseye as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
# hadolint ignore=DL3059
RUN dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel \
libpq5:armel \
libpq-dev \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -68,27 +81,11 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
RUN rustup target add arm-unknown-linux-gnueabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -104,16 +101,17 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster
FROM balenalib/rpi-debian:bullseye
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
@@ -128,6 +126,7 @@ RUN mkdir /data \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
@@ -140,7 +139,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
@@ -150,5 +148,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,130 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# To be able to build the armv6 image with mimalloc we need to specifically specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add arm-unknown-linux-musleabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,156 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armel architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel \
libpq5:armel \
libpq-dev:armel \
libmariadb3:armel \
libmariadb-dev:armel \
libmariadb-dev-compat:armel \
gcc-arm-linux-gnueabi \
#
# Make sure cargo has the right target config
&& echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-gnueabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:bullseye
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,130 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:arm-musleabi-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# To be able to build the armv6 image with mimalloc we need to specifically specify the libatomic.a file location
ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/arm-unknown-linux-musleabi/lib/libatomic.a'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-musleabi
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-musleabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,50 +16,61 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.53 as build
FROM rust:1.61-bullseye as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
# hadolint ignore=DL3059
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -68,27 +81,11 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
&& apt-get download libmariadb-dev-compat:amd64 \
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
&& rm -rvf ./libmariadb-dev-compat*.deb \
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
&& ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -104,16 +101,17 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster
FROM balenalib/armv7hf-debian:bullseye
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
@@ -128,6 +126,7 @@ RUN mkdir /data \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
@@ -140,7 +139,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
@@ -150,5 +148,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,3 +1,5 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
@@ -14,32 +16,34 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.21.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
# [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
# [vaultwarden/web-vault:v2.21.1]
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.61.0 as build
# Alpine-based ARM (musl) only supports sqlite during compile time.
# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
ARG DB=sqlite,vendored_openssl
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Don't download rust docs
RUN rustup set profile minimal
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -52,6 +56,10 @@ COPY ./build.rs ./build.rs
RUN rustup target add armv7-unknown-linux-musleabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@@ -67,19 +75,19 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# hadolint ignore=DL3059
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.14
FROM balenalib/armv7hf-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
@@ -88,6 +96,7 @@ RUN [ "cross-build-start" ]
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
@@ -102,7 +111,6 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
@@ -112,5 +120,9 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,156 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM rust:1.61-bullseye as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
#
# Install required build libs for armhf architecture.
# hadolint ignore=DL3059
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf \
libpq5:armhf \
libpq-dev:armhf \
libmariadb3:armhf \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf \
gcc-arm-linux-gnueabihf \
#
# Make sure cargo has the right target config
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
# Set arm specific environment values
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \
CROSS_COMPILE="1" \
OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \
OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-gnueabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:bullseye
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
dumb-init \
libmariadb-dev-compat \
libpq5 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -0,0 +1,128 @@
# syntax=docker/dockerfile:1
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull vaultwarden/web-vault:v2.28.1
# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.28.1
# [vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5
# [vaultwarden/web-vault:v2.28.1]
#
FROM vaultwarden/web-vault@sha256:df7f12b1e22bf0dfc1b6b6f46921b4e9e649561931ba65357c1eb1963514b3b5 as vault
########################## BUILD IMAGE ##########################
FROM blackdex/rust-musl:armv7-musleabihf-stable-1.61.0 as build
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
TZ=UTC \
TERM=xterm-256color \
CARGO_HOME="/root/.cargo" \
USER="root"
# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-musleabihf
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf \
&& find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
# hadolint ignore=DL3059
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.15
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
ROCKET_PORT=80 \
SSL_CERT_DIR=/etc/ssl/certs
# hadolint ignore=DL3059
RUN [ "cross-build-start" ]
# Create data folder and Install needed libraries
RUN mkdir /data \
&& apk add --no-cache \
openssl \
tzdata \
curl \
dumb-init \
ca-certificates
# hadolint ignore=DL3059
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
# We should be able to remove the dumb-init now with Rocket 0.5
# But the balenalib images have some issues with their entry.sh
# See: https://github.com/balena-io-library/base-images/issues/735
# Let's keep using dumb-init for now, since that is working fine.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -7,10 +7,5 @@ arches=(
)
if [[ "${DOCKER_TAG}" == *alpine ]]; then
# The Alpine image build currently only works for certain arches.
distro_suffix=.alpine
arches=(
amd64
armv7
)
fi

View File

@@ -34,12 +34,17 @@ for label in "${LABELS[@]}"; do
LABEL_ARGS+=(--label "${label}")
done
# Check if DOCKER_BUILDKIT is set; if so, use the Dockerfile.buildx file as the template
if [[ -n "${DOCKER_BUILDKIT}" ]]; then
buildx_suffix=.buildx
fi
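#
# Hypothetical local invocation (script path and example values assumed, not
# taken from the repo): DOCKER_REPO/DOCKER_TAG mirror the variables Docker Hub
# sets, and exporting DOCKER_BUILDKIT selects the *.buildx Dockerfile variants:
#   DOCKER_BUILDKIT=1 DOCKER_REPO=index.docker.io/vaultwarden/server DOCKER_TAG=testing-alpine ./hooks/build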
set -ex
for arch in "${arches[@]}"; do
docker build \
"${LABEL_ARGS[@]}" \
-t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-f docker/${arch}/Dockerfile${distro_suffix} \
-f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
.
done

View File

@@ -10,7 +10,7 @@ join() { local IFS="$1"; shift; echo "$*"; }
set -ex
echo ">>> Starting local Docker registry..."
echo ">>> Starting local Docker registry when needed..."
# Docker Buildx's `docker-container` driver is needed for multi-platform
# builds, but it can't access existing images on the Docker host (like the
@@ -25,7 +25,13 @@ echo ">>> Starting local Docker registry..."
# Use host networking so the buildx container can access the registry via
# localhost.
#
docker run -d --name registry --network host registry:2 # defaults to port 5000
# First check whether a registry container is already running; if so, skip starting a new one.
# This will only happen when running locally or via GitHub Actions.
#
if ! timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5000'; then
# defaults to port 5000
docker run -d --name registry --network host registry:2
fi
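#
# A standalone sketch of the same probe (hypothetical helper name, shown only
# as an illustration): the check above relies on bash's /dev/tcp/<host>/<port>
# pseudo-path, where redirecting to it attempts a TCP connection, so the exit
# status tells us whether something is already listening on that port.
port_is_open() {
  local host="$1" port="$2"
  timeout 5 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null
}
# e.g. port_is_open localhost 5000 && echo ">>> Registry already listening on :5000"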
# Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
# Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.
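#
# Sketch (hypothetical; the actual assignment sits outside this hunk): bash
# parameter expansion can drop everything up to the first "/" to keep the
# "user/repo" part and prefix it with the local registry started above:
#   LOCAL_REPO="localhost:5000/${DOCKER_REPO#*/}"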
@@ -49,7 +55,12 @@ echo ">>> Setting up Docker Buildx..."
#
# Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
#
docker buildx create --name builder --use --driver-opt network=host
# Check whether a builder already exists; if so, skip creation and use the existing one.
# This will only happen when running locally or via GitHub Actions.
#
if ! docker buildx inspect builder > /dev/null 2>&1 ; then
docker buildx create --name builder --use --driver-opt network=host
fi
echo ">>> Running Docker Buildx..."

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid CHAR(36) NOT NULL PRIMARY KEY,
grantor_uuid CHAR(36) REFERENCES users (uuid),
grantee_uuid CHAR(36) REFERENCES users (uuid),
email VARCHAR(255),
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at DATETIME,
last_notification_at DATETIME,
updated_at DATETIME NOT NULL,
created_at DATETIME NOT NULL
);

View File

@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;

View File

@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
user_uuid CHAR(36) NOT NULL REFERENCES users(uuid),
device_uuid CHAR(36) NOT NULL,
device_name TEXT NOT NULL,
login_time DATETIME NOT NULL,
ip_address TEXT NOT NULL,
PRIMARY KEY (user_uuid, device_uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN api_key VARCHAR(255);

View File

@@ -0,0 +1,4 @@
-- First remove the previous primary key
ALTER TABLE devices DROP PRIMARY KEY;
-- Add a new combined one
ALTER TABLE devices ADD PRIMARY KEY (uuid, user_uuid);

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid CHAR(36) NOT NULL PRIMARY KEY,
grantor_uuid CHAR(36) REFERENCES users (uuid),
grantee_uuid CHAR(36) REFERENCES users (uuid),
email VARCHAR(255),
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at TIMESTAMP,
last_notification_at TIMESTAMP,
updated_at TIMESTAMP NOT NULL,
created_at TIMESTAMP NOT NULL
);

View File

@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;

View File

@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
user_uuid VARCHAR(40) NOT NULL REFERENCES users(uuid),
device_uuid VARCHAR(40) NOT NULL,
device_name TEXT NOT NULL,
login_time TIMESTAMP NOT NULL,
ip_address TEXT NOT NULL,
PRIMARY KEY (user_uuid, device_uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN api_key TEXT;

View File

@@ -0,0 +1,4 @@
-- First remove the previous primary key
ALTER TABLE devices DROP CONSTRAINT devices_pkey;
-- Add a new combined one
ALTER TABLE devices ADD PRIMARY KEY (uuid, user_uuid);

View File

@@ -0,0 +1 @@
DROP TABLE emergency_access;

View File

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
uuid TEXT NOT NULL PRIMARY KEY,
grantor_uuid TEXT REFERENCES users (uuid),
grantee_uuid TEXT REFERENCES users (uuid),
email TEXT,
key_encrypted TEXT,
atype INTEGER NOT NULL,
status INTEGER NOT NULL,
wait_time_days INTEGER NOT NULL,
recovery_initiated_at DATETIME,
last_notification_at DATETIME,
updated_at DATETIME NOT NULL,
created_at DATETIME NOT NULL
);

View File

@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;

View File

@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
user_uuid TEXT NOT NULL REFERENCES users(uuid),
device_uuid TEXT NOT NULL,
device_name TEXT NOT NULL,
login_time DATETIME NOT NULL,
ip_address TEXT NOT NULL,
PRIMARY KEY (user_uuid, device_uuid)
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE users
ADD COLUMN api_key TEXT;

View File

@@ -0,0 +1,23 @@
-- Create a new devices table with a composite primary key on uuid and user_uuid
CREATE TABLE devices_new (
uuid TEXT NOT NULL,
created_at DATETIME NOT NULL,
updated_at DATETIME NOT NULL,
user_uuid TEXT NOT NULL,
name TEXT NOT NULL,
atype INTEGER NOT NULL,
push_token TEXT,
refresh_token TEXT NOT NULL,
twofactor_remember TEXT,
PRIMARY KEY(uuid, user_uuid),
FOREIGN KEY(user_uuid) REFERENCES users(uuid)
);
-- Transfer current data to new table
INSERT INTO devices_new SELECT * FROM devices;
-- Drop the old table
DROP TABLE devices;
-- Rename the new table to the original name
ALTER TABLE devices_new RENAME TO devices;

View File

@@ -1 +1 @@
nightly-2021-06-24
stable

View File

@@ -1,7 +1,7 @@
version = "Two"
edition = "2018"
# version = "Two"
edition = "2021"
max_width = 120
newline_style = "Unix"
use_small_heuristics = "Off"
struct_lit_single_line = false
overflow_delimited_expr = true
# struct_lit_single_line = false
# overflow_delimited_expr = true

View File

@@ -1,15 +1,16 @@
use once_cell::sync::Lazy;
use serde::de::DeserializeOwned;
use serde_json::Value;
use std::{env, time::Duration};
use std::env;
use rocket::serde::json::Json;
use rocket::{
http::{Cookie, Cookies, SameSite, Status},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
response::{content::Html, Flash, Redirect},
form::Form,
http::{Cookie, CookieJar, SameSite, Status},
request::{self, FlashMessage, FromRequest, Outcome, Request},
response::{content::RawHtml as Html, Flash, Redirect},
Route,
};
use rocket_contrib::json::Json;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
@@ -18,10 +19,14 @@ use crate::{
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
error::{Error, MapResult},
mail,
util::{format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker},
CONFIG,
util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
},
CONFIG, VERSION,
};
use futures::{stream, stream::StreamExt};
pub fn routes() -> Vec<Route> {
if !CONFIG.disable_admin_token() && !CONFIG.is_admin_token_set() {
return routes![admin_disabled];
@@ -72,11 +77,10 @@ fn admin_disabled() -> &'static str {
"The admin panel is disabled, please configure the 'ADMIN_TOKEN' variable to enable it"
}
const COOKIE_NAME: &str = "BWRS_ADMIN";
const COOKIE_NAME: &str = "VW_ADMIN";
const ADMIN_PATH: &str = "/admin";
const BASE_TEMPLATE: &str = "admin/base";
const VERSION: Option<&str> = option_env!("BWRS_VERSION");
fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
@@ -84,10 +88,11 @@ fn admin_path() -> String {
struct Referer(Option<String>);
impl<'a, 'r> FromRequest<'a, 'r> for Referer {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Referer {
type Error = ();
fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
Outcome::Success(Referer(request.headers().get_one("Referer").map(str::to_string)))
}
}
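
For readers following the Rocket 0.4 to 0.5 migration in this file: request guards now take a single 'r lifetime and an async from_request wrapped in #[rocket::async_trait], as the Referer impl above (and the guards below) show. A minimal, self-contained sketch of that shape, with an illustrative guard name and header, assuming the Rocket 0.5 release-candidate crate:

use rocket::request::{FromRequest, Outcome, Request};

// Hypothetical guard, used only to illustrate the new trait shape.
struct ForwardedHost(Option<String>);

#[rocket::async_trait]
impl<'r> FromRequest<'r> for ForwardedHost {
    type Error = ();

    // from_request is now async and takes &'r Request<'_> instead of the
    // old &'a Request<'r> with two lifetimes.
    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        Outcome::Success(ForwardedHost(request.headers().get_one("X-Forwarded-Host").map(str::to_string)))
    }
}
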
@@ -95,10 +100,11 @@ impl<'a, 'r> FromRequest<'a, 'r> for Referer {
#[derive(Debug)]
struct IpHeader(Option<String>);
impl<'a, 'r> FromRequest<'a, 'r> for IpHeader {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for IpHeader {
type Error = ();
fn from_request(req: &'a Request<'r>) -> Outcome<Self, Self::Error> {
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
if req.headers().get_one(&CONFIG.ip_header()).is_some() {
Outcome::Success(IpHeader(Some(CONFIG.ip_header())))
} else if req.headers().get_one("X-Client-IP").is_some() {
@@ -135,9 +141,9 @@ fn admin_url(referer: Referer) -> String {
}
#[get("/", rank = 2)]
fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> {
fn admin_login(flash: Option<FlashMessage<'_>>) -> ApiResult<Html<String>> {
// If there is an error, show it
let msg = flash.map(|msg| format!("{}: {}", msg.name(), msg.msg()));
let msg = flash.map(|msg| format!("{}: {}", msg.kind(), msg.message()));
let json = json!({
"page_content": "admin/login",
"version": VERSION,
@@ -158,12 +164,16 @@ struct LoginForm {
#[post("/", data = "<data>")]
fn post_admin_login(
data: Form<LoginForm>,
mut cookies: Cookies,
cookies: &CookieJar<'_>,
ip: ClientIp,
referer: Referer,
) -> Result<Redirect, Flash<Redirect>> {
let data = data.into_inner();
if crate::ratelimit::check_limit_admin(&ip.ip).is_err() {
return Err(Flash::error(Redirect::to(admin_url(referer)), "Too many requests, try again later."));
}
// If the token is invalid, redirect to login page
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
@@ -175,7 +185,7 @@ fn post_admin_login(
let cookie = Cookie::build(COOKIE_NAME, jwt)
.path(admin_path())
.max_age(time::Duration::minutes(20))
.max_age(rocket::time::Duration::minutes(20))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
@@ -234,7 +244,7 @@ impl AdminTemplateData {
}
#[get("/", rank = 1)]
fn admin_page(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
let text = AdminTemplateData::new().render()?;
Ok(Html(text))
}
@@ -245,8 +255,8 @@ struct InviteData {
email: String,
}
fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn) {
async fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn).await {
Ok(user)
} else {
err_code!("User doesn't exist", Status::NotFound.code);
@@ -254,30 +264,28 @@ fn get_user_or_404(uuid: &str, conn: &DbConn) -> ApiResult<User> {
}
#[post("/invite", data = "<data>")]
fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> JsonResult {
async fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> JsonResult {
let data: InviteData = data.into_inner();
let email = data.email.clone();
if User::find_by_mail(&data.email, &conn).is_some() {
if User::find_by_mail(&data.email, &conn).await.is_some() {
err_code!("User already exists", Status::Conflict.code)
}
let mut user = User::new(email);
// TODO: After try_blocks is stabilized, this can be made more readable
// See: https://github.com/rust-lang/rust/issues/31436
(|| {
async fn _generate_invite(user: &User, conn: &DbConn) -> EmptyResult {
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)?;
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)
} else {
let invitation = Invitation::new(data.email);
invitation.save(&conn)?;
let invitation = Invitation::new(user.email.clone());
invitation.save(conn).await
}
}
user.save(&conn)
})()
.map_err(|e| e.with_code(Status::InternalServerError.code))?;
_generate_invite(&user, &conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
user.save(&conn).await.map_err(|e| e.with_code(Status::InternalServerError.code))?;
Ok(Json(user.to_json(&conn)))
Ok(Json(user.to_json(&conn).await))
}
#[post("/test/smtp", data = "<data>")]
@@ -292,90 +300,96 @@ fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
}
#[get("/logout")]
fn logout(mut cookies: Cookies, referer: Referer) -> Redirect {
cookies.remove(Cookie::named(COOKIE_NAME));
fn logout(cookies: &CookieJar<'_>, referer: Referer) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
Redirect::to(admin_url(referer))
}
#[get("/users")]
fn get_users_json(_token: AdminToken, conn: DbConn) -> Json<Value> {
let users = User::get_all(&conn);
let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect();
async fn get_users_json(_token: AdminToken, conn: DbConn) -> Json<Value> {
let users_json = stream::iter(User::get_all(&conn).await)
.then(|u| async {
let u = u; // Move out this single variable
u.to_json(&conn).await
})
.collect::<Vec<Value>>()
.await;
Json(Value::Array(users_json))
}
#[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn);
let dt_fmt = "%Y-%m-%d %H:%M:%S %Z";
let users_json: Vec<Value> = users
.iter()
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
async fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
const DT_FMT: &str = "%Y-%m-%d %H:%M:%S %Z";
let users_json = stream::iter(User::get_all(&conn).await)
.then(|u| async {
let u = u; // Move out this single variable
let mut usr = u.to_json(&conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn).await as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, dt_fmt));
usr["last_active"] = match u.last_active(&conn) {
Some(dt) => json!(format_naive_datetime_local(&dt, dt_fmt)),
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&conn).await {
Some(dt) => json!(format_naive_datetime_local(&dt, DT_FMT)),
None => json!("Never"),
};
usr
})
.collect();
.collect::<Vec<Value>>()
.await;
let text = AdminTemplateData::with_data("admin/users", json!(users_json)).render()?;
Ok(Html(text))
}
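
The overview and listing endpoints in this file replace the old synchronous .iter().map() loops with stream::iter(..).then(..).collect(), so each per-user database call can be awaited. A self-contained sketch of that pattern, with a dummy async lookup standing in for the DbConn queries (assumes the futures and tokio crates):

use futures::{stream, StreamExt};

// Stand-in for an async per-item query such as u.to_json(&conn).await.
async fn enrich(n: u32) -> String {
    format!("user-{n}")
}

#[tokio::main]
async fn main() {
    let users: Vec<String> = stream::iter(1..=3u32)
        .then(|n| async move { enrich(n).await })
        .collect()
        .await;
    assert_eq!(users, vec!["user-1", "user-2", "user-3"]);
}
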
#[get("/users/<uuid>")]
fn get_user_json(uuid: String, _token: AdminToken, conn: DbConn) -> JsonResult {
let user = get_user_or_404(&uuid, &conn)?;
async fn get_user_json(uuid: String, _token: AdminToken, conn: DbConn) -> JsonResult {
let user = get_user_or_404(&uuid, &conn).await?;
Ok(Json(user.to_json(&conn)))
Ok(Json(user.to_json(&conn).await))
}
#[post("/users/<uuid>/delete")]
fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&uuid, &conn)?;
user.delete(&conn)
async fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&uuid, &conn).await?;
user.delete(&conn).await
}
#[post("/users/<uuid>/deauth")]
fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?;
async fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
Device::delete_all_by_user(&user.uuid, &conn).await?;
user.reset_security_stamp();
user.save(&conn)
user.save(&conn).await
}
#[post("/users/<uuid>/disable")]
fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn)?;
async fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
Device::delete_all_by_user(&user.uuid, &conn).await?;
user.reset_security_stamp();
user.enabled = false;
user.save(&conn)
user.save(&conn).await
}
#[post("/users/<uuid>/enable")]
fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
async fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
user.enabled = true;
user.save(&conn)
user.save(&conn).await
}
#[post("/users/<uuid>/remove-2fa")]
fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn)?;
TwoFactor::delete_all_by_user(&user.uuid, &conn)?;
async fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&uuid, &conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &conn).await?;
user.totp_recover = None;
user.save(&conn)
user.save(&conn).await
}
#[derive(Deserialize, Debug)]
@@ -386,10 +400,10 @@ struct UserOrgTypeData {
}
#[post("/users/org_type", data = "<data>")]
fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
async fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn) {
let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
@@ -401,7 +415,8 @@ fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: D
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
// Removing owner permission; check that there is at least one other owner
let num_owners = UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).len();
let num_owners =
UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).await.len();
if num_owners <= 1 {
err!("Can't change the type of the last owner")
@@ -409,37 +424,37 @@ fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: D
}
user_to_edit.atype = new_type as i32;
user_to_edit.save(&conn)
user_to_edit.save(&conn).await
}
#[post("/users/update_revision")]
fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn)
async fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn).await
}
#[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations
.iter()
.map(|o| {
async fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations_json = stream::iter(Organization::get_all(&conn).await)
.then(|o| async {
let o = o; // Move out this single variable
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn).await as i32));
org
})
.collect();
.collect::<Vec<Value>>()
.await;
let text = AdminTemplateData::with_data("admin/organizations", json!(organizations_json)).render()?;
Ok(Html(text))
}
#[post("/organizations/<uuid>/delete")]
fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).map_res("Organization doesn't exist")?;
org.delete(&conn)
async fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).await.map_res("Organization doesn't exist")?;
org.delete(&conn).await
}
#[derive(Deserialize)]
@@ -457,30 +472,30 @@ struct GitCommit {
sha: String,
}
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
async fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let github_api = get_reqwest_client();
Ok(github_api.get(url).timeout(Duration::from_secs(10)).send()?.error_for_status()?.json::<T>()?)
Ok(github_api.get(url).send().await?.error_for_status()?.json::<T>().await?)
}
fn has_http_access() -> bool {
async fn has_http_access() -> bool {
let http_access = get_reqwest_client();
match http_access.head("https://github.com/dani-garcia/vaultwarden").timeout(Duration::from_secs(10)).send() {
match http_access.head("https://github.com/dani-garcia/vaultwarden").send().await {
Ok(r) => r.status().is_success(),
_ => false,
}
}
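
get_github_api and has_http_access now await their reqwest calls rather than using a blocking client with a per-request timeout. A minimal sketch of the async shape, with a plain reqwest::Client in place of the project's get_reqwest_client() helper (assumes reqwest with its json feature, serde, serde_json, and tokio; the User-Agent value is a placeholder):

use serde::de::DeserializeOwned;

// Generic async GET that fails on non-2xx responses and deserializes the JSON body.
async fn get_json<T: DeserializeOwned>(url: &str) -> Result<T, reqwest::Error> {
    // GitHub's API rejects requests without a User-Agent; the project's
    // get_reqwest_client() helper takes care of this, so the sketch sets one too.
    let client = reqwest::Client::builder().user_agent("vaultwarden-diff-example").build()?;
    client.get(url).send().await?.error_for_status()?.json::<T>().await
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let release: serde_json::Value =
        get_json("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest").await?;
    println!("latest tag: {}", release["tag_name"]);
    Ok(())
}
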
#[get("/diagnostics")]
fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResult<Html<String>> {
async fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResult<Html<String>> {
use crate::util::read_file_string;
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let web_vault_version: WebVaultVersion =
match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "bwrs-version.json")) {
match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
Ok(s) => serde_json::from_str(&s)?,
@@ -492,7 +507,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
// Execute some environment checks
let running_within_docker = is_running_in_docker();
let has_http_access = has_http_access();
let has_http_access = has_http_access().await;
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
|| env::var_os("HTTPS_PROXY").is_some()
@@ -508,11 +523,14 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
// TODO: Maybe we need to cache this using a LazyStatic or something. Github only allows 60 requests per hour, and we use 3 here already.
let (latest_release, latest_commit, latest_web_build) = if has_http_access {
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest") {
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest")
.await
{
Ok(r) => r.tag_name,
_ => "-".to_string(),
},
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main") {
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main").await
{
Ok(mut c) => {
c.sha.truncate(8);
c.sha
@@ -526,7 +544,9 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
} else {
match get_github_api::<GitRelease>(
"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
) {
)
.await
{
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(),
}
@@ -549,6 +569,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
"web_vault_version": web_vault_version.version,
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"docker_base_image": docker_base_image(),
"has_http_access": has_http_access,
"ip_header_exists": &ip_header.0.is_some(),
"ip_header_match": ip_header_name == CONFIG.ip_header(),
@@ -556,7 +577,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"db_type": *DB_TYPE,
"db_version": get_sql_server_version(&conn),
"db_version": get_sql_server_version(&conn).await,
"admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
"overrides": &CONFIG.get_overrides().join(", "),
"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
@@ -585,9 +606,9 @@ fn delete_config(_token: AdminToken) -> EmptyResult {
}
#[post("/config/backup_db")]
fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
async fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
if *CAN_BACKUP {
backup_database(&conn)
backup_database(&conn).await
} else {
err!("Can't back up current DB (Only SQLite supports this feature)");
}
@@ -595,28 +616,29 @@ fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
pub struct AdminToken {}
impl<'a, 'r> FromRequest<'a, 'r> for AdminToken {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for AdminToken {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
if CONFIG.disable_admin_token() {
Outcome::Success(AdminToken {})
} else {
let mut cookies = request.cookies();
let cookies = request.cookies();
let access_token = match cookies.get(COOKIE_NAME) {
Some(cookie) => cookie.value(),
None => return Outcome::Forward(()), // If there is no cookie, redirect to login
};
let ip = match request.guard::<ClientIp>() {
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip.ip,
_ => err_handler!("Error getting Client IP"),
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
cookies.remove(Cookie::named(COOKIE_NAME));
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
error!("Invalid or expired admin JWT. IP: {}.", ip);
return Outcome::Forward(());
}

View File

@@ -1,5 +1,5 @@
use chrono::Utc;
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
@@ -34,6 +34,8 @@ pub fn routes() -> Vec<rocket::Route> {
password_hint,
prelogin,
verify_password,
api_key,
rotate_api_key,
]
}
@@ -49,6 +51,7 @@ struct RegisterData {
MasterPasswordHint: Option<String>,
Name: Option<String>,
Token: Option<String>,
#[allow(dead_code)]
OrganizationUserId: Option<String>,
}
@@ -60,13 +63,14 @@ struct KeysData {
}
#[post("/accounts/register", data = "<data>")]
fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
let data: RegisterData = data.into_inner().data;
let email = data.Email.to_lowercase();
let mut user = match User::find_by_mail(&data.Email, &conn) {
let mut user = match User::find_by_mail(&email, &conn).await {
Some(user) => {
if !user.password_hash.is_empty() {
if CONFIG.is_signup_allowed(&data.Email) {
if CONFIG.is_signup_allowed(&email) {
err!("User already exists")
} else {
err!("Registration not allowed or user already exists")
@@ -75,19 +79,20 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
if let Some(token) = data.Token {
let claims = decode_invite(&token)?;
if claims.email == data.Email {
if claims.email == email {
user
} else {
err!("Registration email does not match invite email")
}
} else if Invitation::take(&data.Email, &conn) {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() {
} else if Invitation::take(&email, &conn).await {
for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).await.iter_mut() {
user_org.status = UserOrgStatus::Accepted as i32;
user_org.save(&conn)?;
user_org.save(&conn).await?;
}
user
} else if CONFIG.is_signup_allowed(&data.Email) {
} else if EmergencyAccess::find_invited_by_grantee_email(&email, &conn).await.is_some() {
user
} else if CONFIG.is_signup_allowed(&email) {
err!("Account with this email already exists")
} else {
err!("Registration not allowed or user already exists")
@@ -97,8 +102,8 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
// Order is important here; the invitation check must come first
// because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions.
if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) {
User::new(data.Email.clone())
if Invitation::take(&email, &conn).await || CONFIG.is_signup_allowed(&email) {
User::new(email.clone())
} else {
err!("Registration not allowed or user already exists")
}
@@ -106,7 +111,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
};
// Make sure we don't leave a lingering invitation.
Invitation::take(&data.Email, &conn);
Invitation::take(&email, &conn).await;
if let Some(client_kdf_iter) = data.KdfIterations {
user.client_kdf_iter = client_kdf_iter;
@@ -145,12 +150,12 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
}
user.save(&conn)
user.save(&conn).await
}
#[get("/accounts/profile")]
fn profile(headers: Headers, conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&conn))
async fn profile(headers: Headers, conn: DbConn) -> Json<Value> {
Json(headers.user.to_json(&conn).await)
}
#[derive(Deserialize, Debug)]
@@ -163,12 +168,12 @@ struct ProfileData {
}
#[put("/accounts/profile", data = "<data>")]
fn put_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
post_profile(data, headers, conn)
async fn put_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
post_profile(data, headers, conn).await
}
#[post("/accounts/profile", data = "<data>")]
fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: ProfileData = data.into_inner().data;
let mut user = headers.user;
@@ -178,13 +183,13 @@ fn post_profile(data: JsonUpcase<ProfileData>, headers: Headers, conn: DbConn) -
Some(ref h) if h.is_empty() => None,
_ => data.MasterPasswordHint,
};
user.save(&conn)?;
Ok(Json(user.to_json(&conn)))
user.save(&conn).await?;
Ok(Json(user.to_json(&conn).await))
}
#[get("/users/<uuid>/public-key")]
fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &conn) {
async fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(&uuid, &conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -197,7 +202,7 @@ fn get_public_keys(uuid: String, _headers: Headers, conn: DbConn) -> JsonResult
}
#[post("/accounts/keys", data = "<data>")]
fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: KeysData = data.into_inner().data;
let mut user = headers.user;
@@ -205,7 +210,7 @@ fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> Json
user.private_key = Some(data.EncryptedPrivateKey);
user.public_key = Some(data.PublicKey);
user.save(&conn)?;
user.save(&conn).await?;
Ok(Json(json!({
"PrivateKey": user.private_key,
@@ -223,7 +228,7 @@ struct ChangePassData {
}
#[post("/accounts/password", data = "<data>")]
fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: ChangePassData = data.into_inner().data;
let mut user = headers.user;
@@ -233,10 +238,10 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
user.set_password(
&data.NewMasterPasswordHash,
Some(vec![String::from("post_rotatekey"), String::from("get_contacts")]),
Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
);
user.akey = data.Key;
user.save(&conn)
user.save(&conn).await
}
#[derive(Deserialize)]
@@ -251,7 +256,7 @@ struct ChangeKdfData {
}
#[post("/accounts/kdf", data = "<data>")]
fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: ChangeKdfData = data.into_inner().data;
let mut user = headers.user;
@@ -263,7 +268,7 @@ fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) ->
user.client_kdf_type = data.Kdf;
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn)
user.save(&conn).await
}
#[derive(Deserialize)]
@@ -286,7 +291,7 @@ struct KeyData {
}
#[post("/accounts/key", data = "<data>")]
fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let data: KeyData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
@@ -297,7 +302,7 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
// Update folder data
for folder_data in data.Folders {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &conn) {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
};
@@ -307,14 +312,14 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
}
saved_folder.name = folder_data.Name;
saved_folder.save(&conn)?
saved_folder.save(&conn).await?
}
// Update cipher data
use super::ciphers::update_cipher_from_data;
for cipher_data in data.Ciphers {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &conn) {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
@@ -325,7 +330,7 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None)?
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None).await?
}
// Update user data
@@ -335,11 +340,11 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
user.private_key = Some(data.PrivateKey);
user.reset_security_stamp();
user.save(&conn)
user.save(&conn).await
}
#[post("/accounts/security-stamp", data = "<data>")]
fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let mut user = headers.user;
@@ -347,9 +352,9 @@ fn post_sstamp(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -
err!("Invalid password")
}
Device::delete_all_by_user(&user.uuid, &conn)?;
Device::delete_all_by_user(&user.uuid, &conn).await?;
user.reset_security_stamp();
user.save(&conn)
user.save(&conn).await
}
#[derive(Deserialize)]
@@ -360,7 +365,7 @@ struct EmailTokenData {
}
#[post("/accounts/email-token", data = "<data>")]
fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: EmailTokenData = data.into_inner().data;
let mut user = headers.user;
@@ -368,7 +373,7 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).is_some() {
if User::find_by_mail(&data.NewEmail, &conn).await.is_some() {
err!("Email already in use");
}
@@ -376,7 +381,7 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
err!("Email domain not allowed");
}
let token = crypto::generate_token(6)?;
let token = crypto::generate_email_token(6);
if CONFIG.mail_enabled() {
if let Err(e) = mail::send_change_email(&data.NewEmail, &token) {
@@ -386,7 +391,7 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
user.email_new = Some(data.NewEmail);
user.email_new_token = Some(token);
user.save(&conn)
user.save(&conn).await
}
#[derive(Deserialize)]
@@ -401,7 +406,7 @@ struct ChangeEmailData {
}
#[post("/accounts/email", data = "<data>")]
fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: ChangeEmailData = data.into_inner().data;
let mut user = headers.user;
@@ -409,7 +414,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
err!("Invalid password")
}
if User::find_by_mail(&data.NewEmail, &conn).is_some() {
if User::find_by_mail(&data.NewEmail, &conn).await.is_some() {
err!("Email already in use");
}
@@ -444,11 +449,11 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn)
user.save(&conn).await
}
#[post("/accounts/verify-email")]
fn post_verify_email(headers: Headers, _conn: DbConn) -> EmptyResult {
fn post_verify_email(headers: Headers) -> EmptyResult {
let user = headers.user;
if !CONFIG.mail_enabled() {
@@ -470,10 +475,10 @@ struct VerifyEmailTokenData {
}
#[post("/accounts/verify-email-token", data = "<data>")]
fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn) -> EmptyResult {
async fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn) -> EmptyResult {
let data: VerifyEmailTokenData = data.into_inner().data;
let mut user = match User::find_by_uuid(&data.UserId, &conn) {
let mut user = match User::find_by_uuid(&data.UserId, &conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -488,7 +493,7 @@ fn post_verify_email_token(data: JsonUpcase<VerifyEmailTokenData>, conn: DbConn)
user.verified_at = Some(Utc::now().naive_utc());
user.last_verifying_at = None;
user.login_verify_count = 0;
if let Err(e) = user.save(&conn) {
if let Err(e) = user.save(&conn).await {
error!("Error saving email verification: {:#?}", e);
}
@@ -502,13 +507,11 @@ struct DeleteRecoverData {
}
#[post("/accounts/delete-recover", data = "<data>")]
fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover(data: JsonUpcase<DeleteRecoverData>, conn: DbConn) -> EmptyResult {
let data: DeleteRecoverData = data.into_inner().data;
let user = User::find_by_mail(&data.Email, &conn);
if CONFIG.mail_enabled() {
if let Some(user) = user {
if let Some(user) = User::find_by_mail(&data.Email, &conn).await {
if let Err(e) = mail::send_delete_account(&user.email, &user.uuid) {
error!("Error sending delete account email: {:#?}", e);
}
@@ -531,10 +534,10 @@ struct DeleteRecoverTokenData {
}
#[post("/accounts/delete-recover-token", data = "<data>")]
fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbConn) -> EmptyResult {
async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbConn) -> EmptyResult {
let data: DeleteRecoverTokenData = data.into_inner().data;
let user = match User::find_by_uuid(&data.UserId, &conn) {
let user = match User::find_by_uuid(&data.UserId, &conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
};
@@ -546,16 +549,16 @@ fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, conn: DbC
if claims.sub != user.uuid {
err!("Invalid claim");
}
user.delete(&conn)
user.delete(&conn).await
}
#[post("/accounts/delete", data = "<data>")]
fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn)
async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn).await
}
#[delete("/accounts", data = "<data>")]
fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -563,7 +566,7 @@ fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn
err!("Invalid password")
}
user.delete(&conn)
user.delete(&conn).await
}
#[get("/accounts/revision-date")]
@@ -579,7 +582,7 @@ struct PasswordHintData {
}
#[post("/accounts/password-hint", data = "<data>")]
fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
async fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
err!("This server is not configured to provide password hints.");
}
@@ -589,19 +592,18 @@ fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResul
let data: PasswordHintData = data.into_inner().data;
let email = &data.Email;
match User::find_by_mail(email, &conn) {
match User::find_by_mail(email, &conn).await {
None => {
// To prevent user enumeration, act as if the user exists.
if CONFIG.mail_enabled() {
// There is still a timing side channel here in that the code
// paths that send mail take noticeably longer than ones that
// don't. Add a randomized sleep to mitigate this somewhat.
use rand::{thread_rng, Rng};
let mut rng = thread_rng();
let base = 1000;
use rand::{rngs::SmallRng, Rng, SeedableRng};
let mut rng = SmallRng::from_entropy();
let delta: i32 = 100;
let sleep_ms = (base + rng.gen_range(-delta..=delta)) as u64;
std::thread::sleep(std::time::Duration::from_millis(sleep_ms));
let sleep_ms = (1_000 + rng.gen_range(-delta..=delta)) as u64;
tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
Ok(())
} else {
err!(NO_HINT);
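
Two mechanical changes in the hunk above: the blocking std::thread::sleep becomes tokio::time::sleep(..).await, so the randomized delay no longer stalls an async worker thread, and the jitter now comes from rand's SmallRng seeded from entropy. A compact sketch of the same jittered delay (assumes the rand crate built with its small_rng feature, plus tokio):

use rand::{rngs::SmallRng, Rng, SeedableRng};

#[tokio::main]
async fn main() {
    // Roughly 1 second of delay with +/-100 ms of jitter, mirroring the diff.
    let mut rng = SmallRng::from_entropy();
    let delta: i32 = 100;
    let sleep_ms = (1_000 + rng.gen_range(-delta..=delta)) as u64;
    tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
}
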
@@ -623,15 +625,19 @@ fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResul
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct PreloginData {
pub struct PreloginData {
Email: String,
}
#[post("/accounts/prelogin", data = "<data>")]
fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
pub async fn _prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
let data: PreloginData = data.into_inner().data;
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &conn) {
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &conn).await {
Some(user) => (user.client_kdf_type, user.client_kdf_iter),
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT),
};
@@ -641,15 +647,17 @@ fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
"KdfIterations": kdf_iter
}))
}
// https://github.com/bitwarden/server/blob/master/src/Api/Models/Request/Accounts/SecretVerificationRequestModel.cs
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct VerifyPasswordData {
struct SecretVerificationRequest {
MasterPasswordHash: String,
}
#[post("/accounts/verify-password", data = "<data>")]
fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers, _conn: DbConn) -> EmptyResult {
let data: VerifyPasswordData = data.into_inner().data;
fn verify_password(data: JsonUpcase<SecretVerificationRequest>, headers: Headers) -> EmptyResult {
let data: SecretVerificationRequest = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
@@ -658,3 +666,37 @@ fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers, _conn
Ok(())
}
async fn _api_key(
data: JsonUpcase<SecretVerificationRequest>,
rotate: bool,
headers: Headers,
conn: DbConn,
) -> JsonResult {
let data: SecretVerificationRequest = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
user.save(&conn).await.expect("Error saving API key");
}
Ok(Json(json!({
"ApiKey": user.api_key,
"Object": "apiKey",
})))
}
#[post("/accounts/api-key", data = "<data>")]
async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn).await
}
#[post("/accounts/rotate-api-key", data = "<data>")]
async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
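
The two routes above expose and rotate the user's personal API key. How a client calls them is outside this diff, but a hedged sketch could look like this, assuming the core API is mounted under /api as usual, a valid Bearer access token, and the upper-cased JSON field names these endpoints use (URL, token, and hash are placeholders; assumes reqwest with its json feature, serde_json, and tokio):

use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    // POST /api/accounts/api-key with the master password hash; per the handler
    // above, the server replies with {"ApiKey": "...", "Object": "apiKey"}.
    let body: serde_json::Value = client
        .post("https://vault.example.com/api/accounts/api-key")
        .bearer_auth("<access token>")
        .json(&json!({ "MasterPasswordHash": "<base64 hash>" }))
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;
    println!("ApiKey: {}", body["ApiKey"]);
    Ok(())
}
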

File diff suppressed because it is too large

View File

@@ -1,24 +1,837 @@
use chrono::{Duration, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use std::borrow::Borrow;
use crate::{api::JsonResult, auth::Headers, db::DbConn};
use crate::{
api::{core::CipherSyncData, EmptyResult, JsonResult, JsonUpcase, NumberOrString},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
};
use futures::{stream, stream::StreamExt};
pub fn routes() -> Vec<Route> {
routes![get_contacts,]
routes![
get_contacts,
get_grantees,
get_emergency_access,
put_emergency_access,
delete_emergency_access,
post_delete_emergency_access,
send_invite,
resend_invite,
accept_invite,
confirm_emergency_access,
initiate_emergency_access,
approve_emergency_access,
reject_emergency_access,
takeover_emergency_access,
password_emergency_access,
view_emergency_access,
policies_emergency_access,
]
}
/// This endpoint is expected to return at least something.
/// If we return an error message, that will trigger error toasts for the user.
/// To prevent this we just return an empty JSON result with no Data.
/// When this feature gets implemented it also needs to return this empty Data
/// instead of throwing an error/4XX unless it really is an error.
// region get
#[get("/emergency-access/trusted")]
fn get_contacts(_headers: Headers, _conn: DbConn) -> JsonResult {
debug!("Emergency access is not supported.");
async fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list_json =
stream::iter(EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn).await)
.then(|e| async {
let e = e; // Move out this single variable
e.to_json_grantee_details(&conn).await
})
.collect::<Vec<Value>>()
.await;
Ok(Json(json!({
"Data": [],
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/emergency-access/granted")]
async fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list_json =
stream::iter(EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn).await)
.then(|e| async {
let e = e; // Move out this single variable
e.to_json_grantor_details(&conn).await
})
.collect::<Vec<Value>>()
.await;
Ok(Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn).await)),
None => err!("Emergency access not valid."),
}
}
// endregion
// region put/post
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessUpdateData {
Type: NumberOrString,
WaitTimeDays: i32,
KeyEncrypted: Option<String>,
}
#[put("/emergency-access/<emer_id>", data = "<data>")]
async fn put_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
conn: DbConn,
) -> JsonResult {
post_emergency_access(emer_id, data, conn).await
}
#[post("/emergency-access/<emer_id>", data = "<data>")]
async fn post_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessUpdateData>,
conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
};
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
emergency_access.atype = new_type;
emergency_access.wait_time_days = data.WaitTimeDays;
emergency_access.key_encrypted = data.KeyEncrypted;
emergency_access.save(&conn).await?;
Ok(Json(emergency_access.to_json()))
}
// endregion
// region delete
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
}
emer
}
None => err!("Emergency access not valid."),
};
emergency_access.delete(&conn).await?;
Ok(())
}
#[post("/emergency-access/<emer_id>/delete")]
async fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn).await
}
// endregion
// region invite
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessInviteData {
Email: String,
Type: NumberOrString,
WaitTimeDays: i32,
}
#[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase();
let wait_time_days = data.WaitTimeDays;
let emergency_access_status = EmergencyAccessStatus::Invited as i32;
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
let grantor_user = headers.user;
// avoid setting yourself as emergency contact
if email == grantor_user.email {
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", email))
}
if !CONFIG.is_email_domain_allowed(&email) {
err!("Email domain not eligible for invitations")
}
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(email.clone());
invitation.save(&conn).await?;
}
let mut user = User::new(email.clone());
user.save(&conn).await?;
user
}
Some(user) => user,
};
if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
&grantor_user.uuid,
&grantee_user.uuid,
&grantee_user.email,
&conn,
)
.await
.is_some()
{
err!(format!("Grantee user already invited: {}", email))
}
let mut new_emergency_access = EmergencyAccess::new(
grantor_user.uuid.clone(),
Some(grantee_user.email.clone()),
emergency_access_status,
new_type,
wait_time_days,
);
new_emergency_access.save(&conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&grantee_user.email,
&grantee_user.uuid,
Some(new_emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
} else {
// Automatically mark user as accepted if no email invites
match User::find_by_mail(&email, &conn).await {
Some(user) => {
match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()).await {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
None => err!("Grantee user not found."),
}
}
Ok(())
}
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.grantor_uuid != headers.user.uuid {
err!("Emergency access not valid.");
}
if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
err!("The grantee user is already accepted or confirmed to the organization");
}
let email = match emergency_access.email.clone() {
Some(email) => email,
None => err!("Email not valid."),
};
let grantee_user = match User::find_by_mail(&email, &conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
let grantor_user = headers.user;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&email,
&grantor_user.uuid,
Some(emergency_access.uuid),
Some(grantor_user.name.clone()),
Some(grantor_user.email),
)?;
} else {
if Invitation::find_by_mail(&email, &conn).await.is_none() {
let invitation = Invitation::new(email);
invitation.save(&conn).await?;
}
// Automatically mark user as accepted if no email invites
match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow())
.await
{
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
}
Ok(())
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AcceptData {
Token: String,
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
let claims = decode_emergency_access_invite(token)?;
let grantee_user = match User::find_by_mail(&claims.email, &conn).await {
Some(user) => {
Invitation::take(&claims.email, &conn).await;
user
}
None => err!("Invited user not found"),
};
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
&& (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
&& (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
{
match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn).await {
Ok(v) => (v),
Err(e) => err!(e.to_string()),
}
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email)?;
}
Ok(())
} else {
err!("Emergency access invitation error.")
}
}
async fn accept_invite_process(
grantee_uuid: String,
emer_id: String,
email: Option<String>,
conn: &DbConn,
) -> EmptyResult {
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
let emer_email = emergency_access.email;
if emer_email.is_none() || emer_email != email {
err!("User email does not match invite.");
}
if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(grantee_uuid);
emergency_access.email = None;
emergency_access.save(conn).await
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct ConfirmData {
Key: String,
}
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
async fn confirm_emergency_access(
emer_id: String,
data: JsonUpcase<ConfirmData>,
headers: Headers,
conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
let confirming_user = headers.user;
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Accepted as i32
|| emergency_access.grantor_uuid != confirming_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.key_encrypted = Some(key);
emergency_access.email = None;
emergency_access.save(&conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
// endregion
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|| emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
let now = Utc::now().naive_utc();
emergency_access.status = EmergencyAccessStatus::RecoveryInitiated as i32;
emergency_access.updated_at = now;
emergency_access.recovery_initiated_at = Some(now);
emergency_access.last_notification_at = Some(now);
emergency_access.save(&conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_initiated(
&grantor_user.email,
&initiating_user.name,
emergency_access.get_type_as_str(),
&emergency_access.wait_time_days.to_string(),
)?;
}
Ok(Json(emergency_access.to_json()))
}
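// The grantor manually approves a recovery request that is still in the RecoveryInitiated state.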
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let approving_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|| emergency_access.grantor_uuid != approving_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
emergency_access.save(&conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
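// The grantor rejects an initiated or already approved recovery request; the entry returns to the Confirmed state.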
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let rejecting_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|| emergency_access.grantor_uuid != rejecting_user.uuid
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.save(&conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name)?;
}
Ok(Json(emergency_access.to_json()))
} else {
err!("Grantee user not found.")
}
}
// endregion
// region action
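// View access: returns all of the grantor's owned ciphers together with the encrypted key.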
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let host = headers.host;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
err!("Emergency access not valid.")
}
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn).await;
let cipher_sync_data = CipherSyncData::new(&emergency_access.grantor_uuid, &ciphers, &conn).await;
let ciphers_json = stream::iter(ciphers)
.then(|c| async {
let c = c; // Move out this single variable
c.to_json(&host, &emergency_access.grantor_uuid, Some(&cipher_sync_data), &conn).await
})
.collect::<Vec<Value>>()
.await;
Ok(Json(json!({
"Ciphers": ciphers_json,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessView",
})))
}
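// Takeover access: returns the grantor's KDF settings and encrypted key so the grantee can set a new master password.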
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
Ok(Json(json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
NewMasterPasswordHash: String,
Key: String,
}
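// Completes a takeover: sets the grantor's new master password, removes their two-factor providers and drops their non-owner organization memberships.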
#[post("/emergency-access/<emer_id>/password", data = "<data>")]
async fn password_emergency_access(
emer_id: String,
data: JsonUpcase<EmergencyAccessPasswordData>,
headers: Headers,
conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
let key = data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
// change grantor_user password
grantor_user.set_password(new_master_password_hash, None);
grantor_user.akey = key;
grantor_user.save(&conn).await?;
// Disable TwoFactor providers since they will otherwise block logins
TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn).await?;
// Remove grantor from all organisations unless Owner
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn).await {
if user_org.atype != UserOrgType::Owner as i32 {
user_org.delete(&conn).await?;
}
}
Ok(())
}
// endregion
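// Returns the org policies of the grantor's confirmed organization memberships to the takeover grantee.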
#[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
};
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
let policies_json: Vec<Value> = policies.await.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
"Object": "list",
"ContinuationToken": null
})))
}
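// A request is valid when the requester is the grantee, recovery has been approved and the requested access type matches.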
fn is_valid_request(
emergency_access: &EmergencyAccess,
requesting_user_uuid: String,
requested_access_type: EmergencyAccessType,
) -> bool {
emergency_access.grantee_uuid == Some(requesting_user_uuid)
&& emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
&& emergency_access.atype == requested_access_type as i32
}
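// Emergency access endpoints are gated behind the `emergency_access_allowed` config setting.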
fn check_emergency_access_allowed() -> EmptyResult {
if !CONFIG.emergency_access_allowed() {
err!("Emergency access is not allowed.")
}
Ok(())
}
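// Scheduled job: auto-approves recovery requests whose wait period (`wait_time_days`) has elapsed and mails both parties.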
pub async fn emergency_request_timeout_job(pool: DbPool) {
debug!("Start emergency_request_timeout_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request timeout to approve");
}
for mut emer in emergency_access_list {
if emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days(emer.wait_time_days as i64)
{
emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
emer.save(&conn).await.expect("Cannot save emergency access on job");
if CONFIG.mail_enabled() {
// get the grantor user to send the recovery timed-out email
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &conn).await.expect("Grantor user not found.");
// get the grantee user to send the recovery approved email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.await
.expect("Grantee user not found.");
mail::send_emergency_access_recovery_timed_out(
&grantor_user.email,
&grantee_user.name.clone(),
emer.get_type_as_str(),
)
.expect("Error on sending email");
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name.clone())
.expect("Error on sending email");
}
}
}
} else {
error!("Failed to get DB connection while searching emergency request timed out")
}
}
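// Scheduled job: from one day before the wait period ends, reminds the grantor at most once per day that recovery is pending.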
pub async fn emergency_notification_reminder_job(pool: DbPool) {
debug!("Start emergency_notification_reminder_job");
if !CONFIG.emergency_access_allowed() {
return;
}
if let Ok(conn) = pool.get().await {
let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn).await;
if emergency_access_list.is_empty() {
debug!("No emergency request reminder notification to send");
}
for mut emer in emergency_access_list {
if (emer.recovery_initiated_at.is_some()
&& Utc::now().naive_utc()
>= emer.recovery_initiated_at.unwrap() + Duration::days((emer.wait_time_days as i64) - 1))
&& (emer.last_notification_at.is_none()
|| (emer.last_notification_at.is_some()
&& Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
{
emer.save(&conn).await.expect("Cannot save emergency access on job");
if CONFIG.mail_enabled() {
// get the grantor user to send the reminder email
let grantor_user =
User::find_by_uuid(&emer.grantor_uuid, &conn).await.expect("Grantor user not found.");
// get the grantee user, whose name is included in the reminder email
let grantee_user =
User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
.await
.expect("Grantee user not found.");
mail::send_emergency_access_recovery_reminder(
&grantor_user.email,
&grantee_user.name.clone(),
emer.get_type_as_str(),
&emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
)
.expect("Error on sending email");
}
}
}
} else {
error!("Failed to get DB connection while searching emergency notification reminder")
}
}


@@ -1,4 +1,4 @@
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
@@ -12,9 +12,8 @@ pub fn routes() -> Vec<rocket::Route> {
}
#[get("/folders")]
fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &conn);
async fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &conn).await;
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
Json(json!({
@@ -25,8 +24,8 @@ fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
}
#[get("/folders/<uuid>")]
fn get_folder(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &conn) {
async fn get_folder(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(&uuid, &conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -45,27 +44,39 @@ pub struct FolderData {
}
#[post("/folders", data = "<data>")]
fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = Folder::new(headers.user.uuid, data.Name);
folder.save(&conn)?;
folder.save(&conn).await?;
nt.send_folder_update(UpdateType::FolderCreate, &folder);
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>", data = "<data>")]
fn post_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
put_folder(uuid, data, headers, conn, nt)
async fn post_folder(
uuid: String,
data: JsonUpcase<FolderData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
put_folder(uuid, data, headers, conn, nt).await
}
#[put("/folders/<uuid>", data = "<data>")]
fn put_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
async fn put_folder(
uuid: String,
data: JsonUpcase<FolderData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: FolderData = data.into_inner().data;
let mut folder = match Folder::find_by_uuid(&uuid, &conn) {
let mut folder = match Folder::find_by_uuid(&uuid, &conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -76,20 +87,20 @@ fn put_folder(uuid: String, data: JsonUpcase<FolderData>, headers: Headers, conn
folder.name = data.Name;
folder.save(&conn)?;
folder.save(&conn).await?;
nt.send_folder_update(UpdateType::FolderUpdate, &folder);
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>/delete")]
fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_folder(uuid, headers, conn, nt)
async fn delete_folder_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
delete_folder(uuid, headers, conn, nt).await
}
#[delete("/folders/<uuid>")]
fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &conn) {
async fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(&uuid, &conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
@@ -99,7 +110,7 @@ fn delete_folder(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> Em
}
// Delete the actual folder entry
folder.delete(&conn)?;
folder.delete(&conn).await?;
nt.send_folder_update(UpdateType::FolderDelete, &folder);
Ok(())


@@ -1,4 +1,4 @@
mod accounts;
pub mod accounts;
mod ciphers;
mod emergency_access;
mod folders;
@@ -7,11 +7,16 @@ mod sends;
pub mod two_factor;
pub use ciphers::purge_trashed_ciphers;
pub use ciphers::CipherSyncData;
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
pub fn routes() -> Vec<Route> {
let mut mod_routes =
routes![clear_device_token, put_device_token, get_eq_domains, post_eq_domains, put_eq_domains, hibp_breach,];
let mut device_token_routes = routes![clear_device_token, put_device_token];
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
let mut hibp_routes = routes![hibp_breach];
let mut meta_routes = routes![alive, now, version];
let mut routes = Vec::new();
routes.append(&mut accounts::routes());
@@ -21,7 +26,10 @@ pub fn routes() -> Vec<Route> {
routes.append(&mut organizations::routes());
routes.append(&mut two_factor::routes());
routes.append(&mut sends::routes());
routes.append(&mut mod_routes);
routes.append(&mut device_token_routes);
routes.append(&mut eq_domains_routes);
routes.append(&mut hibp_routes);
routes.append(&mut meta_routes);
routes
}
@@ -29,8 +37,8 @@ pub fn routes() -> Vec<Route> {
//
// Move this somewhere else
//
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
@@ -119,7 +127,7 @@ struct EquivDomainData {
}
#[post("/settings/domains", data = "<data>")]
fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EquivDomainData = data.into_inner().data;
let excluded_globals = data.ExcludedGlobalEquivalentDomains.unwrap_or_default();
@@ -131,18 +139,18 @@ fn post_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: Db
user.excluded_globals = to_string(&excluded_globals).unwrap_or_else(|_| "[]".to_string());
user.equivalent_domains = to_string(&equivalent_domains).unwrap_or_else(|_| "[]".to_string());
user.save(&conn)?;
user.save(&conn).await?;
Ok(Json(json!({})))
}
#[put("/settings/domains", data = "<data>")]
fn put_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
post_eq_domains(data, headers, conn)
async fn put_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbConn) -> JsonResult {
post_eq_domains(data, headers, conn).await
}
#[get("/hibp/breach?<username>")]
fn hibp_breach(username: String) -> JsonResult {
async fn hibp_breach(username: String) -> JsonResult {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false&includeUnverified=false",
username
@@ -151,14 +159,14 @@ fn hibp_breach(username: String) -> JsonResult {
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
let hibp_client = get_reqwest_client();
let res = hibp_client.get(&url).header("hibp-api-key", api_key).send()?;
let res = hibp_client.get(&url).header("hibp-api-key", api_key).send().await?;
// If we get a 404, return a 404, it means no breached accounts
if res.status() == 404 {
return Err(Error::empty().with_code(404));
}
let value: Value = res.error_for_status()?.json()?;
let value: Value = res.error_for_status()?.json().await?;
Ok(Json(value))
} else {
Ok(Json(json!([{
@@ -168,7 +176,7 @@ fn hibp_breach(username: String) -> JsonResult {
"BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"LogoPath": "bwrs_static/hibp.png",
"LogoPath": "vw_static/hibp.png",
"PwnCount": 0,
"DataClasses": [
"Error - No API key set!"
@@ -176,3 +184,19 @@ fn hibp_breach(username: String) -> JsonResult {
}])))
}
}
// We use DbConn here to let the alive healthcheck also verify the database connection.
#[get("/alive")]
fn alive(_conn: DbConn) -> Json<String> {
now()
}
#[get("/now")]
pub fn now() -> Json<String> {
Json(crate::util::format_date(&chrono::Utc::now().naive_utc()))
}
#[get("/version")]
fn version() -> Json<&'static str> {
Json(crate::VERSION.unwrap_or_default())
}

File diff suppressed because it is too large


@@ -1,14 +1,15 @@
use std::{io::Read, path::Path};
use std::path::Path;
use chrono::{DateTime, Duration, Utc};
use multipart::server::{save::SavedData, Multipart, SaveResult};
use rocket::{http::ContentType, response::NamedFile, Data};
use rocket_contrib::json::Json;
use rocket::form::Form;
use rocket::fs::NamedFile;
use rocket::fs::TempFile;
use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
auth::{Headers, Host},
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, UpdateType},
auth::{ClientIp, Headers, Host},
db::{models::*, DbConn, DbPool},
util::SafeString,
CONFIG,
@@ -18,6 +19,8 @@ const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer availab
pub fn routes() -> Vec<rocket::Route> {
routes![
get_sends,
get_send,
post_send,
post_send_file,
post_access,
@@ -29,10 +32,10 @@ pub fn routes() -> Vec<rocket::Route> {
]
}
pub fn purge_sends(pool: DbPool) {
pub async fn purge_sends(pool: DbPool) {
debug!("Purging sends");
if let Ok(conn) = pool.get() {
Send::purge(&conn);
if let Ok(conn) = pool.get().await {
Send::purge(&conn).await;
} else {
error!("Failed to get DB connection while purging sends")
}
@@ -40,21 +43,21 @@ pub fn purge_sends(pool: DbPool) {
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendData {
pub Type: i32,
pub Key: String,
pub Password: Option<String>,
pub MaxAccessCount: Option<i32>,
pub ExpirationDate: Option<DateTime<Utc>>,
pub DeletionDate: DateTime<Utc>,
pub Disabled: bool,
pub HideEmail: Option<bool>,
struct SendData {
Type: i32,
Key: String,
Password: Option<String>,
MaxAccessCount: Option<NumberOrString>,
ExpirationDate: Option<DateTime<Utc>>,
DeletionDate: DateTime<Utc>,
Disabled: bool,
HideEmail: Option<bool>,
// Data field
pub Name: String,
pub Notes: Option<String>,
pub Text: Option<Value>,
pub File: Option<Value>,
Name: String,
Notes: Option<String>,
Text: Option<Value>,
File: Option<Value>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -65,10 +68,10 @@ pub struct SendData {
///
/// There is also a Vaultwarden-specific `sends_allowed` config setting that
/// controls this policy globally.
fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let policy_type = OrgPolicyType::DisableSend;
if !CONFIG.sends_allowed() || OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
if !CONFIG.sends_allowed() || OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn).await {
err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
}
Ok(())
@@ -80,10 +83,10 @@ fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult
/// but is allowed to remove this option from an existing Send.
///
/// Ref: https://bitwarden.com/help/article/policies/#send-options
fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &DbConn) -> EmptyResult {
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let hide_email = data.HideEmail.unwrap_or(false);
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn) {
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn).await {
err!(
"Due to an Enterprise Policy, you are not allowed to hide your email address \
from recipients when creating or editing a Send."
@@ -92,7 +95,7 @@ fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &
Ok(())
}
fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
async fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
let data_val = if data.Type == SendType::Text as i32 {
data.Text
} else if data.Type == SendType::File as i32 {
@@ -114,10 +117,13 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
);
}
let mut send = Send::new(data.Type, data.Name, data_str, data.Key, data.DeletionDate.naive_utc());
let mut send = Send::new(data.Type, data.Name, data_str, data.Key, data.DeletionDate.naive_utc()).await;
send.user_uuid = Some(user_uuid);
send.notes = data.Notes;
send.max_access_count = data.MaxAccessCount;
send.max_access_count = match data.MaxAccessCount {
Some(m) => Some(m.into_i32()?),
_ => None,
};
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.disabled = data.Disabled;
send.hide_email = data.HideEmail;
@@ -128,43 +134,67 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
Ok(send)
}
#[get("/sends")]
async fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
let sends = Send::find_by_user(&headers.user.uuid, &conn);
let sends_json: Vec<Value> = sends.await.iter().map(|s| s.to_json()).collect();
Json(json!({
"Data": sends_json,
"Object": "list",
"ContinuationToken": null
}))
}
#[get("/sends/<uuid>")]
async fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(&uuid, &conn).await {
Some(send) => send,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
Ok(Json(send.to_json()))
}
#[post("/sends", data = "<data>")]
fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn)?;
enforce_disable_hide_email_policy(&data, &headers, &conn).await?;
if data.Type == SendType::File as i32 {
err!("File sends should use /api/sends/file")
}
let mut send = create_send(data, headers.user.uuid.clone())?;
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
let mut send = create_send(data, headers.user.uuid).await?;
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn).await);
Ok(Json(send.to_json()))
}
#[derive(FromForm)]
struct UploadData<'f> {
model: Json<crate::util::UpCase<SendData>>,
data: TempFile<'f>,
}
#[post("/sends/file", format = "multipart/form-data", data = "<data>")]
fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
let boundary = content_type.params().next().expect("No boundary provided").1;
let UploadData {
model,
mut data,
} = data.into_inner();
let model = model.into_inner().data;
let mut mpart = Multipart::with_body(data.open(), boundary);
// First entry is the SendData JSON
let mut model_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "model" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let mut buf = String::new();
model_entry.data.read_to_string(&mut buf)?;
let data = serde_json::from_str::<crate::util::UpCase<SendData>>(&buf)?;
enforce_disable_hide_email_policy(&data.data, &headers, &conn)?;
enforce_disable_hide_email_policy(&model, &headers, &conn).await?;
// Get the file length and add an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
@@ -172,7 +202,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn);
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn).await;
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
@@ -181,51 +211,33 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
None => SIZE_525_MB,
};
// Create the Send
let mut send = create_send(data.data, headers.user.uuid.clone())?;
let file_id = crate::crypto::generate_send_id();
let mut send = create_send(model, headers.user.uuid).await?;
if send.atype != SendType::File as i32 {
err!("Send content is not a file");
}
let file_path = Path::new(&CONFIG.sends_folder()).join(&send.uuid).join(&file_id);
let size = data.len();
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
}
// Read the data entry and save the file
let mut data_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "data" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let file_id = crate::crypto::generate_send_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
data.persist_to(&file_path).await?;
let size = match data_entry.data.save().memory_threshold(0).size_limit(size_limit).with_path(&file_path) {
SaveResult::Full(SavedData::File(_, size)) => size as i32,
SaveResult::Full(other) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment is not a file: {:?}", other));
}
SaveResult::Partial(_, reason) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment storage limit exceeded with this file: {:?}", reason));
}
SaveResult::Error(e) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Error: {:?}", e));
}
};
// Set ID and sizes
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id));
o.insert(String::from("Size"), Value::Number(size.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size as i32)));
}
send.data = serde_json::to_string(&data_value)?;
// Save the changes in the database
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await);
Ok(Json(send.to_json()))
}
@@ -237,8 +249,8 @@ pub struct SendAccessData {
}
#[post("/sends/access/<access_id>", data = "<data>")]
fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &conn) {
async fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn, ip: ClientIp) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -266,8 +278,8 @@ fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn
if send.password_hash.is_some() {
match data.into_inner().data.Password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password."),
None => err_code!("Password not provided", 401),
Some(_) => err!("Invalid password", format!("IP: {}.", ip.ip)),
None => err_code!("Password not provided", format!("IP: {}.", ip.ip), 401),
}
}
@@ -276,20 +288,20 @@ fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn
send.access_count += 1;
}
send.save(&conn)?;
send.save(&conn).await?;
Ok(Json(send.to_json_access(&conn)))
Ok(Json(send.to_json_access(&conn).await))
}
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
fn post_access_file(
async fn post_access_file(
send_id: String,
file_id: String,
data: JsonUpcase<SendAccessData>,
host: Host,
conn: DbConn,
) -> JsonResult {
let mut send = match Send::find_by_uuid(&send_id, &conn) {
let mut send = match Send::find_by_uuid(&send_id, &conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
@@ -324,7 +336,7 @@ fn post_access_file(
send.access_count += 1;
send.save(&conn)?;
send.save(&conn).await?;
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
@@ -336,23 +348,29 @@ fn post_access_file(
}
#[get("/sends/<send_id>/<file_id>?<t>")]
fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
async fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(&t) {
if claims.sub == format!("{}/{}", send_id, file_id) {
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok();
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).await.ok();
}
}
None
}
#[put("/sends/<id>", data = "<data>")]
fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn put_send(
id: String,
data: JsonUpcase<SendData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
let data: SendData = data.into_inner().data;
enforce_disable_hide_email_policy(&data, &headers, &conn)?;
enforce_disable_hide_email_policy(&data, &headers, &conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn) {
let mut send = match Send::find_by_uuid(&id, &conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -386,7 +404,10 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
send.akey = data.Key;
send.deletion_date = data.DeletionDate.naive_utc();
send.notes = data.Notes;
send.max_access_count = data.MaxAccessCount;
send.max_access_count = match data.MaxAccessCount {
Some(m) => Some(m.into_i32()?),
_ => None,
};
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.hide_email = data.HideEmail;
send.disabled = data.Disabled;
@@ -396,15 +417,15 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
send.set_password(Some(&password));
}
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await);
Ok(Json(send.to_json()))
}
#[delete("/sends/<id>")]
fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &conn) {
async fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -413,17 +434,17 @@ fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyR
err!("Send is not owned by user")
}
send.delete(&conn)?;
nt.send_user_update(UpdateType::SyncSendDelete, &headers.user);
send.delete(&conn).await?;
nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn).await);
Ok(())
}
#[put("/sends/<id>/remove-password")]
fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
async fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &conn).await?;
let mut send = match Send::find_by_uuid(&id, &conn) {
let mut send = match Send::find_by_uuid(&id, &conn).await {
Some(s) => s,
None => err!("Send not found"),
};
@@ -433,8 +454,8 @@ fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -
}
send.set_password(None);
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
send.save(&conn).await?;
nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn).await);
Ok(Json(send.to_json()))
}


@@ -1,6 +1,6 @@
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{
@@ -21,7 +21,7 @@ pub fn routes() -> Vec<Route> {
}
#[post("/two-factor/get-authenticator", data = "<data>")]
fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -30,7 +30,7 @@ fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn
}
let type_ = TwoFactorType::Authenticator as i32;
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn);
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await;
let (enabled, key) = match twofactor {
Some(tf) => (true, tf.data),
@@ -53,7 +53,7 @@ struct EnableAuthenticatorData {
}
#[post("/two-factor/authenticator", data = "<data>")]
fn activate_authenticator(
async fn activate_authenticator(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
@@ -62,7 +62,7 @@ fn activate_authenticator(
let data: EnableAuthenticatorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let key = data.Key;
let token = data.Token.into_i32()? as u64;
let token = data.Token.into_string();
let mut user = headers.user;
@@ -81,9 +81,9 @@ fn activate_authenticator(
}
// Validate the token provided with the key, and save new twofactor
validate_totp_code(&user.uuid, token, &key.to_uppercase(), &ip, &conn)?;
validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -93,73 +93,80 @@ fn activate_authenticator(
}
#[put("/two-factor/authenticator", data = "<data>")]
fn activate_authenticator_put(
async fn activate_authenticator_put(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
ip: ClientIp,
conn: DbConn,
) -> JsonResult {
activate_authenticator(data, headers, ip, conn)
activate_authenticator(data, headers, ip, conn).await
}
pub fn validate_totp_code_str(
pub async fn validate_totp_code_str(
user_uuid: &str,
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &DbConn,
) -> EmptyResult {
let totp_code: u64 = match totp_code.parse() {
Ok(code) => code,
_ => err!("TOTP code is not a number"),
};
if !totp_code.chars().all(char::is_numeric) {
err!("TOTP code is not a number");
}
validate_totp_code(user_uuid, totp_code, secret, ip, conn)
validate_totp_code(user_uuid, totp_code, secret, ip, conn).await
}
pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
use oath::{totp_raw_custom_time, HashType};
pub async fn validate_totp_code(
user_uuid: &str,
totp_code: &str,
secret: &str,
ip: &ClientIp,
conn: &DbConn,
) -> EmptyResult {
use totp_lite::{totp_custom, Sha1};
let decoded_secret = match BASE32.decode(secret.as_bytes()) {
Ok(s) => s,
Err(_) => err!("Invalid TOTP secret"),
};
let mut twofactor = match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn) {
Some(tf) => tf,
_ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
};
// Get the current system time in UNIX Epoch (UTC)
let current_time = chrono::Utc::now();
let current_timestamp = current_time.timestamp();
let mut twofactor =
match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn).await {
Some(tf) => tf,
_ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
};
// The amount of steps back and forward in time
// Also check if we need to disable time drifted TOTP codes.
// If that is the case, we set the steps to 0 so only the current TOTP is valid.
let steps = !CONFIG.authenticator_disable_time_drift() as i64;
// Get the current system time in UNIX Epoch (UTC)
let current_time = chrono::Utc::now();
let current_timestamp = current_time.timestamp();
for step in -steps..=steps {
let time_step = current_timestamp / 30i64 + step;
// We need to calculate the time offset and cast it as an i128.
// Else we can't do math with it on a default u64 variable.
// We need to calculate the time offset and cast it as a u64.
// Since we only have times into the future and the totp generator needs a u64 instead of the default i64.
let time = (current_timestamp + step * 30i64) as u64;
let generated = totp_raw_custom_time(&decoded_secret, 6, 0, 30, time, &HashType::SHA1);
let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
// Check that the given code equals the generated one and that the time_step is larger than the last used one.
if generated == totp_code && time_step > twofactor.last_used as i64 {
// If the step does not equals 0 the time is drifted either server or client side.
if step != 0 {
info!("TOTP Time drift detected. The step offset is {}", step);
warn!("TOTP Time drift detected. The step offset is {}", step);
}
// Save the last used time step so only TOTP time steps higher than this one are allowed.
// This will also save a newly created twofactor if the code is correct.
twofactor.last_used = time_step as i32;
twofactor.save(conn)?;
twofactor.save(conn).await?;
return Ok(());
} else if generated == totp_code && time_step <= twofactor.last_used as i64 {
warn!("This or a TOTP code within {} steps back and forward has already been used!", steps);
warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
}
}


@@ -1,7 +1,7 @@
use chrono::Utc;
use data_encoding::BASE64;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase, PasswordData},
@@ -89,14 +89,14 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";
#[post("/two-factor/get-duo", data = "<data>")]
fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let data = get_user_duo_data(&headers.user.uuid, &conn);
let data = get_user_duo_data(&headers.user.uuid, &conn).await;
let (enabled, data) = match data {
DuoStatus::Global(_) => (true, Some(DuoData::secret())),
@@ -152,7 +152,7 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
}
#[post("/two-factor/duo", data = "<data>")]
fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableDuoData = data.into_inner().data;
let mut user = headers.user;
@@ -163,7 +163,7 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
let (data, data_str) = if check_duo_fields_custom(&data) {
let data_req: DuoData = data.into();
let data_str = serde_json::to_string(&data_req)?;
duo_api_request("GET", "/auth/v2/check", "", &data_req).map_res("Failed to validate Duo credentials")?;
duo_api_request("GET", "/auth/v2/check", "", &data_req).await.map_res("Failed to validate Duo credentials")?;
(data_req.obscure(), data_str)
} else {
(DuoData::secret(), String::new())
@@ -171,9 +171,9 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
let type_ = TwoFactorType::Duo;
let twofactor = TwoFactor::new(user.uuid.clone(), type_, data_str);
twofactor.save(&conn)?;
twofactor.save(&conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &conn).await;
Ok(Json(json!({
"Enabled": true,
@@ -185,11 +185,11 @@ fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn)
}
#[put("/two-factor/duo", data = "<data>")]
fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn)
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn).await
}
fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
use reqwest::{header, Method};
use std::str::FromStr;
@@ -209,7 +209,8 @@ fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> Em
.basic_auth(username, Some(password))
.header(header::USER_AGENT, "vaultwarden:Duo/1.0 (Rust)")
.header(header::DATE, date)
.send()?
.send()
.await?
.error_for_status()?;
Ok(())
@@ -222,11 +223,11 @@ const AUTH_PREFIX: &str = "AUTH";
const DUO_PREFIX: &str = "TX";
const APP_PREFIX: &str = "APP";
fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
async fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
let type_ = TwoFactorType::Duo as i32;
// If the user doesn't have an entry, disabled
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn).await {
Some(t) => t,
None => return DuoStatus::Disabled(DuoData::global().is_some()),
};
@@ -246,19 +247,20 @@ fn get_user_duo_data(uuid: &str, conn: &DbConn) -> DuoStatus {
}
// let (ik, sk, ak, host) = get_duo_keys();
fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, String, String, String)> {
let data = User::find_by_mail(email, conn)
.and_then(|u| get_user_duo_data(&u.uuid, conn).data())
.or_else(DuoData::global)
.map_res("Can't fetch Duo keys")?;
async fn get_duo_keys_email(email: &str, conn: &DbConn) -> ApiResult<(String, String, String, String)> {
let data = match User::find_by_mail(email, conn).await {
Some(u) => get_user_duo_data(&u.uuid, conn).await.data(),
_ => DuoData::global(),
}
.map_res("Can't fetch Duo Keys")?;
Ok((data.ik, data.sk, CONFIG.get_duo_akey(), data.host))
}
pub fn generate_duo_signature(email: &str, conn: &DbConn) -> ApiResult<(String, String)> {
pub async fn generate_duo_signature(email: &str, conn: &DbConn) -> ApiResult<(String, String)> {
let now = Utc::now().timestamp();
let (ik, sk, ak, host) = get_duo_keys_email(email, conn)?;
let (ik, sk, ak, host) = get_duo_keys_email(email, conn).await?;
let duo_sign = sign_duo_values(&sk, email, &ik, DUO_PREFIX, now + DUO_EXPIRE);
let app_sign = sign_duo_values(&ak, email, &ik, APP_PREFIX, now + APP_EXPIRE);
@@ -273,7 +275,7 @@ fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64
format!("{}|{}", cookie, crypto::hmac_sign(key, &cookie))
}
pub fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyResult {
// email is as entered by the user, so it needs to be normalized before
// comparison with auth_user below.
let email = &email.to_lowercase();
@@ -288,7 +290,7 @@ pub fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyRe
let now = Utc::now().timestamp();
let (ik, sk, ak, _host) = get_duo_keys_email(email, conn)?;
let (ik, sk, ak, _host) = get_duo_keys_email(email, conn).await?;
let auth_user = parse_duo_values(&sk, auth_sig, &ik, AUTH_PREFIX, now)?;
let app_user = parse_duo_values(&ak, app_sig, &ik, APP_PREFIX, now)?;


@@ -1,6 +1,6 @@
use chrono::{Duration, NaiveDateTime, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use crate::{
api::{core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase, PasswordData},
@@ -28,13 +28,13 @@ struct SendEmailLoginData {
/// User is trying to login and wants to use email 2FA.
/// Does not require Bearer token
#[post("/two-factor/send-email-login", data = "<data>")] // JsonResult
fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> EmptyResult {
async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> EmptyResult {
let data: SendEmailLoginData = data.into_inner().data;
use crate::db::models::User;
// Get the user
let user = match User::find_by_mail(&data.Email, &conn) {
let user = match User::find_by_mail(&data.Email, &conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -48,22 +48,23 @@ fn send_email_login(data: JsonUpcase<SendEmailLoginData>, conn: DbConn) -> Empty
err!("Email 2FA is disabled")
}
send_token(&user.uuid, &conn)?;
send_token(&user.uuid, &conn).await?;
Ok(())
}
/// Generate the token, save the data for later verification and send email to user
pub fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
let type_ = TwoFactorType::Email as i32;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, type_, conn).map_res("Two factor not found")?;
let mut twofactor =
TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await.map_res("Two factor not found")?;
let generated_token = crypto::generate_token(CONFIG.email_token_size())?;
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let mut twofactor_data = EmailTokenData::from_json(&twofactor.data)?;
twofactor_data.set_token(generated_token);
twofactor.data = twofactor_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
@@ -72,7 +73,7 @@ pub fn send_token(user_uuid: &str, conn: &DbConn) -> EmptyResult {
/// When user clicks on Manage email 2FA show the user the related information
#[post("/two-factor/get-email", data = "<data>")]
fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let user = headers.user;
@@ -80,14 +81,17 @@ fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) ->
err!("Invalid password");
}
let type_ = TwoFactorType::Email as i32;
let enabled = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
Some(x) => x.enabled,
_ => false,
};
let (enabled, mfa_email) =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn).await {
Some(x) => {
let twofactor_data = EmailTokenData::from_json(&x.data)?;
(true, json!(twofactor_data.email))
}
_ => (false, json!(null)),
};
Ok(Json(json!({
"Email": user.email,
"Email": mfa_email,
"Enabled": enabled,
"Object": "twoFactorEmail"
})))
@@ -103,7 +107,7 @@ struct SendEmailData {
/// Send a verification email to the specified email address to check whether it exists/belongs to user.
#[post("/two-factor/send-email", data = "<data>")]
fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data: SendEmailData = data.into_inner().data;
let user = headers.user;
@@ -117,16 +121,16 @@ fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -
let type_ = TwoFactorType::Email as i32;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
tf.delete(&conn)?;
if let Some(tf) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
tf.delete(&conn).await?;
}
let generated_token = crypto::generate_token(CONFIG.email_token_size())?;
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let twofactor_data = EmailTokenData::new(data.Email, generated_token);
// Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::EmailVerificationChallenge, twofactor_data.to_json());
twofactor.save(&conn)?;
twofactor.save(&conn).await?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
@@ -143,7 +147,7 @@ struct EmailData {
/// Verify email belongs to user and can be used for 2FA email codes.
#[put("/two-factor/email", data = "<data>")]
fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EmailData = data.into_inner().data;
let mut user = headers.user;
@@ -152,7 +156,8 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
}
let type_ = TwoFactorType::EmailVerificationChallenge as i32;
let mut twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).map_res("Two factor not found")?;
let mut twofactor =
TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await.map_res("Two factor not found")?;
let mut email_data = EmailTokenData::from_json(&twofactor.data)?;
@@ -168,9 +173,9 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
email_data.reset_token();
twofactor.atype = TwoFactorType::Email as i32;
twofactor.data = email_data.to_json();
twofactor.save(&conn)?;
twofactor.save(&conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &conn).await;
Ok(Json(json!({
"Email": email_data.email,
@@ -180,9 +185,10 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
}
/// Validate the email code when used as TwoFactor token mechanism
pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
let mut email_data = EmailTokenData::from_json(data)?;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
.await
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token {
Some(t) => t,
@@ -195,14 +201,14 @@ pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &
email_data.reset_token();
}
twofactor.data = email_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
err!("Token is invalid")
}
email_data.reset_token();
twofactor.data = email_data.to_json();
twofactor.save(conn)?;
twofactor.save(conn).await?;
let date = NaiveDateTime::from_timestamp(email_data.token_sent, 0);
let max_time = CONFIG.email_expiration_time() as i64;
@@ -307,18 +313,4 @@ mod tests {
// If it's smaller than 3 characters it should only show asterisks.
assert_eq!(result, "***@example.ext");
}
#[test]
fn test_token() {
let result = crypto::generate_token(19).unwrap();
assert_eq!(result.chars().count(), 19);
}
#[test]
fn test_token_too_large() {
let result = crypto::generate_token(20);
assert!(result.is_err(), "too large token should give an error");
}
}


@@ -1,20 +1,20 @@
use chrono::{Duration, Utc};
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
auth::Headers,
crypto,
db::{models::*, DbConn},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
};
pub mod authenticator;
pub mod duo;
pub mod email;
pub mod u2f;
pub mod webauthn;
pub mod yubikey;
@@ -24,7 +24,6 @@ pub fn routes() -> Vec<Route> {
routes.append(&mut authenticator::routes());
routes.append(&mut duo::routes());
routes.append(&mut email::routes());
routes.append(&mut u2f::routes());
routes.append(&mut webauthn::routes());
routes.append(&mut yubikey::routes());
@@ -32,8 +31,8 @@ pub fn routes() -> Vec<Route> {
}
#[get("/two-factor")]
fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn);
async fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn).await;
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Json(json!({
@@ -67,13 +66,13 @@ struct RecoverTwoFactor {
}
#[post("/two-factor/recover", data = "<data>")]
fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
async fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner().data;
use crate::db::models::User;
// Get the user
let mut user = match User::find_by_mail(&data.Email, &conn) {
let mut user = match User::find_by_mail(&data.Email, &conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
};
@@ -89,19 +88,19 @@ fn recover(data: JsonUpcase<RecoverTwoFactor>, conn: DbConn) -> JsonResult {
}
// Remove all twofactors from the user
TwoFactor::delete_all_by_user(&user.uuid, &conn)?;
TwoFactor::delete_all_by_user(&user.uuid, &conn).await?;
// Remove the recovery code, not needed without twofactors
user.totp_recover = None;
user.save(&conn)?;
user.save(&conn).await?;
Ok(Json(json!({})))
}
fn _generate_recover_code(user: &mut User, conn: &DbConn) {
async fn _generate_recover_code(user: &mut User, conn: &DbConn) {
if user.totp_recover.is_none() {
let totp_recover = BASE32.encode(&crypto::get_random(vec![0u8; 20]));
user.totp_recover = Some(totp_recover);
user.save(conn).ok();
user.save(conn).await.ok();
}
}
@@ -113,7 +112,7 @@ struct DisableTwoFactorData {
}
#[post("/two-factor/disable", data = "<data>")]
fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let user = headers.user;
@@ -124,23 +123,24 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
let type_ = data.Type.into_i32()?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
twofactor.delete(&conn)?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
twofactor.delete(&conn).await?;
}
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).is_empty();
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).await.is_empty();
if twofactor_disabled {
let policy_type = OrgPolicyType::TwoFactorAuthentication;
let org_list = UserOrganization::find_by_user_and_policy(&user.uuid, policy_type, &conn);
for user_org in org_list.into_iter() {
for user_org in
UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &conn)
.await
.into_iter()
{
if user_org.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).unwrap();
let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name)?;
}
user_org.delete(&conn)?;
user_org.delete(&conn).await?;
}
}
}
@@ -153,6 +153,37 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
}
#[put("/two-factor/disable", data = "<data>")]
fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn)
async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn).await
}
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
debug!("Sending notifications for incomplete 2FA logins");
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return;
}
let conn = match pool.get().await {
Ok(conn) => conn,
_ => {
error!("Failed to get DB connection in send_incomplete_2fa_notifications()");
return;
}
};
let now = Utc::now().naive_utc();
let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
let time_before = now - time_limit;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &conn).await;
for login in incomplete_logins {
let user = User::find_by_uuid(&login.user_uuid, &conn).await.expect("User not found");
info!(
"User {} did not complete a 2FA login within the configured time limit. IP: {}",
user.email, login.ip_address
);
mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
.expect("Error sending incomplete 2FA email");
login.delete(&conn).await.expect("Error deleting incomplete 2FA record");
}
}
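
`send_incomplete_2fa_notifications` takes the pool by value and handles its own connection errors, so it can be driven by a periodic job. A hedged sketch of such a driver, assuming `DbPool` is cheaply cloneable (the actual scheduling lives elsewhere in the codebase and may differ):

```rust
// Hypothetical periodic driver for the notification sweep (illustrative only).
async fn run_incomplete_2fa_job(pool: DbPool) {
    let mut interval = tokio::time::interval(std::time::Duration::from_secs(60));
    loop {
        interval.tick().await;
        send_incomplete_2fa_notifications(pool.clone()).await;
    }
}
```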

View File

@@ -1,352 +0,0 @@
use once_cell::sync::Lazy;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use u2f::{
messages::{RegisterResponse, SignResponse, U2fSignRequest},
protocol::{Challenge, U2f},
register::Registration,
};
use crate::{
api::{
core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase, NumberOrString,
PasswordData,
},
auth::Headers,
db::{
models::{TwoFactor, TwoFactorType},
DbConn,
},
error::Error,
CONFIG,
};
const U2F_VERSION: &str = "U2F_V2";
static APP_ID: Lazy<String> = Lazy::new(|| format!("{}/app-id.json", &CONFIG.domain()));
static U2F: Lazy<U2f> = Lazy::new(|| U2f::new(APP_ID.clone()));
pub fn routes() -> Vec<Route> {
routes![generate_u2f, generate_u2f_challenge, activate_u2f, activate_u2f_put, delete_u2f,]
}
#[post("/two-factor/get-u2f", data = "<data>")]
fn generate_u2f(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. U2F disabled")
}
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let (enabled, keys) = get_u2f_registrations(&headers.user.uuid, &conn)?;
let keys_json: Vec<Value> = keys.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": enabled,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
#[post("/two-factor/get-u2f-challenge", data = "<data>")]
fn generate_u2f_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let _type = TwoFactorType::U2fRegisterChallenge;
let challenge = _create_u2f_challenge(&headers.user.uuid, _type, &conn).challenge;
Ok(Json(json!({
"UserId": headers.user.uuid,
"AppId": APP_ID.to_string(),
"Challenge": challenge,
"Version": U2F_VERSION,
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableU2FData {
Id: NumberOrString,
// 1..5
Name: String,
MasterPasswordHash: String,
DeviceResponse: String,
}
// This struct is referenced from the U2F lib
// because it doesn't implement Deserialize
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
#[serde(remote = "Registration")]
struct RegistrationDef {
key_handle: Vec<u8>,
pub_key: Vec<u8>,
attestation_cert: Option<Vec<u8>>,
device_name: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct U2FRegistration {
pub id: i32,
pub name: String,
#[serde(with = "RegistrationDef")]
pub reg: Registration,
pub counter: u32,
compromised: bool,
pub migrated: Option<bool>,
}
impl U2FRegistration {
fn to_json(&self) -> Value {
json!({
"Id": self.id,
"Name": self.name,
"Compromised": self.compromised,
})
}
}
// This struct is copied from the U2F lib
// to add an optional error code
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RegisterResponseCopy {
pub registration_data: String,
pub version: String,
pub client_data: String,
pub error_code: Option<NumberOrString>,
}
impl From<RegisterResponseCopy> for RegisterResponse {
fn from(r: RegisterResponseCopy) -> RegisterResponse {
RegisterResponse {
registration_data: r.registration_data,
version: r.version,
client_data: r.client_data,
}
}
}
#[post("/two-factor/u2f", data = "<data>")]
fn activate_u2f(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableU2FData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let tf_type = TwoFactorType::U2fRegisterChallenge as i32;
let tf_challenge = match TwoFactor::find_by_user_and_type(&user.uuid, tf_type, &conn) {
Some(c) => c,
None => err!("Can't recover challenge"),
};
let challenge: Challenge = serde_json::from_str(&tf_challenge.data)?;
tf_challenge.delete(&conn)?;
let response: RegisterResponseCopy = serde_json::from_str(&data.DeviceResponse)?;
let error_code = response.error_code.clone().map_or("0".into(), NumberOrString::into_string);
if error_code != "0" {
err!("Error registering U2F token")
}
let registration = U2F.register_response(challenge, response.into())?;
let full_registration = U2FRegistration {
id: data.Id.into_i32()?,
name: data.Name,
reg: registration,
compromised: false,
counter: 0,
migrated: None,
};
let mut regs = get_u2f_registrations(&user.uuid, &conn)?.1;
// TODO: Check that there is no repeat Id
regs.push(full_registration);
save_u2f_registrations(&user.uuid, &regs, &conn)?;
_generate_recover_code(&mut user, &conn);
let keys_json: Vec<Value> = regs.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
#[put("/two-factor/u2f", data = "<data>")]
fn activate_u2f_put(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_u2f(data, headers, conn)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct DeleteU2FData {
Id: NumberOrString,
MasterPasswordHash: String,
}
#[delete("/two-factor/u2f", data = "<data>")]
fn delete_u2f(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: DeleteU2FData = data.into_inner().data;
let id = data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
let type_ = TwoFactorType::U2f as i32;
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, type_, &conn) {
Some(tf) => tf,
None => err!("U2F data not found!"),
};
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&tf.data) {
Ok(d) => d,
Err(_) => err!("Error parsing U2F data"),
};
data.retain(|r| r.id != id);
let new_data_str = serde_json::to_string(&data)?;
tf.data = new_data_str;
tf.save(&conn)?;
let keys_json: Vec<Value> = data.iter().map(U2FRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
})))
}
fn _create_u2f_challenge(user_uuid: &str, type_: TwoFactorType, conn: &DbConn) -> Challenge {
let challenge = U2F.generate_challenge().unwrap();
TwoFactor::new(user_uuid.into(), type_, serde_json::to_string(&challenge).unwrap())
.save(conn)
.expect("Error saving challenge");
challenge
}
fn save_u2f_registrations(user_uuid: &str, regs: &[U2FRegistration], conn: &DbConn) -> EmptyResult {
TwoFactor::new(user_uuid.into(), TwoFactorType::U2f, serde_json::to_string(regs)?).save(conn)
}
fn get_u2f_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<U2FRegistration>), Error> {
let type_ = TwoFactorType::U2f as i32;
let (enabled, regs) = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
Some(tf) => (tf.enabled, tf.data),
None => return Ok((false, Vec::new())), // If no data, return empty list
};
let data = match serde_json::from_str(&regs) {
Ok(d) => d,
Err(_) => {
// If error, try old format
let mut old_regs = _old_parse_registrations(&regs);
if old_regs.len() != 1 {
err!("The old U2F format only allows one device")
}
// Convert to new format
let new_regs = vec![U2FRegistration {
id: 1,
name: "Unnamed U2F key".into(),
reg: old_regs.remove(0),
compromised: false,
counter: 0,
migrated: None,
}];
// Save new format
save_u2f_registrations(user_uuid, &new_regs, conn)?;
new_regs
}
};
Ok((enabled, data))
}
fn _old_parse_registrations(registations: &str) -> Vec<Registration> {
#[derive(Deserialize)]
struct Helper(#[serde(with = "RegistrationDef")] Registration);
let regs: Vec<Value> = serde_json::from_str(registations).expect("Can't parse Registration data");
regs.into_iter().map(|r| serde_json::from_value(r).unwrap()).map(|Helper(r)| r).collect()
}
pub fn generate_u2f_login(user_uuid: &str, conn: &DbConn) -> ApiResult<U2fSignRequest> {
let challenge = _create_u2f_challenge(user_uuid, TwoFactorType::U2fLoginChallenge, conn);
let registrations: Vec<_> = get_u2f_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.reg).collect();
if registrations.is_empty() {
err!("No U2F devices registered")
}
Ok(U2F.sign_request(challenge, registrations))
}
pub fn validate_u2f_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
let challenge_type = TwoFactorType::U2fLoginChallenge as i32;
let tf_challenge = TwoFactor::find_by_user_and_type(user_uuid, challenge_type, conn);
let challenge = match tf_challenge {
Some(tf_challenge) => {
let challenge: Challenge = serde_json::from_str(&tf_challenge.data)?;
tf_challenge.delete(conn)?;
challenge
}
None => err!("Can't recover login challenge"),
};
let response: SignResponse = serde_json::from_str(response)?;
let mut registrations = get_u2f_registrations(user_uuid, conn)?.1;
if registrations.is_empty() {
err!("No U2F devices registered")
}
for reg in &mut registrations {
let response = U2F.sign_response(challenge.clone(), reg.reg.clone(), response.clone(), reg.counter);
match response {
Ok(new_counter) => {
reg.counter = new_counter;
save_u2f_registrations(user_uuid, &registrations, conn)?;
return Ok(());
}
Err(u2f::u2ferror::U2fError::CounterTooLow) => {
reg.compromised = true;
save_u2f_registrations(user_uuid, &registrations, conn)?;
err!("This device might be compromised!");
}
Err(e) => {
warn!("E {:#}", e);
// break;
}
}
}
err!("error verifying response")
}

View File

@@ -1,6 +1,7 @@
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use url::Url;
use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState, RegistrationState, Webauthn};
use crate::{
@@ -20,21 +21,42 @@ pub fn routes() -> Vec<Route> {
routes![get_webauthn, generate_webauthn_challenge, activate_webauthn, activate_webauthn_put, delete_webauthn,]
}
// Some old u2f structs still needed for migrating from u2f to WebAuthn
// Both `struct Registration` and `struct U2FRegistration` can be removed if we remove the u2f to WebAuthn migration
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Registration {
pub key_handle: Vec<u8>,
pub pub_key: Vec<u8>,
pub attestation_cert: Option<Vec<u8>>,
pub device_name: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct U2FRegistration {
pub id: i32,
pub name: String,
#[serde(with = "Registration")]
pub reg: Registration,
pub counter: u32,
compromised: bool,
pub migrated: Option<bool>,
}
struct WebauthnConfig {
url: String,
origin: Url,
rpid: String,
}
impl WebauthnConfig {
fn load() -> Webauthn<Self> {
let domain = CONFIG.domain();
let domain_origin = CONFIG.domain_origin();
Webauthn::new(Self {
rpid: reqwest::Url::parse(&domain)
.map(|u| u.domain().map(str::to_owned))
.ok()
.flatten()
.unwrap_or_default(),
rpid: Url::parse(&domain).map(|u| u.domain().map(str::to_owned)).ok().flatten().unwrap_or_default(),
url: domain,
origin: Url::parse(&domain_origin).unwrap(),
})
}
}
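
The relying-party ID above is just the registered domain pulled out of the configured `domain` URL. A small standalone illustration with an example hostname (not taken from this diff):

```rust
fn main() {
    // Url::domain() returns the host name without the port; the fallbacks mirror the
    // unwrap_or_default() above, yielding an empty rpid for IP hosts or unparsable URLs.
    let rpid = url::Url::parse("https://vault.example.com:8443/some/path")
        .map(|u| u.domain().map(str::to_owned))
        .ok()
        .flatten()
        .unwrap_or_default();
    assert_eq!(rpid, "vault.example.com");
}
```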
@@ -44,8 +66,8 @@ impl webauthn_rs::WebauthnConfig for WebauthnConfig {
&self.url
}
fn get_origin(&self) -> &str {
&self.url
fn get_origin(&self) -> &Url {
&self.origin
}
fn get_relying_party_id(&self) -> &str {
@@ -80,7 +102,7 @@ impl WebauthnRegistration {
}
#[post("/two-factor/get-webauthn", data = "<data>")]
fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
@@ -89,7 +111,7 @@ fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn)
err!("Invalid password");
}
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &conn)?;
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &conn).await?;
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -100,12 +122,13 @@ fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn)
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let registrations = get_webauthn_registrations(&headers.user.uuid, &conn)?
let registrations = get_webauthn_registrations(&headers.user.uuid, &conn)
.await?
.1
.into_iter()
.map(|r| r.credential.cred_id) // We return the credentialIds to the clients to avoid double registering
@@ -121,7 +144,7 @@ fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers,
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&conn)?;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&conn).await?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
@@ -218,7 +241,7 @@ impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
}
#[post("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
@@ -228,10 +251,10 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
// Retrieve and delete the saved challenge state
let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
let state = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn).await {
Some(tf) => {
let state: RegistrationState = serde_json::from_str(&tf.data)?;
tf.delete(&conn)?;
tf.delete(&conn).await?;
state
}
None => err!("Can't recover challenge"),
@@ -241,7 +264,7 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
let (credential, _data) =
WebauthnConfig::load().register_credential(&data.DeviceResponse.into(), &state, |_| Ok(false))?;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &conn)?.1;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &conn).await?.1;
// TODO: Check for repeated ID's
registrations.push(WebauthnRegistration {
id: data.Id.into_i32()?,
@@ -252,8 +275,10 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
});
// Save the registrations and return them
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?).save(&conn)?;
_generate_recover_code(&mut user, &conn);
TwoFactor::new(user.uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(&conn)
.await?;
_generate_recover_code(&mut user, &conn).await;
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -264,8 +289,8 @@ fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, con
}
#[put("/two-factor/webauthn", data = "<data>")]
fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn)
async fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn).await
}
#[derive(Deserialize, Debug)]
@@ -276,13 +301,14 @@ struct DeleteU2FData {
}
#[delete("/two-factor/webauthn", data = "<data>")]
fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbConn) -> JsonResult {
let id = data.data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn) {
let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn).await
{
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
@@ -296,12 +322,12 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
let removed_item = data.remove(item_pos);
tf.data = serde_json::to_string(&data)?;
tf.save(&conn)?;
tf.save(&conn).await?;
drop(tf);
// If entry is migrated from u2f, delete the u2f entry as well
if let Some(mut u2f) = TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &conn) {
use crate::api::core::two_factor::u2f::U2FRegistration;
if let Some(mut u2f) = TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::U2f as i32, &conn).await
{
let mut data: Vec<U2FRegistration> = match serde_json::from_str(&u2f.data) {
Ok(d) => d,
Err(_) => err!("Error parsing U2F data"),
@@ -311,7 +337,7 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
let new_data_str = serde_json::to_string(&data)?;
u2f.data = new_data_str;
u2f.save(&conn)?;
u2f.save(&conn).await?;
}
let keys_json: Vec<Value> = data.iter().map(WebauthnRegistration::to_json).collect();
@@ -323,18 +349,21 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
})))
}
pub fn get_webauthn_registrations(user_uuid: &str, conn: &DbConn) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
pub async fn get_webauthn_registrations(
user_uuid: &str,
conn: &DbConn,
) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
let type_ = TwoFactorType::Webauthn as i32;
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
Some(tf) => Ok((tf.enabled, serde_json::from_str(&tf.data)?)),
None => Ok((false, Vec::new())), // If no data, return empty list
}
}
pub fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
pub async fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
// Load saved credentials
let creds: Vec<Credential> =
get_webauthn_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.credential).collect();
get_webauthn_registrations(user_uuid, conn).await?.1.into_iter().map(|r| r.credential).collect();
if creds.is_empty() {
err!("No Webauthn devices registered")
@@ -346,18 +375,19 @@ pub fn generate_webauthn_login(user_uuid: &str, conn: &DbConn) -> JsonResult {
// Save the challenge state for later validation
TwoFactor::new(user_uuid.into(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
.save(conn)?;
.save(conn)
.await?;
// Return challenge to the clients
Ok(Json(serde_json::to_value(response.public_key)?))
}
pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -> EmptyResult {
let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn) {
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
Some(tf) => {
let state: AuthenticationState = serde_json::from_str(&tf.data)?;
tf.delete(conn)?;
tf.delete(conn).await?;
state
}
None => err!("Can't recover login challenge"),
@@ -366,7 +396,7 @@ pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -
let rsp: crate::util::UpCase<PublicKeyCredentialCopy> = serde_json::from_str(response)?;
let rsp: PublicKeyCredential = rsp.data.into();
let mut registrations = get_webauthn_registrations(user_uuid, conn)?.1;
let mut registrations = get_webauthn_registrations(user_uuid, conn).await?.1;
// If the credential we received is migrated from U2F, enable the U2F compatibility
//let use_u2f = registrations.iter().any(|r| r.migrated && r.credential.cred_id == rsp.raw_id.0);
@@ -377,7 +407,8 @@ pub fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &DbConn) -
reg.credential.counter = auth_data.counter;
TwoFactor::new(user_uuid.to_string(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(conn)?;
.save(conn)
.await?;
return Ok(());
}
}

View File

@@ -1,5 +1,5 @@
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use yubico::{config::Config, verify};
@@ -78,7 +78,7 @@ fn verify_yubikey_otp(otp: String) -> EmptyResult {
}
#[post("/two-factor/get-yubikey", data = "<data>")]
fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
// Make sure the credentials are set
get_yubico_credentials()?;
@@ -92,7 +92,7 @@ fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbCo
let user_uuid = &user.uuid;
let yubikey_type = TwoFactorType::YubiKey as i32;
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &conn);
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &conn).await;
if let Some(r) = r {
let yubikey_metadata: YubikeyMetadata = serde_json::from_str(&r.data)?;
@@ -113,7 +113,7 @@ fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbCo
}
#[post("/two-factor/yubikey", data = "<data>")]
fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: EnableYubikeyData = data.into_inner().data;
let mut user = headers.user;
@@ -122,10 +122,11 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
}
// Check if we already have some data
let mut yubikey_data = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &conn) {
Some(data) => data,
None => TwoFactor::new(user.uuid.clone(), TwoFactorType::YubiKey, String::new()),
};
let mut yubikey_data =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::YubiKey as i32, &conn).await {
Some(data) => data,
None => TwoFactor::new(user.uuid.clone(), TwoFactorType::YubiKey, String::new()),
};
let yubikeys = parse_yubikeys(&data);
@@ -154,9 +155,9 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
};
yubikey_data.data = serde_json::to_string(&yubikey_metadata).unwrap();
yubikey_data.save(&conn)?;
yubikey_data.save(&conn).await?;
_generate_recover_code(&mut user, &conn);
_generate_recover_code(&mut user, &conn).await;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
@@ -168,8 +169,8 @@ fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn:
}
#[put("/two-factor/yubikey", data = "<data>")]
fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn)
async fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn).await
}
pub fn validate_yubikey_login(response: &str, twofactor_data: &str) -> EmptyResult {

View File

@@ -1,16 +1,27 @@
use std::{
collections::HashMap,
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::prelude::*,
net::{IpAddr, ToSocketAddrs},
sync::{Arc, RwLock},
net::IpAddr,
sync::Arc,
time::{Duration, SystemTime},
};
use bytes::{Bytes, BytesMut};
use futures::{stream::StreamExt, TryFutureExt};
use once_cell::sync::Lazy;
use regex::Regex;
use reqwest::{blocking::Client, blocking::Response, header};
use rocket::{http::ContentType, response::Content, Route};
use reqwest::{
header::{self, HeaderMap, HeaderValue},
Client, Response,
};
use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
net::lookup_host,
sync::RwLock,
};
use html5gum::{Emitter, EndTag, InfallibleTokenizer, Readable, StartTag, StringReader, Tokenizer};
use crate::{
error::Error,
@@ -19,54 +30,115 @@ use crate::{
};
pub fn routes() -> Vec<Route> {
routes![icon]
match CONFIG.icon_service().as_str() {
"internal" => routes![icon_internal],
"bitwarden" => routes![icon_bitwarden],
"duckduckgo" => routes![icon_duckduckgo],
"google" => routes![icon_google],
_ => routes![icon_custom],
}
}
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = header::HeaderMap::new();
default_headers
.insert(header::USER_AGENT, header::HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers
.insert(header::ACCEPT, header::HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
let mut default_headers = HeaderMap::new();
default_headers.insert(header::USER_AGENT, HeaderValue::from_static("Links (2.22; Linux X86_64; GNU C; text)"));
default_headers.insert(header::ACCEPT, HeaderValue::from_static("text/html, text/*;q=0.5, image/*, */*;q=0.1"));
default_headers.insert(header::ACCEPT_LANGUAGE, HeaderValue::from_static("en,*;q=0.1"));
default_headers.insert(header::CACHE_CONTROL, HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, HeaderValue::from_static("no-cache"));
// Generate the cookie store
let cookie_store = Arc::new(Jar::default());
// Reuse the client between requests
get_reqwest_client_builder()
.cookie_provider(Arc::new(Jar::default()))
let client = get_reqwest_client_builder()
.cookie_provider(cookie_store.clone())
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.build()
.expect("Failed to build icon client")
.default_headers(default_headers.clone());
match client.build() {
Ok(client) => client,
Err(e) => {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder()
.cookie_provider(cookie_store)
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers)
.trust_dns(false)
.build()
.expect("Failed to build client")
}
}
});
// Build Regex only once since this takes a lot of time.
static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap());
static ICON_REL_BLACKLIST: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)mask-icon").unwrap());
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user-defined Regex to speed up matching.
static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new()));
async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
if !is_valid_domain(domain).await {
warn!("Invalid domain: {}", domain);
return None;
}
if is_domain_blacklisted(domain).await {
return None;
}
let url = template.replace("{}", domain);
match CONFIG.icon_redirect_code() {
301 => Some(Redirect::moved(url)), // legacy permanent redirect
302 => Some(Redirect::found(url)), // legacy temporary redirect
307 => Some(Redirect::temporary(url)),
308 => Some(Redirect::permanent(url)),
_ => {
error!("Unexpected redirect code {}", CONFIG.icon_redirect_code());
None
}
}
}
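
Any `icon_service` value other than the named services is treated as a URL template, with `{}` replaced by the requested domain before redirecting with the configured `icon_redirect_code`. A small illustration using a hypothetical external icon host:

```rust
fn main() {
    // Hypothetical template configured as the icon_service value (not a real service).
    let template = "https://icons.example.org/{}/favicon.png";
    let url = template.replace("{}", "example.com");
    assert_eq!(url, "https://icons.example.org/example.com/favicon.png");
}
```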
#[get("/<domain>/icon.png")]
fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
async fn icon_custom(domain: String) -> Option<Redirect> {
icon_redirect(&domain, &CONFIG.icon_service()).await
}
#[get("/<domain>/icon.png")]
async fn icon_bitwarden(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.bitwarden.net/{}/icon.png").await
}
#[get("/<domain>/icon.png")]
async fn icon_duckduckgo(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://icons.duckduckgo.com/ip3/{}.ico").await
}
#[get("/<domain>/icon.png")]
async fn icon_google(domain: String) -> Option<Redirect> {
icon_redirect(&domain, "https://www.google.com/s2/favicons?domain={}&sz=32").await
}
#[get("/<domain>/icon.png")]
async fn icon_internal(domain: String) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
if !is_valid_domain(&domain) {
if !is_valid_domain(&domain).await {
warn!("Invalid domain: {}", domain);
return Cached::ttl(
Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
);
}
match get_icon(&domain) {
match get_icon(&domain).await {
Some((icon, icon_type)) => {
Cached::ttl(Content(ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl())
Cached::ttl((ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
}
_ => Cached::ttl(Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()), CONFIG.icon_cache_negttl()),
_ => Cached::ttl((ContentType::new("image", "png"), FALLBACK_ICON.to_vec()), CONFIG.icon_cache_negttl(), true),
}
}
@@ -74,7 +146,7 @@ fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
///
/// This does some manual checks and makes use of Url to do some basic checking.
/// Domains can't be larger than 63 characters (not counting multiple subdomains) according to the RFCs, but we limit the total size to 255.
fn is_valid_domain(domain: &str) -> bool {
async fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "_-.";
// If parsing the domain fails using Url, it will not work with reqwest.
@@ -206,69 +278,64 @@ mod tests {
}
}
fn is_domain_blacklisted(domain: &str) -> bool {
let mut is_blacklisted = CONFIG.icon_blacklist_non_global_ips()
&& (domain, 0)
.to_socket_addrs()
.map(|x| {
for ip_port in x {
if !is_global(ip_port.ip()) {
warn!("IP {} for domain '{}' is not a global IP!", ip_port.ip(), domain);
return true;
}
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn is_domain_blacklisted(domain: &str) -> bool {
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return true;
}
false
})
.unwrap_or(false);
// Skip the regex check if the previous one is true already
if !is_blacklisted {
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
let mut regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
// Use the pre-generated Regex stored in a Lazy HashMap if there's one, else generate it.
let regex = if let Some(regex) = regex_hashmap.get(&blacklist) {
regex
} else {
drop(regex_hashmap);
let mut regex_hashmap_write = ICON_BLACKLIST_REGEX.write().unwrap();
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone has changed it via the admin interface.
if regex_hashmap_write.len() >= 1 {
regex_hashmap_write.clear();
}
// Generate the regex to store into the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
regex_hashmap_write.insert(blacklist.to_string(), blacklist_regex);
drop(regex_hashmap_write);
regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
regex_hashmap.get(&blacklist).unwrap()
};
// Use the pre-generated Regex stored in a Lazy HashMap.
if regex.is_match(domain) {
warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
is_blacklisted = true;
}
}
}
is_blacklisted
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
let mut regex_hashmap = ICON_BLACKLIST_REGEX.read().await;
// Use the pre-generated Regex stored in a Lazy HashMap if there's one, else generate it.
let regex = if let Some(regex) = regex_hashmap.get(&blacklist) {
regex
} else {
drop(regex_hashmap);
let mut regex_hashmap_write = ICON_BLACKLIST_REGEX.write().await;
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone has changed it via the admin interface.
if regex_hashmap_write.len() >= 1 {
regex_hashmap_write.clear();
}
// Generate the regex to store into the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist);
regex_hashmap_write.insert(blacklist.to_string(), blacklist_regex.unwrap());
drop(regex_hashmap_write);
regex_hashmap = ICON_BLACKLIST_REGEX.read().await;
regex_hashmap.get(&blacklist).unwrap()
};
// Use the pre-generated Regex stored in a Lazy HashMap.
if regex.is_match(domain) {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
return true;
}
}
false
}
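
The `#[cached]` attribute above memoizes the blacklist decision per domain for 60 seconds (at most 16 entries), so repeated icon requests do not redo the DNS lookup or regex match. A minimal sketch of the same attribute on a toy async function, assuming the `cached` crate with async support enabled:

```rust
use cached::proc_macro::cached;

// Results are memoized by the String key for `time` seconds, with at most `size` entries kept.
#[cached(key = "String", convert = r#"{ name.to_string() }"#, size = 16, time = 60)]
async fn is_flagged(name: &str) -> bool {
    // Stand-in for an expensive check (DNS lookup, regex match, ...).
    name.ends_with(".invalid")
}

#[tokio::main]
async fn main() {
    assert!(is_flagged("host.invalid").await);
    assert!(is_flagged("host.invalid").await); // served from the cache this time
}
```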
fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);
// Check for expiration of negatively cached copy
if icon_is_negcached(&path) {
if icon_is_negcached(&path).await {
return None;
}
if let Some(icon) = get_cached_icon(&path) {
let icon_type = match get_icon_type(&icon) {
if let Some(icon) = get_cached_icon(&path).await {
let icon_type = match get_icon_type(&icon).await {
Some(x) => x,
_ => "x-icon",
};
@@ -280,31 +347,31 @@ fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
}
// Get the icon, or None in case of error
match download_icon(domain) {
match download_icon(domain).await {
Ok((icon, icon_type)) => {
save_icon(&path, &icon);
Some((icon, icon_type.unwrap_or("x-icon").to_string()))
save_icon(&path, &icon).await;
Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
}
Err(e) => {
error!("Error downloading icon: {:?}", e);
warn!("Unable to download icon: {:?}", e);
let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]);
save_icon(&miss_indicator, &[]).await;
None
}
}
}
fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
async fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
// Check for expiration of successfully cached copy
if icon_is_expired(path) {
if icon_is_expired(path).await {
return None;
}
// Try to read the cached icon, and return it if it exists
if let Ok(mut f) = File::open(path) {
if let Ok(mut f) = File::open(path).await {
let mut buffer = Vec::new();
if f.read_to_end(&mut buffer).is_ok() {
if f.read_to_end(&mut buffer).await.is_ok() {
return Some(buffer);
}
}
@@ -312,22 +379,22 @@ fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
None
}
fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
let meta = symlink_metadata(path)?;
async fn file_is_expired(path: &str, ttl: u64) -> Result<bool, Error> {
let meta = symlink_metadata(path).await?;
let modified = meta.modified()?;
let age = SystemTime::now().duration_since(modified)?;
Ok(ttl > 0 && ttl <= age.as_secs())
}
fn icon_is_negcached(path: &str) -> bool {
async fn icon_is_negcached(path: &str) -> bool {
let miss_indicator = path.to_owned() + ".miss";
let expired = file_is_expired(&miss_indicator, CONFIG.icon_cache_negttl());
let expired = file_is_expired(&miss_indicator, CONFIG.icon_cache_negttl()).await;
match expired {
// No longer negatively cached, drop the marker
Ok(true) => {
if let Err(e) = remove_file(&miss_indicator) {
if let Err(e) = remove_file(&miss_indicator).await {
error!("Could not remove negative cache indicator for icon {:?}: {:?}", path, e);
}
false
@@ -339,8 +406,8 @@ fn icon_is_negcached(path: &str) -> bool {
}
}
fn icon_is_expired(path: &str) -> bool {
let expired = file_is_expired(path, CONFIG.icon_cache_ttl());
async fn icon_is_expired(path: &str) -> bool {
let expired = file_is_expired(path, CONFIG.icon_cache_ttl()).await;
expired.unwrap_or(true)
}
@@ -358,91 +425,62 @@ impl Icon {
}
}
/// Iterates over the HTML document to find <base href="http://domain.tld">
/// When found, it stops the iteration and the found base href is shared back via `base_href`.
///
/// # Arguments
/// * `node` - A Parsed HTML document via html5ever::parse_document()
/// * `base_href` - a mutable url::Url which will be overwritten when a base href tag has been found.
///
fn get_base_href(node: &std::rc::Rc<markup5ever_rcdom::Node>, base_href: &mut url::Url) -> bool {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "base" {
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
async fn get_favicons_node(
dom: InfallibleTokenizer<StringReader<'_>, FaviconEmitter>,
icons: &mut Vec<Icon>,
url: &url::Url,
) {
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
const ATTR_REL: &[u8] = b"rel";
const ATTR_HREF: &[u8] = b"href";
const ATTR_SIZES: &[u8] = b"sizes";
if attr_name == "href" {
debug!("Found base href: {}", attr_value);
*base_href = match base_href.join(attr_value) {
Ok(href) => href,
_ => base_href.clone(),
};
return true;
}
}
return true;
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
// Check if we got a true back and stop the iter.
// This means we found a <base> tag and can stop processing the html.
if get_base_href(child, base_href) {
return true;
}
}
false
}
fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Vec<Icon>, url: &url::Url) {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "link" {
let mut has_rel = false;
let mut href = None;
let mut sizes = None;
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
if attr_name == "rel" && ICON_REL_REGEX.is_match(attr_value) && !ICON_REL_BLACKLIST.is_match(attr_value)
let mut base_url = url.clone();
let mut icon_tags: Vec<StartTag> = Vec::new();
for token in dom {
match token {
FaviconToken::StartTag(tag) => {
if tag.name == TAG_LINK
&& tag.attributes.contains_key(ATTR_REL)
&& tag.attributes.contains_key(ATTR_HREF)
{
has_rel = true;
} else if attr_name == "href" {
href = Some(attr_value);
} else if attr_name == "sizes" {
sizes = Some(attr_value);
let rel_value = std::str::from_utf8(tag.attributes.get(ATTR_REL).unwrap())
.unwrap_or_default()
.to_ascii_lowercase();
if rel_value.contains("icon") && !rel_value.contains("mask-icon") {
icon_tags.push(tag);
}
} else if tag.name == TAG_BASE && tag.attributes.contains_key(ATTR_HREF) {
let href = std::str::from_utf8(tag.attributes.get(ATTR_HREF).unwrap()).unwrap_or_default();
debug!("Found base href: {href}");
base_url = match base_url.join(href) {
Ok(inner_url) => inner_url,
_ => url.clone(),
};
}
}
if has_rel {
if let Some(inner_href) = href {
if let Ok(full_href) = url.join(inner_href).map(String::from) {
let priority = get_icon_priority(&full_href, sizes);
icons.push(Icon::new(priority, full_href));
}
FaviconToken::EndTag(tag) => {
if tag.name == TAG_HEAD {
break;
}
}
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
get_favicons_node(child, icons, url);
for icon_tag in icon_tags {
if let Some(icon_href) = icon_tag.attributes.get(ATTR_HREF) {
if let Ok(full_href) = base_url.join(std::str::from_utf8(icon_href).unwrap_or_default()) {
let sizes = if let Some(v) = icon_tag.attributes.get(ATTR_SIZES) {
std::str::from_utf8(v).unwrap_or_default()
} else {
""
};
let priority = get_icon_priority(full_href.as_str(), sizes).await;
icons.push(Icon::new(priority, full_href.to_string()));
}
};
}
}
@@ -460,16 +498,16 @@ struct IconUrlResult {
///
/// # Example
/// ```
/// let icon_result = get_icon_url("github.com")?;
/// let icon_result = get_icon_url("vaultwarden.discourse.group")?;
/// let icon_result = get_icon_url("github.com").await?;
/// let icon_result = get_icon_url("vaultwarden.discourse.group").await?;
/// ```
fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Default URL with secure and insecure schemes
let ssldomain = format!("https://{}", domain);
let httpdomain = format!("http://{}", domain);
let ssldomain = format!("https://{domain}");
let httpdomain = format!("http://{domain}");
// First check the domain as given during the request for both HTTPS and HTTP.
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)) {
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)).await {
Ok(c) => Ok(c),
Err(e) => {
let mut sub_resp = Err(e);
@@ -483,26 +521,25 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{}", base_domain);
let httpbase = format!("http://{}", base_domain);
debug!("[get_icon_url]: Trying without subdomains '{}'", base_domain);
if is_valid_domain(&base_domain).await {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase));
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
// When the domain is not an IP and has fewer than 2 dots, try adding www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{}", domain);
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{}", www_domain);
let httpwww = format!("http://{}", www_domain);
debug!("[get_icon_url]: Trying with www. prefix '{}'", www_domain);
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain).await {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww));
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
}
}
sub_resp
}
};
@@ -517,26 +554,23 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Set the referer to be used on the final request; some sites check this.
// Mostly used to prevent direct linking and for other security reasons.
referer = url.as_str().to_string();
referer = url.to_string();
// Add the default favicon.ico to the list with the domain the content responded from.
// Add the fallback favicon.ico and apple-touch-icon.png to the list with the domain the content responded from.
iconlist.push(Icon::new(35, String::from(url.join("/favicon.ico").unwrap())));
iconlist.push(Icon::new(40, String::from(url.join("/apple-touch-icon.png").unwrap())));
// 384KB should be more than enough for the HTML, as we only really need the HTML head section.
let mut limited_reader = content.take(384 * 1024);
let limited_reader = stream_to_bytes_limit(content, 384 * 1024).await?.to_vec();
use html5ever::tendril::TendrilSink;
let dom = html5ever::parse_document(markup5ever_rcdom::RcDom::default(), Default::default())
.from_utf8()
.read_from(&mut limited_reader)?;
let mut base_url: url::Url = url;
get_base_href(&dom.document, &mut base_url);
get_favicons_node(&dom.document, &mut iconlist, &base_url);
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default()).infallible();
get_favicons_node(dom, &mut iconlist, &url).await;
} else {
// Add the default favicon.ico to the list with just the given domain
iconlist.push(Icon::new(35, format!("{}/favicon.ico", ssldomain)));
iconlist.push(Icon::new(35, format!("{}/favicon.ico", httpdomain)));
iconlist.push(Icon::new(35, format!("{ssldomain}/favicon.ico")));
iconlist.push(Icon::new(40, format!("{ssldomain}/apple-touch-icon.png")));
iconlist.push(Icon::new(35, format!("{httpdomain}/favicon.ico")));
iconlist.push(Icon::new(40, format!("{httpdomain}/apple-touch-icon.png")));
}
// Sort the iconlist by priority
@@ -549,13 +583,13 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
})
}
fn get_page(url: &str) -> Result<Response, Error> {
get_page_with_referer(url, "")
async fn get_page(url: &str) -> Result<Response, Error> {
get_page_with_referer(url, "").await
}
fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()) {
err!("Favicon rel linked to a blacklisted domain!");
async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
warn!("Favicon '{}' resolves to a blacklisted domain or IP!", url);
}
let mut client = CLIENT.get(url);
@@ -563,7 +597,10 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
client = client.header("Referer", referer)
}
client.send()?.error_for_status().map_err(Into::into)
match client.send().await {
Ok(c) => c.error_for_status().map_err(Into::into),
Err(e) => err_silent!(format!("{}", e)),
}
}
/// Returns an integer with the priority of the icon type to prefer.
@@ -575,12 +612,12 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
///
/// # Example
/// ```
/// priority1 = get_icon_priority("http://example.com/path/to/a/favicon.png", "32x32");
/// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", "");
/// priority1 = get_icon_priority("http://example.com/path/to/a/favicon.png", "32x32").await;
/// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", "").await;
/// ```
fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
async fn get_icon_priority(href: &str, sizes: &str) -> u8 {
// Check if there is a dimension set
let (width, height) = parse_sizes(sizes);
let (width, height) = parse_sizes(sizes).await;
// Check if there is a size given
if width != 0 && height != 0 {
@@ -622,15 +659,15 @@ fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
///
/// # Example
/// ```
/// let (width, height) = parse_sizes("64x64"); // (64, 64)
/// let (width, height) = parse_sizes("x128x128"); // (128, 128)
/// let (width, height) = parse_sizes("32"); // (0, 0)
/// let (width, height) = parse_sizes("64x64").await; // (64, 64)
/// let (width, height) = parse_sizes("x128x128").await; // (128, 128)
/// let (width, height) = parse_sizes("32").await; // (0, 0)
/// ```
fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
async fn parse_sizes(sizes: &str) -> (u16, u16) {
let mut width: u16 = 0;
let mut height: u16 = 0;
if let Some(sizes) = sizes {
if !sizes.is_empty() {
match ICON_SIZE_REGEX.captures(sizes.trim()) {
None => {}
Some(dimensions) => {
@@ -645,14 +682,14 @@ fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
(width, height)
}
fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if is_domain_blacklisted(domain) {
err!("Domain is blacklisted", domain)
async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
if is_domain_blacklisted(domain).await {
err_silent!("Domain is blacklisted", domain)
}
let icon_result = get_icon_url(domain)?;
let icon_result = get_icon_url(domain).await?;
let mut buffer = Vec::new();
let mut buffer = Bytes::new();
let mut icon_type: Option<&str> = None;
use data_url::DataUrl;
@@ -661,29 +698,34 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if icon.href.starts_with("data:image") {
let datauri = DataUrl::process(&icon.href).unwrap();
// Check if we are able to decode the data uri
match datauri.decode_to_vec() {
Ok((body, _fragment)) => {
let mut body = BytesMut::new();
match datauri.decode::<_, ()>(|bytes| {
body.extend_from_slice(bytes);
Ok(())
}) {
Ok(_) => {
// Also check if the size is at least 67 bytes, which seems to be the smallest PNG I could create
if body.len() >= 67 {
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&body);
icon_type = get_icon_type(&body).await;
if icon_type.is_none() {
debug!("Icon from {} data:image uri, is not a valid image type", domain);
continue;
}
info!("Extracted icon from data:image uri for {}", domain);
buffer = body;
buffer = body.freeze();
break;
}
}
_ => warn!("Extracted icon from data:image uri is invalid"),
_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
match get_page_with_referer(&icon.href, &icon_result.referer) {
Ok(mut res) => {
res.copy_to(&mut buffer)?;
match get_page_with_referer(&icon.href, &icon_result.referer).await {
Ok(res) => {
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
icon_type = get_icon_type(&buffer).await;
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
@@ -692,33 +734,33 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
info!("Downloaded icon from {}", icon.href);
break;
}
_ => warn!("Download failed for {}", icon.href),
Err(e) => debug!("{:?}", e),
};
}
}
if buffer.is_empty() {
err!("Empty response downloading icon")
err_silent!("Empty response or unable find a valid icon", domain);
}
Ok((buffer, icon_type))
}
fn save_icon(path: &str, icon: &[u8]) {
match File::create(path) {
async fn save_icon(path: &str, icon: &[u8]) {
match File::create(path).await {
Ok(mut f) => {
f.write_all(icon).expect("Error writing icon file");
f.write_all(icon).await.expect("Error writing icon file");
}
Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => {
create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache");
create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder");
}
Err(e) => {
warn!("Icon save error: {:?}", e);
warn!("Unable to save icon: {:?}", e);
}
}
}
fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
async fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
match bytes {
[137, 80, 78, 71, ..] => Some("png"),
[0, 0, 1, 0, ..] => Some("x-icon"),
@@ -730,13 +772,30 @@ fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
}
}
/// Minimize the number of bytes to be parsed from a reqwest result.
/// This prevents excessive parsing time and memory usage.
async fn stream_to_bytes_limit(res: Response, max_size: usize) -> Result<Bytes, reqwest::Error> {
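// Note: `take(max_size)` caps the number of polled stream chunks; the running
// `size` counter below is what actually enforces the byte limit on the buffer.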
let mut stream = res.bytes_stream().take(max_size);
let mut buf = BytesMut::new();
let mut size = 0;
while let Some(chunk) = stream.next().await {
let chunk = &chunk?;
size += chunk.len();
buf.extend(chunk);
if size >= max_size {
break;
}
}
Ok(buf.freeze())
}
/// This is an implementation of the default Cookie Jar from reqwest and reqwest_cookie_store built by pfernie.
/// The default cookie jar used by reqwest keeps all the cookies based upon the Max-Age or Expires header, which could be a long time.
/// That could be used for tracking; to prevent this we force the lifespan of the cookies to a maximum of two minutes.
/// A Cookie Jar is needed because some sites force a redirect with cookies to verify if a request uses cookies or not.
use cookie_store::CookieStore;
#[derive(Default)]
pub struct Jar(RwLock<CookieStore>);
pub struct Jar(std::sync::RwLock<CookieStore>);
impl reqwest::cookie::CookieStore for Jar {
fn set_cookies(&self, cookie_headers: &mut dyn Iterator<Item = &header::HeaderValue>, url: &url::Url) {
@@ -759,8 +818,6 @@ impl reqwest::cookie::CookieStore for Jar {
}
fn cookies(&self, url: &url::Url) -> Option<header::HeaderValue> {
use bytes::Bytes;
let cookie_store = self.0.read().unwrap();
let s = cookie_store
.get_request_values(url)
@@ -775,3 +832,137 @@ impl reqwest::cookie::CookieStore for Jar {
header::HeaderValue::from_maybe_shared(Bytes::from(s)).ok()
}
}
/// Custom FaviconEmitter for the html5gum parser.
/// The FaviconEmitter is an almost 1:1 copy of the DefaultEmitter with some small changes.
/// This prevents emitting tags like comments, doctypes, and strings between the tags.
/// Therefore, parsing the HTML content is faster.
use std::collections::{BTreeSet, VecDeque};
enum FaviconToken {
StartTag(StartTag),
EndTag(EndTag),
}
#[derive(Default)]
struct FaviconEmitter {
current_token: Option<FaviconToken>,
last_start_tag: Vec<u8>,
current_attribute: Option<(Vec<u8>, Vec<u8>)>,
seen_attributes: BTreeSet<Vec<u8>>,
emitted_tokens: VecDeque<FaviconToken>,
}
impl FaviconEmitter {
fn emit_token(&mut self, token: FaviconToken) {
self.emitted_tokens.push_front(token);
}
fn flush_current_attribute(&mut self) {
if let Some((k, v)) = self.current_attribute.take() {
match self.current_token {
Some(FaviconToken::StartTag(ref mut tag)) => {
tag.attributes.entry(k).and_modify(|_| {}).or_insert(v);
}
Some(FaviconToken::EndTag(_)) => {
self.seen_attributes.insert(k);
}
_ => {
debug_assert!(false);
}
}
}
}
}
impl Emitter for FaviconEmitter {
type Token = FaviconToken;
fn set_last_start_tag(&mut self, last_start_tag: Option<&[u8]>) {
self.last_start_tag.clear();
self.last_start_tag.extend(last_start_tag.unwrap_or_default());
}
fn pop_token(&mut self) -> Option<Self::Token> {
self.emitted_tokens.pop_back()
}
fn init_start_tag(&mut self) {
self.current_token = Some(FaviconToken::StartTag(StartTag::default()));
}
fn init_end_tag(&mut self) {
self.current_token = Some(FaviconToken::EndTag(EndTag::default()));
self.seen_attributes.clear();
}
fn emit_current_tag(&mut self) {
self.flush_current_attribute();
let mut token = self.current_token.take().unwrap();
match token {
FaviconToken::EndTag(_) => {
self.seen_attributes.clear();
}
FaviconToken::StartTag(ref mut tag) => {
self.set_last_start_tag(Some(&tag.name));
}
}
self.emit_token(token);
}
fn push_tag_name(&mut self, s: &[u8]) {
match self.current_token {
Some(
FaviconToken::StartTag(StartTag {
ref mut name,
..
})
| FaviconToken::EndTag(EndTag {
ref mut name,
..
}),
) => {
name.extend(s);
}
_ => debug_assert!(false),
}
}
fn init_attribute(&mut self) {
self.flush_current_attribute();
self.current_attribute = Some((Vec::new(), Vec::new()));
}
fn push_attribute_name(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().0.extend(s);
}
fn push_attribute_value(&mut self, s: &[u8]) {
self.current_attribute.as_mut().unwrap().1.extend(s);
}
fn current_is_appropriate_end_tag_token(&mut self) -> bool {
match self.current_token {
Some(FaviconToken::EndTag(ref tag)) => !self.last_start_tag.is_empty() && self.last_start_tag == tag.name,
_ => false,
}
}
// We do not want or need these parts of the HTML document
// These will be skipped and ignored during the tokenization and iteration.
fn emit_current_comment(&mut self) {}
fn emit_current_doctype(&mut self) {}
fn emit_eof(&mut self) {}
fn emit_error(&mut self, _: html5gum::Error) {}
fn emit_string(&mut self, _: &[u8]) {}
fn init_comment(&mut self) {}
fn init_doctype(&mut self) {}
fn push_comment(&mut self, _: &[u8]) {}
fn push_doctype_name(&mut self, _: &[u8]) {}
fn push_doctype_public_identifier(&mut self, _: &[u8]) {}
fn push_doctype_system_identifier(&mut self, _: &[u8]) {}
fn set_doctype_public_identifier(&mut self, _: &[u8]) {}
fn set_doctype_system_identifier(&mut self, _: &[u8]) {}
fn set_force_quirks(&mut self) {}
fn set_self_closing(&mut self) {}
}
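A rough sketch of how this emitter would be driven (assuming html5gum's Tokenizer::new_with_emitter and a &str input; the tag handling is illustrative):
use html5gum::Tokenizer;

// Only start and end tags ever reach this loop; comments, doctypes and text are
// dropped by the no-op methods above, which keeps the favicon scan cheap.
let html = r#"<html><head><link rel="icon" href="/favicon.ico"></head></html>"#;
for token in Tokenizer::new_with_emitter(html, FaviconEmitter::default()).flatten() {
    match token {
        FaviconToken::StartTag(tag) => { /* inspect tag.name / tag.attributes for rel="icon" */ }
        FaviconToken::EndTag(_) => { /* e.g. stop once </head> has been seen */ }
    }
}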

View File

@@ -1,16 +1,17 @@
use chrono::Local;
use chrono::Utc;
use num_traits::FromPrimitive;
use rocket::serde::json::Json;
use rocket::{
request::{Form, FormItems, FromForm},
form::{Form, FromForm},
Route,
};
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
api::{
core::accounts::{PreloginData, _prelogin},
core::two_factor::{duo, email, email::EmailTokenData, yubikey},
ApiResult, EmptyResult, JsonResult,
ApiResult, EmptyResult, JsonResult, JsonUpcase,
},
auth::ClientIp,
db::{models::*, DbConn},
@@ -19,17 +20,17 @@ use crate::{
};
pub fn routes() -> Vec<Route> {
routes![login]
routes![login, prelogin]
}
#[post("/connect/token", data = "<data>")]
fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
async fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
let data: ConnectData = data.into_inner();
match data.grant_type.as_ref() {
"refresh_token" => {
_check_is_some(&data.refresh_token, "refresh_token cannot be blank")?;
_refresh_login(data, conn)
_refresh_login(data, conn).await
}
"password" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
@@ -41,26 +42,35 @@ fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, conn, &ip)
_password_login(data, conn, &ip).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
_check_is_some(&data.client_secret, "client_secret cannot be blank")?;
_check_is_some(&data.scope, "scope cannot be blank")?;
_api_key_login(data, conn, &ip).await
}
t => err!("Invalid type", t),
}
}
fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
async fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
// Extract token
let token = data.refresh_token.unwrap();
// Get device by refresh token
let mut device = Device::find_by_refresh_token(&token, &conn).map_res("Invalid refresh token")?;
let mut device = Device::find_by_refresh_token(&token, &conn).await.map_res("Invalid refresh token")?;
// COMMON
let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
let scope = "api offline_access";
let scope_vec = vec!["api".into(), "offline_access".into()];
let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
// Common
let user = User::find_by_uuid(&device.user_uuid, &conn).await.unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
device.save(&conn)?;
Ok(Json(json!({
"access_token": access_token,
"expires_in": expires_in,
@@ -72,21 +82,25 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": "api offline_access",
"scope": scope,
"unofficialServer": true,
})))
}
fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
async fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api offline_access" {
err!("Scope not supported")
}
let scope_vec = vec!["api".into(), "offline_access".into()];
// Ratelimit the login
crate::ratelimit::check_limit_login(&ip.ip)?;
// Get the user
let username = data.username.as_ref().unwrap();
let user = match User::find_by_mail(username, &conn) {
let username = data.username.as_ref().unwrap().trim();
let user = match User::find_by_mail(username, &conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
};
@@ -102,10 +116,9 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
}
let now = Local::now();
let now = Utc::now().naive_utc();
if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
let now = now.naive_utc();
if user.last_verifying_at.is_none()
|| now.signed_duration_since(user.last_verifying_at.unwrap()).num_seconds()
> CONFIG.signups_verify_resend_time() as i64
@@ -118,7 +131,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
user.last_verifying_at = Some(now);
user.login_verify_count += 1;
if let Err(e) = user.save(&conn) {
if let Err(e) = user.save(&conn).await {
error!("Error updating user: {:#?}", e);
}
@@ -132,9 +145,9 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
err!("Please verify your email before trying again.", format!("IP: {}. Username: {}.", ip.ip, username))
}
let (mut device, new_device) = get_device(&data, &conn, &user);
let (mut device, new_device) = get_device(&data, &conn, &user).await;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, &conn)?;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, &conn).await?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name) {
@@ -147,10 +160,9 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
}
// Common
let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
device.save(&conn)?;
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
let mut result = json!({
"access_token": access_token,
@@ -164,7 +176,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false,// TODO: Same as above
"scope": "api offline_access",
"scope": scope,
"unofficialServer": true,
});
@@ -176,8 +188,78 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
Ok(Json(result))
}
async fn _api_key_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api" {
err!("Scope not supported")
}
let scope_vec = vec!["api".into()];
// Ratelimit the login
crate::ratelimit::check_limit_login(&ip.ip)?;
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let user_uuid = match client_id.strip_prefix("user.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
};
let user = match User::find_by_uuid(user_uuid, &conn).await {
Some(user) => user,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
};
// Check if the user is disabled
if !user.enabled {
err!("This user has been disabled (API key login)", format!("IP: {}. Username: {}.", ip.ip, user.email))
}
// Check API key. Note that API key logins bypass 2FA.
let client_secret = data.client_secret.as_ref().unwrap();
if !user.check_valid_api_key(client_secret) {
err!("Incorrect client_secret", format!("IP: {}. Username: {}.", ip.ip, user.email))
}
let (mut device, new_device) = get_device(&data, &conn, &user).await;
if CONFIG.mail_enabled() && new_device {
let now = Utc::now().naive_utc();
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name) {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
err!("Could not send login notification email. Please contact your administrator.")
}
}
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
device.save(&conn).await?;
info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
// Note: No refresh_token is returned. The CLI just repeats the
// client_credentials login flow when the existing token expires.
Ok(Json(json!({
"access_token": access_token,
"expires_in": expires_in,
"token_type": "Bearer",
"Key": user.akey,
"PrivateKey": user.private_key,
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false, // TODO: Same as above
"scope": scope,
"unofficialServer": true,
})))
}
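For illustration only, an API-key login might look like this on the wire (the mount path, reqwest client and placeholder values are assumptions; the field names come from ConnectData below):
// Hypothetical client_credentials request; no refresh_token comes back, so the
// client simply repeats this call once the access_token expires.
let params = [
    ("grant_type", "client_credentials"),
    ("client_id", "user.<user-uuid>"),
    ("client_secret", "<api-key>"),
    ("scope", "api"),
    ("device_identifier", "<device-uuid>"),
    ("device_name", "cli"),
    ("device_type", "<device-type>"),
];
let res = client
    .post(format!("{}/identity/connect/token", base_url))
    .form(&params)
    .send()
    .await?;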
/// Retrieves an existing device or creates a new device from ConnectData and the User
fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool) {
async fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool) {
// On iOS, device_type sends "iOS", on others it sends a number
let device_type = util::try_parse_string(data.device_type.as_ref()).unwrap_or(0);
let device_id = data.device_identifier.clone().expect("No device id provided");
@@ -185,17 +267,8 @@ fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool)
let mut new_device = false;
// Find device or create new
let device = match Device::find_by_uuid(&device_id, conn) {
Some(device) => {
// Check if owned device, and recreate if not
if device.user_uuid != user.uuid {
info!("Device exists but is owned by another user. The old device will be discarded");
new_device = true;
Device::new(device_id, user.uuid.clone(), device_name, device_type)
} else {
device
}
}
let device = match Device::find_by_uuid_and_user(&device_id, &user.uuid, conn).await {
Some(device) => device,
None => {
new_device = true;
Device::new(device_id, user.uuid.clone(), device_name, device_type)
@@ -205,26 +278,28 @@ fn get_device(data: &ConnectData, conn: &DbConn, user: &User) -> (Device, bool)
(device, new_device)
}
fn twofactor_auth(
async fn twofactor_auth(
user_uuid: &str,
data: &ConnectData,
device: &mut Device,
ip: &ClientIp,
conn: &DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(user_uuid, conn);
let twofactors = TwoFactor::find_by_user(user_uuid, conn).await;
// No twofactor token if twofactor is disabled
if twofactors.is_empty() {
return Ok(None);
}
TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn).await?;
let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two-factor provider, assume the first one
let twofactor_code = match data.two_factor_token {
Some(ref code) => code,
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA token not provided"),
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?, "2FA token not provided"),
};
let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);
@@ -237,16 +312,17 @@ fn twofactor_auth(
match TwoFactorType::from_i32(selected_id) {
Some(TwoFactorType::Authenticator) => {
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn).await?
}
Some(TwoFactorType::Webauthn) => {
_tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn).await?
}
Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::Webauthn) => _tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::Duo) => {
_tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?
_tf::duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
}
Some(TwoFactorType::Email) => {
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn)?
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn).await?
}
Some(TwoFactorType::Remember) => {
@@ -255,13 +331,18 @@ fn twofactor_auth(
remember = 1; // Make sure we also return the token here, otherwise it will only remember the first time
}
_ => {
err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA Remember token not provided")
err_json!(
_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?,
"2FA Remember token not provided"
)
}
}
}
_ => err!("Invalid two factor provider"),
}
TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn).await?;
if !CONFIG.disable_2fa_remember() && remember == 1 {
Ok(Some(device.refresh_twofactor_remember()))
} else {
@@ -274,7 +355,7 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
tf.map(|t| t.data).map_res("Two factor doesn't exist")
}
fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> ApiResult<Value> {
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> ApiResult<Value> {
use crate::api::core::two_factor;
let mut result = json!({
@@ -290,38 +371,18 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
match TwoFactorType::from_i32(*provider) {
Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
Some(TwoFactorType::U2f) if CONFIG.domain_set() => {
let request = two_factor::u2f::generate_u2f_login(user_uuid, conn)?;
let mut challenge_list = Vec::new();
for key in request.registered_keys {
challenge_list.push(json!({
"appId": request.app_id,
"challenge": request.challenge,
"version": key.version,
"keyHandle": key.key_handle,
}));
}
let challenge_list_str = serde_json::to_string(&challenge_list).unwrap();
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Challenges": challenge_list_str,
});
}
Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn)?;
let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = request.0;
}
Some(TwoFactorType::Duo) => {
let email = match User::find_by_uuid(user_uuid, conn) {
let email = match User::find_by_uuid(user_uuid, conn).await {
Some(u) => u.email,
None => err!("User does not exist"),
};
let (signature, host) = duo::generate_duo_signature(&email, conn)?;
let (signature, host) = duo::generate_duo_signature(&email, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Host": host,
@@ -330,7 +391,7 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
}
Some(tf_type @ TwoFactorType::YubiKey) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No YubiKey devices registered"),
};
@@ -345,14 +406,14 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Some(tf_type @ TwoFactorType::Email) => {
use crate::api::core::two_factor as _tf;
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn) {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No twofactor email registered"),
};
// Send email immediately if email is the only 2FA option
if providers.len() == 1 {
_tf::email::send_token(user_uuid, conn)?
_tf::email::send_token(user_uuid, conn).await?
}
let email_data = EmailTokenData::from_json(&twofactor.data)?;
@@ -368,62 +429,63 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Ok(result)
}
// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default)]
#[allow(non_snake_case)]
struct ConnectData {
grant_type: String, // refresh_token, password
// Needed for grant_type="refresh_token"
refresh_token: Option<String>,
// Needed for grant_type="password"
client_id: Option<String>, // web, cli, desktop, browser, mobile
password: Option<String>,
scope: Option<String>,
username: Option<String>,
device_identifier: Option<String>,
device_name: Option<String>,
device_type: Option<String>,
device_push_token: Option<String>, // Unused; mobile device push not yet supported.
// Needed for two-factor auth
two_factor_provider: Option<i32>,
two_factor_token: Option<String>,
two_factor_remember: Option<i32>,
#[post("/accounts/prelogin", data = "<data>")]
async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
impl<'f> FromForm<'f> for ConnectData {
type Error = String;
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default, FromForm)]
#[allow(non_snake_case)]
struct ConnectData {
#[field(name = uncased("grant_type"))]
#[field(name = uncased("granttype"))]
grant_type: String, // refresh_token, password, client_credentials (API key)
fn from_form(items: &mut FormItems<'f>, _strict: bool) -> Result<Self, Self::Error> {
let mut form = Self::default();
for item in items {
let (key, value) = item.key_value_decoded();
let mut normalized_key = key.to_lowercase();
normalized_key.retain(|c| c != '_'); // Remove '_'
// Needed for grant_type="refresh_token"
#[field(name = uncased("refresh_token"))]
#[field(name = uncased("refreshtoken"))]
refresh_token: Option<String>,
match normalized_key.as_ref() {
"granttype" => form.grant_type = value,
"refreshtoken" => form.refresh_token = Some(value),
"clientid" => form.client_id = Some(value),
"password" => form.password = Some(value),
"scope" => form.scope = Some(value),
"username" => form.username = Some(value),
"deviceidentifier" => form.device_identifier = Some(value),
"devicename" => form.device_name = Some(value),
"devicetype" => form.device_type = Some(value),
"devicepushtoken" => form.device_push_token = Some(value),
"twofactorprovider" => form.two_factor_provider = value.parse().ok(),
"twofactortoken" => form.two_factor_token = Some(value),
"twofactorremember" => form.two_factor_remember = value.parse().ok(),
key => warn!("Detected unexpected parameter during login: {}", key),
}
}
// Needed for grant_type = "password" | "client_credentials"
#[field(name = uncased("client_id"))]
#[field(name = uncased("clientid"))]
client_id: Option<String>, // web, cli, desktop, browser, mobile
#[field(name = uncased("client_secret"))]
#[field(name = uncased("clientsecret"))]
client_secret: Option<String>,
#[field(name = uncased("password"))]
password: Option<String>,
#[field(name = uncased("scope"))]
scope: Option<String>,
#[field(name = uncased("username"))]
username: Option<String>,
Ok(form)
}
#[field(name = uncased("device_identifier"))]
#[field(name = uncased("deviceidentifier"))]
device_identifier: Option<String>,
#[field(name = uncased("device_name"))]
#[field(name = uncased("devicename"))]
device_name: Option<String>,
#[field(name = uncased("device_type"))]
#[field(name = uncased("devicetype"))]
device_type: Option<String>,
#[allow(unused)]
#[field(name = uncased("device_push_token"))]
#[field(name = uncased("devicepushtoken"))]
_device_push_token: Option<String>, // Unused; mobile device push not yet supported.
// Needed for two-factor auth
#[field(name = uncased("two_factor_provider"))]
#[field(name = uncased("twofactorprovider"))]
two_factor_provider: Option<i32>,
#[field(name = uncased("two_factor_token"))]
#[field(name = uncased("twofactortoken"))]
two_factor_token: Option<String>,
#[field(name = uncased("two_factor_remember"))]
#[field(name = uncased("twofactorremember"))]
two_factor_remember: Option<i32>,
}
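Because every field carries uncased aliases for both naming styles, clients may post either form; for example (illustrative bodies, not from this change), "grant_type=password&deviceIdentifier=..." and "granttype=password&device_identifier=..." bind to the same ConnectData.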
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {

View File

@@ -5,7 +5,7 @@ mod identity;
mod notifications;
mod web;
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use serde_json::Value;
pub use crate::api::{
@@ -13,6 +13,8 @@ pub use crate::api::{
core::purge_sends,
core::purge_trashed_ciphers,
core::routes as core_routes,
core::two_factor::send_incomplete_2fa_notifications,
core::{emergency_notification_reminder_job, emergency_request_timeout_job},
icons::routes as icons_routes,
identity::routes as identity_routes,
notifications::routes as notifications_routes,

View File

@@ -1,10 +1,10 @@
use std::sync::atomic::{AtomicBool, Ordering};
use rocket::serde::json::Json;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value as JsonValue;
use crate::{api::EmptyResult, auth::Headers, db::DbConn, Error, CONFIG};
use crate::{api::EmptyResult, auth::Headers, Error, CONFIG};
pub fn routes() -> Vec<Route> {
routes![negotiate, websockets_err]
@@ -30,7 +30,7 @@ fn websockets_err() -> EmptyResult {
}
#[post("/hub/negotiate")]
fn negotiate(_headers: Headers, _conn: DbConn) -> Json<JsonValue> {
fn negotiate(_headers: Headers) -> Json<JsonValue> {
use crate::crypto;
use data_encoding::BASE64URL;
@@ -65,7 +65,7 @@ use chashmap::CHashMap;
use chrono::NaiveDateTime;
use serde_json::from_str;
use crate::db::models::{Cipher, Folder, User};
use crate::db::models::{Cipher, Folder, Send, User};
use rmpv::Value;
@@ -335,6 +335,23 @@ impl WebSocketUsers {
self.send_update(uuid, &data).ok();
}
}
pub fn send_send_update(&self, ut: UpdateType, send: &Send, user_uuids: &[String]) {
let user_uuid = convert_option(send.user_uuid.clone());
let data = create_update(
vec![
("Id".into(), send.uuid.clone().into()),
("UserId".into(), user_uuid),
("RevisionDate".into(), serialize_date(send.revision_date)),
],
ut,
);
for uuid in user_uuids {
self.send_update(uuid, &data).ok();
}
}
}
/* Message Structure
@@ -400,7 +417,7 @@ pub enum UpdateType {
}
use rocket::State;
pub type Notify<'a> = State<'a, WebSocketUsers>;
pub type Notify<'a> = &'a State<WebSocketUsers>;
pub fn start_notification_server() -> WebSocketUsers {
let factory = WsFactory::init();
@@ -413,12 +430,11 @@ pub fn start_notification_server() -> WebSocketUsers {
settings.queue_size = 2;
settings.panic_on_internal = false;
ws::Builder::new()
.with_settings(settings)
.build(factory)
.unwrap()
.listen((CONFIG.websocket_address().as_str(), CONFIG.websocket_port()))
.unwrap();
let ws = ws::Builder::new().with_settings(settings).build(factory).unwrap();
CONFIG.set_ws_shutdown_handle(ws.broadcaster());
ws.listen((CONFIG.websocket_address().as_str(), CONFIG.websocket_port())).unwrap();
warn!("WS Server stopped!");
});
}

View File

@@ -1,10 +1,11 @@
use std::path::{Path, PathBuf};
use rocket::{http::ContentType, response::content::Content, response::NamedFile, Route};
use rocket_contrib::json::Json;
use rocket::serde::json::Json;
use rocket::{fs::NamedFile, http::ContentType, Route};
use serde_json::Value;
use crate::{
api::core::now,
error::Error,
util::{Cached, SafeString},
CONFIG,
@@ -21,77 +22,74 @@ pub fn routes() -> Vec<Route> {
}
#[get("/")]
fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).ok())
async fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).await.ok(), false)
}
#[get("/app-id.json")]
fn app_id() -> Cached<Content<Json<Value>>> {
fn app_id() -> Cached<(ContentType, Json<Value>)> {
let content_type = ContentType::new("application", "fido.trusted-apps+json");
Cached::long(Content(
content_type,
Json(json!({
"trustedFacets": [
{
"version": { "major": 1, "minor": 0 },
"ids": [
// Per <https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-appid-and-facets-v2.0-id-20180227.html#determining-the-facetid-of-a-calling-application>:
//
// "In the Web case, the FacetID MUST be the Web Origin [RFC6454]
// of the web page triggering the FIDO operation, written as
// a URI with an empty path. Default ports are omitted and any
// path component is ignored."
//
// This leaves it unclear as to whether the path must be empty,
// or whether it can be non-empty and will be ignored. To be on
// the safe side, use a proper web origin (with empty path).
&CONFIG.domain_origin(),
"ios:bundle-id:com.8bit.bitwarden",
"android:apk-key-hash:dUGFzUzf3lmHSLBDBIv+WaFyZMI" ]
}]
})),
))
Cached::long(
(
content_type,
Json(json!({
"trustedFacets": [
{
"version": { "major": 1, "minor": 0 },
"ids": [
// Per <https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-appid-and-facets-v2.0-id-20180227.html#determining-the-facetid-of-a-calling-application>:
//
// "In the Web case, the FacetID MUST be the Web Origin [RFC6454]
// of the web page triggering the FIDO operation, written as
// a URI with an empty path. Default ports are omitted and any
// path component is ignored."
//
// This leaves it unclear as to whether the path must be empty,
// or whether it can be non-empty and will be ignored. To be on
// the safe side, use a proper web origin (with empty path).
&CONFIG.domain_origin(),
"ios:bundle-id:com.8bit.bitwarden",
"android:apk-key-hash:dUGFzUzf3lmHSLBDBIv+WaFyZMI" ]
}]
})),
),
true,
)
}
#[get("/<p..>", rank = 10)] // Only match this if the other routes don't match
fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).ok())
async fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).await.ok(), true)
}
#[get("/attachments/<uuid>/<file_id>")]
fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).ok()
async fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).await.ok()
}
// We use DbConn here to let the alive healthcheck also verify the database connection.
use crate::db::DbConn;
#[get("/alive")]
fn alive() -> Json<String> {
use crate::util::format_date;
use chrono::Utc;
Json(format_date(&Utc::now().naive_utc()))
fn alive(_conn: DbConn) -> Json<String> {
now()
}
#[get("/bwrs_static/<filename>")]
fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
#[get("/vw_static/<filename>")]
fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Error> {
match filename.as_ref() {
"mail-github.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
"error-x.svg" => Ok(Content(ContentType::SVG, include_bytes!("../static/images/error-x.svg"))),
"hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"vaultwarden-icon.png" => {
Ok(Content(ContentType::PNG, include_bytes!("../static/images/vaultwarden-icon.png")))
}
"bootstrap.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => {
Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js")))
}
"identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"mail-github.png" => Ok((ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok((ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
"error-x.svg" => Ok((ContentType::SVG, include_bytes!("../static/images/error-x.svg"))),
"hibp.png" => Ok((ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"vaultwarden-icon.png" => Ok((ContentType::PNG, include_bytes!("../static/images/vaultwarden-icon.png"))),
"bootstrap.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js"))),
"identicon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.6.0.slim.js" => {
Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.0.slim.js")))
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.0.slim.js")))
}
_ => err!(format!("Static file not found: {}", filename)),
}
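The tuple form works because Rocket 0.5 implements Responder for (ContentType, R), replacing the old Content wrapper; a minimal standalone sketch (hypothetical route, not part of this change):
// Hypothetical route showing the same tuple-responder pattern.
#[get("/ping.txt")]
fn ping() -> (ContentType, &'static str) {
    (ContentType::Plain, "pong")
}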

View File

@@ -22,6 +22,8 @@ static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
static JWT_INVITE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|invite", CONFIG.domain_origin()));
static JWT_EMERGENCY_ACCESS_INVITE_ISSUER: Lazy<String> =
Lazy::new(|| format!("{}|emergencyaccessinvite", CONFIG.domain_origin()));
static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFIG.domain_origin()));
static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
@@ -53,15 +55,11 @@ pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
}
fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Error> {
let validation = jsonwebtoken::Validation {
leeway: 30, // 30 seconds
validate_exp: true,
validate_nbf: true,
aud: None,
iss: Some(issuer),
sub: None,
algorithms: vec![JWT_ALGORITHM],
};
let mut validation = jsonwebtoken::Validation::new(JWT_ALGORITHM);
validation.leeway = 30; // 30 seconds
validation.validate_exp = true;
validation.validate_nbf = true;
validation.set_issuer(&[issuer]);
let token = token.replace(char::is_whitespace, "");
jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation).map(|d| d.claims).map_res("Error decoding JWT")
@@ -75,6 +73,10 @@ pub fn decode_invite(token: &str) -> Result<InviteJwtClaims, Error> {
decode_jwt(token, JWT_INVITE_ISSUER.to_string())
}
pub fn decode_emergency_access_invite(token: &str) -> Result<EmergencyAccessInviteJwtClaims, Error> {
decode_jwt(token, JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string())
}
pub fn decode_delete(token: &str) -> Result<BasicJwtClaims, Error> {
decode_jwt(token, JWT_DELETE_ISSUER.to_string())
}
@@ -159,6 +161,43 @@ pub fn generate_invite_claims(
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct EmergencyAccessInviteJwtClaims {
// Not before
pub nbf: i64,
// Expiration time
pub exp: i64,
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub email: String,
pub emer_id: Option<String>,
pub grantor_name: Option<String>,
pub grantor_email: Option<String>,
}
pub fn generate_emergency_access_invite_claims(
uuid: String,
email: String,
emer_id: Option<String>,
grantor_name: Option<String>,
grantor_email: Option<String>,
) -> EmergencyAccessInviteJwtClaims {
let time_now = Utc::now().naive_utc();
EmergencyAccessInviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
sub: uuid,
email,
emer_id,
grantor_name,
grantor_email,
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicJwtClaims {
// Not before
@@ -214,7 +253,10 @@ pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
//
// Bearer token authentication
//
use rocket::request::{FromRequest, Outcome, Request};
use rocket::{
outcome::try_outcome,
request::{FromRequest, Outcome, Request},
};
use crate::db::{
models::{CollectionUser, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
@@ -225,10 +267,11 @@ pub struct Host {
pub host: String,
}
impl<'a, 'r> FromRequest<'a, 'r> for Host {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Host {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
// Get host
@@ -271,17 +314,14 @@ pub struct Headers {
pub user: User,
}
impl<'a, 'r> FromRequest<'a, 'r> for Headers {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Headers {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
let host = match Host::from_request(request) {
Outcome::Forward(_) => return Outcome::Forward(()),
Outcome::Failure(f) => return Outcome::Failure(f),
Outcome::Success(host) => host.host,
};
let host = try_outcome!(Host::from_request(request).await).host;
// Get access_token
let access_token: &str = match headers.get_one("Authorization") {
@@ -301,17 +341,17 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
let device_uuid = claims.device;
let user_uuid = claims.sub;
let conn = match request.guard::<DbConn>() {
let conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let device = match Device::find_by_uuid(&device_uuid, &conn) {
let device = match Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &conn).await {
Some(device) => device,
None => err_handler!("Invalid device id"),
};
let user = match User::find_by_uuid(&user_uuid, &conn) {
let user = match User::find_by_uuid(&user_uuid, &conn).await {
Some(user) => user,
None => err_handler!("Device has no user associated"),
};
@@ -320,7 +360,7 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
if let Some(stamp_exception) =
user.stamp_exception.as_deref().and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
{
let current_route = match request.route().and_then(|r| r.name) {
let current_route = match request.route().and_then(|r| r.name.as_deref()) {
Some(name) => name,
_ => err_handler!("Error getting current route for stamp exception"),
};
@@ -333,7 +373,7 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
// This prevents checking this stamp exception for new requests.
let mut user = user;
user.reset_stamp_exception();
if let Err(e) = user.save(&conn) {
if let Err(e) = user.save(&conn).await {
error!("Error updating user: {:#?}", e);
}
err_handler!("Stamp exception is expired")
@@ -367,14 +407,14 @@ pub struct OrgHeaders {
// org_id is usually the second path param ("/organizations/<org_id>"),
// but there are cases where it is a query value.
// First check the path; if that is not a valid UUID, try the query values.
fn get_org_id(request: &Request) -> Option<String> {
if let Some(Ok(org_id)) = request.get_param::<String>(1) {
fn get_org_id(request: &Request<'_>) -> Option<String> {
if let Some(Ok(org_id)) = request.param::<String>(1) {
if uuid::Uuid::parse_str(&org_id).is_ok() {
return Some(org_id);
}
}
if let Some(Ok(org_id)) = request.get_query_value::<String>("organizationId") {
if let Some(Ok(org_id)) = request.query_value::<String>("organizationId") {
if uuid::Uuid::parse_str(&org_id).is_ok() {
return Some(org_id);
}
@@ -383,52 +423,48 @@ fn get_org_id(request: &Request) -> Option<String> {
None
}
impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for OrgHeaders {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<Headers>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
match get_org_id(request) {
Some(org_id) => {
let conn = match request.guard::<DbConn>() {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(Headers::from_request(request).await);
match get_org_id(request) {
Some(org_id) => {
let conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let user = headers.user;
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &conn) {
Some(user) => {
if user.status == UserOrgStatus::Confirmed as i32 {
user
} else {
err_handler!("The current user isn't confirmed member of the organization")
}
}
None => err_handler!("The current user isn't member of the organization"),
};
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user,
org_user_type: {
if let Some(org_usr_type) = UserOrgType::from_i32(org_user.atype) {
org_usr_type
} else {
// This should only happen if the DB is corrupted
err_handler!("Unknown user type in the database")
}
},
org_user,
org_id,
})
let user = headers.user;
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, &org_id, &conn).await {
Some(user) => {
if user.status == UserOrgStatus::Confirmed as i32 {
user
} else {
err_handler!("The current user isn't confirmed member of the organization")
}
}
_ => err_handler!("Error getting the organization id"),
}
None => err_handler!("The current user isn't member of the organization"),
};
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user,
org_user_type: {
if let Some(org_usr_type) = UserOrgType::from_i32(org_user.atype) {
org_usr_type
} else {
// This should only happen if the DB is corrupted
err_handler!("Unknown user type in the database")
}
},
org_user,
org_id,
})
}
_ => err_handler!("Error getting the organization id"),
}
}
}
@@ -440,25 +476,21 @@ pub struct AdminHeaders {
pub org_user_type: UserOrgType,
}
impl<'a, 'r> FromRequest<'a, 'r> for AdminHeaders {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for AdminHeaders {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type >= UserOrgType::Admin {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be Admin or Owner to call this endpoint")
}
}
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type >= UserOrgType::Admin {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be Admin or Owner to call this endpoint")
}
}
}
@@ -476,14 +508,14 @@ impl From<AdminHeaders> for Headers {
// col_id is usually the fourth path param ("/organizations/<org_id>/collections/<col_id>"),
// but there could be cases where it is a query value.
// First check the path; if that is not a valid UUID, try the query values.
fn get_col_id(request: &Request) -> Option<String> {
if let Some(Ok(col_id)) = request.get_param::<String>(3) {
fn get_col_id(request: &Request<'_>) -> Option<String> {
if let Some(Ok(col_id)) = request.param::<String>(3) {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
}
}
if let Some(Ok(col_id)) = request.get_query_value::<String>("collectionId") {
if let Some(Ok(col_id)) = request.query_value::<String>("collectionId") {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
}
@@ -502,46 +534,40 @@ pub struct ManagerHeaders {
pub org_user_type: UserOrgType,
}
impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ManagerHeaders {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type >= UserOrgType::Manager {
match get_col_id(request) {
Some(col_id) => {
let conn = match request.guard::<DbConn>() {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type >= UserOrgType::Manager {
match get_col_id(request) {
Some(col_id) => {
let conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
if !headers.org_user.has_full_access() {
match CollectionUser::find_by_collection_and_user(
&col_id,
&headers.org_user.user_uuid,
&conn,
) {
Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"),
}
}
if !headers.org_user.has_full_access() {
match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn)
.await
{
Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"),
}
_ => err_handler!("Error getting the collection id"),
}
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
_ => err_handler!("Error getting the collection id"),
}
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
}
}
@@ -565,25 +591,21 @@ pub struct ManagerHeadersLoose {
pub org_user_type: UserOrgType,
}
impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeadersLoose {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ManagerHeadersLoose {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type >= UserOrgType::Manager {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
}
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type >= UserOrgType::Manager {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
}
}
@@ -604,24 +626,20 @@ pub struct OwnerHeaders {
pub user: User,
}
impl<'a, 'r> FromRequest<'a, 'r> for OwnerHeaders {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for OwnerHeaders {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type == UserOrgType::Owner {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
})
} else {
err_handler!("You need to be Owner to call this endpoint")
}
}
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type == UserOrgType::Owner {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
})
} else {
err_handler!("You need to be Owner to call this endpoint")
}
}
}
@@ -635,10 +653,11 @@ pub struct ClientIp {
pub ip: IpAddr,
}
impl<'a, 'r> FromRequest<'a, 'r> for ClientIp {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ClientIp {
type Error = ();
fn from_request(req: &'a Request<'r>) -> Outcome<Self, Self::Error> {
async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let ip = if CONFIG._ip_header_enabled() {
req.headers().get_one(&CONFIG.ip_header()).and_then(|ip| {
match ip.find(',') {

View File

@@ -2,7 +2,6 @@ use std::process::exit;
use std::sync::RwLock;
use once_cell::sync::Lazy;
use regex::Regex;
use reqwest::Url;
use crate::{
@@ -23,21 +22,6 @@ pub static CONFIG: Lazy<Config> = Lazy::new(|| {
})
});
static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());
const PRIVACY_CONFIG: &[&str] = &[
"allowed_iframe_ancestors",
"database_url",
"domain_origin",
"domain_path",
"domain",
"helo_name",
"org_creation_users",
"signups_domains_whitelist",
"smtp_from",
"smtp_host",
"smtp_username",
];
pub type Pass = String;
macro_rules! make_config {
@@ -52,6 +36,9 @@ macro_rules! make_config {
pub struct Config { inner: RwLock<Inner> }
struct Inner {
rocket_shutdown_handle: Option<rocket::Shutdown>,
ws_shutdown_handle: Option<ws::Sender>,
templates: Handlebars<'static>,
config: ConfigItems,
@@ -61,7 +48,7 @@ macro_rules! make_config {
_overrides: Vec<String>,
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
#[derive(Clone, Default, Deserialize, Serialize)]
pub struct ConfigBuilder {
$($(
#[serde(skip_serializing_if = "Option::is_none")]
@@ -72,13 +59,13 @@ macro_rules! make_config {
impl ConfigBuilder {
#[allow(clippy::field_reassign_with_default)]
fn from_env() -> Self {
match dotenv::from_path(".env") {
match dotenvy::from_path(get_env("ENV_FILE").unwrap_or_else(|| String::from(".env"))) {
Ok(_) => (),
Err(e) => match e {
dotenv::Error::LineParse(msg, pos) => {
dotenvy::Error::LineParse(msg, pos) => {
panic!("Error loading the .env file:\nNear {:?} on position {}\nPlease fix and restart!\n", msg, pos);
},
dotenv::Error::Io(ioerr) => match ioerr.kind() {
dotenvy::Error::Io(ioerr) => match ioerr.kind() {
std::io::ErrorKind::NotFound => {
println!("[INFO] No .env file found.\n");
},
@@ -133,19 +120,6 @@ macro_rules! make_config {
builder
}
/// Returns a new builder with all the elements from self,
/// except those that are equal in both sides
fn _remove(&self, other: &Self) -> Self {
let mut builder = ConfigBuilder::default();
$($(
if &self.$name != &other.$name {
builder.$name = self.$name.clone();
}
)+)+
builder
}
fn build(&self) -> ConfigItems {
let mut config = ConfigItems::default();
let _domain_set = self.domain.is_some();
@@ -161,12 +135,13 @@ macro_rules! make_config {
}
}
#[derive(Debug, Clone, Default)]
pub struct ConfigItems { $($(pub $name: make_config!{@type $ty, $none_action}, )+)+ }
#[derive(Clone, Default)]
struct ConfigItems { $($( $name: make_config!{@type $ty, $none_action}, )+)+ }
#[allow(unused)]
impl Config {
$($(
$(#[doc = $doc])+
pub fn $name(&self) -> make_config!{@type $ty, $none_action} {
self.inner.read().unwrap().config.$name.clone()
}
@@ -189,38 +164,91 @@ macro_rules! make_config {
fn _get_doc(doc: &str) -> serde_json::Value {
let mut split = doc.split("|>").map(str::trim);
json!({
"name": split.next(),
"description": split.next()
// We do not use the json!() macro here since that causes a lot of macro recursion.
// This slows down compile time and it also causes issues with rust-analyzer
serde_json::Value::Object({
let mut doc_json = serde_json::Map::new();
doc_json.insert("name".into(), serde_json::to_value(split.next()).unwrap());
doc_json.insert("description".into(), serde_json::to_value(split.next()).unwrap());
doc_json
})
}
json!([ $({
"group": stringify!($group),
"grouptoggle": stringify!($($group_enabled)?),
"groupdoc": make_config!{ @show $($groupdoc)? },
"elements": [
$( {
"editable": $editable,
"name": stringify!($name),
"value": cfg.$name,
"default": def.$name,
"type": _get_form_type(stringify!($ty)),
"doc": _get_doc(concat!($($doc),+)),
"overridden": overriden.contains(&stringify!($name).to_uppercase()),
}, )+
]}, )+ ])
// We do not use the json!() macro here since that causes a lot of macro recursion.
// This slows down compile time and it also causes issues with rust-analyzer
serde_json::Value::Array(<[_]>::into_vec(Box::new([
$(
serde_json::Value::Object({
let mut group = serde_json::Map::new();
group.insert("group".into(), (stringify!($group)).into());
group.insert("grouptoggle".into(), (stringify!($($group_enabled)?)).into());
group.insert("groupdoc".into(), (make_config!{ @show $($groupdoc)? }).into());
group.insert("elements".into(), serde_json::Value::Array(<[_]>::into_vec(Box::new([
$(
serde_json::Value::Object({
let mut element = serde_json::Map::new();
element.insert("editable".into(), ($editable).into());
element.insert("name".into(), (stringify!($name)).into());
element.insert("value".into(), serde_json::to_value(cfg.$name).unwrap());
element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
element.insert("overridden".into(), (overriden.contains(&stringify!($name).to_uppercase())).into());
element
}),
)+
]))));
group
}),
)+
])))
}
pub fn get_support_json(&self) -> serde_json::Value {
// Define which config keys need to be masked.
// Pass types are always masked, so there is no need to put them in the list.
// Besides Pass, only String types will be masked via _privacy_mask.
const PRIVACY_CONFIG: &[&str] = &[
"allowed_iframe_ancestors",
"database_url",
"domain_origin",
"domain_path",
"domain",
"helo_name",
"org_creation_users",
"signups_domains_whitelist",
"smtp_from",
"smtp_host",
"smtp_username",
];
let cfg = {
let inner = &self.inner.read().unwrap();
inner.config.clone()
};
json!({ $($(
stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
)+)+ })
/// We map over the string and replace every alphanumeric character, '_' and '-' with '*'.
/// This is much faster (microseconds) than using a regex (milliseconds).
fn _privacy_mask(value: &str) -> String {
value.chars().map(|c|
match c {
c if c.is_alphanumeric() => '*',
'_' => '*',
'-' => '*',
_ => c
}
).collect::<String>()
}
serde_json::Value::Object({
let mut json = serde_json::Map::new();
$($(
json.insert(stringify!($name).into(), make_config!{ @supportstr $name, cfg.$name, $ty, $none_action });
)+)+;
json
})
}
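A quick illustration of the masking (made-up values; _privacy_mask is local to get_support_json, so these asserts are only a sketch):
// Alphanumerics, '_' and '-' become '*'; separators such as '@', '.', ':' and '/'
// survive, so the shape of a value stays recognizable without leaking its content.
assert_eq!(_privacy_mask("admin@example.com"), "*****@*******.***");
assert_eq!(_privacy_mask("https://vw.example.com"), "*****://**.*******.***");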
pub fn get_overrides(&self) -> Vec<String> {
@@ -228,29 +256,30 @@ macro_rules! make_config {
let inner = &self.inner.read().unwrap();
inner._overrides.clone()
};
overrides
}
}
};
// Support string print
( @supportstr $name:ident, $value:expr, Pass, option ) => { $value.as_ref().map(|_| String::from("***")) }; // Optional pass, we map to an Option<String> with "***"
( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { String::from("***") }; // Required pass, we return "***"
( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
( @supportstr $name:ident, $value:expr, Pass, option ) => { serde_json::to_value($value.as_ref().map(|_| String::from("***"))).unwrap() }; // Optional pass, we map to an Option<String> with "***"
( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { "***".into() }; // Required pass, we return "***"
( @supportstr $name:ident, $value:expr, String, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
if PRIVACY_CONFIG.contains(&stringify!($name)) {
json!($value.as_ref().map(|x| PRIVACY_REGEX.replace_all(&x.to_string(), "${1}*").to_string()))
serde_json::to_value($value.as_ref().map(|x| _privacy_mask(x) )).unwrap()
} else {
json!($value)
serde_json::to_value($value).unwrap()
}
};
( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
( @supportstr $name:ident, $value:expr, String, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
if PRIVACY_CONFIG.contains(&stringify!($name)) {
json!(PRIVACY_REGEX.replace_all(&$value.to_string(), "${1}*").to_string())
} else {
json!($value)
}
_privacy_mask(&$value).into()
} else {
($value).into()
}
};
( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { serde_json::to_value($value).unwrap() }; // Optional non-String value, serialized to JSON as-is (no masking)
( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { ($value).into() }; // Required non-String value, converted as-is (no masking)
// Group or empty string
( @show ) => { "" };
@@ -300,14 +329,14 @@ make_config! {
data_folder: String, false, def, "data".to_string();
/// Database URL
database_url: String, false, auto, |c| format!("{}/{}", c.data_folder, "db.sqlite3");
/// Database connection pool size
database_max_conns: u32, false, def, 10;
/// Icon cache folder
icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
/// Attachments folder
attachments_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "attachments");
/// Sends folder
sends_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "sends");
/// Temp folder |> Used for storing temporary file uploads
tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
/// Templates folder
templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates");
/// Session JWT key
@@ -333,6 +362,15 @@ make_config! {
/// Trash purge schedule |> Cron schedule of the job that checks for trashed items to delete permanently.
/// Defaults to daily. Set blank to disable this job.
trash_purge_schedule: String, false, def, "0 5 0 * * *".to_string();
/// Incomplete 2FA login schedule |> Cron schedule of the job that checks for incomplete 2FA logins.
/// Defaults to once every minute. Set blank to disable this job.
incomplete_2fa_schedule: String, false, def, "30 * * * * *".to_string();
/// Emergency notification reminder schedule |> Cron schedule of the job that sends expiration reminders to emergency access grantors.
/// Defaults to hourly. Set blank to disable this job.
emergency_notification_reminder_schedule: String, false, def, "0 5 * * * *".to_string();
/// Emergency request timeout schedule |> Cron schedule of the job that grants emergency access requests that have met the required wait time.
/// Defaults to hourly. Set blank to disable this job.
emergency_request_timeout_schedule: String, false, def, "0 5 * * * *".to_string();
},
/// General settings
@@ -366,9 +404,17 @@ make_config! {
/// sure to inform all users of any changes to this setting.
trash_auto_delete_days: i64, true, option;
/// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from
/// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
/// otherwise it will delete them and they won't be downloaded again.
/// Incomplete 2FA time limit |> Number of minutes to wait before a 2FA-enabled login is
/// considered incomplete, resulting in an email notification. An incomplete 2FA login is one
/// where the correct master password was provided but the required 2FA step was not completed,
/// which potentially indicates a master password compromise. Set to 0 to disable this check.
/// This setting applies globally to all users.
incomplete_2fa_time_limit: i64, true, def, 3;
/// Disable icon downloads |> Set to true to disable icon downloading in the internal icon service.
/// This still serves existing icons from $ICON_CACHE_FOLDER, without generating any external
/// network requests. $ICON_CACHE_TTL must also be set to 0; otherwise, the existing icons
/// will be deleted eventually, but won't be downloaded again.
disable_icon_download: bool, true, def, false;
/// Allow new signups |> Controls whether new users can register. Users can be invited by the vaultwarden admin even if this is disabled
signups_allowed: bool, true, def, true;
@@ -385,6 +431,8 @@ make_config! {
org_creation_users: String, true, def, "".to_string();
/// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
invitations_allowed: bool, true, def, true;
/// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
emergency_access_allowed: bool, true, def, true;
/// Password iterations |> Number of server-side password hashing iterations.
/// The changes only apply when a user changes their password. Not recommended to lower the value
password_iterations: i32, true, def, 100_000;
@@ -407,6 +455,19 @@ make_config! {
ip_header: String, true, def, "X-Real-IP".to_string();
/// Internal IP header property, used to avoid recomputing each time
_ip_header_enabled: bool, false, gen, |c| &c.ip_header.trim().to_lowercase() != "none";
/// Icon service |> The predefined icon services are: internal, bitwarden, duckduckgo, google.
/// To specify a custom icon service, set a URL template with exactly one instance of `{}`,
/// which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
/// `internal` refers to Vaultwarden's built-in icon fetching implementation. If an external
/// service is set, an icon request to Vaultwarden will return an HTTP redirect to the
/// corresponding icon at the external service.
icon_service: String, false, def, "internal".to_string();
/// Icon redirect code |> The HTTP status code to use for redirects to an external icon service.
/// The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
/// Temporary redirects are useful while testing different icon services, but once a service
/// has been decided on, consider using permanent redirects for cacheability. The legacy codes
/// are currently better supported by the Bitwarden clients.
icon_redirect_code: u32, true, def, 302;
/// Positive icon cache expiry |> Number of seconds to consider that an already cached icon is fresh. After this period, the icon will be redownloaded
icon_cache_ttl: u64, true, def, 2_592_000;
/// Negative icon cache expiry |> Number of seconds before trying to download an icon that failed again.
@@ -453,11 +514,30 @@ make_config! {
/// Max database connection retries |> Number of times to retry the database connection during startup, with 1 second between each retry, set to 0 to retry indefinitely
db_connection_retries: u32, false, def, 15;
/// Timeout when acquiring a database connection
database_timeout: u64, false, def, 30;
/// Database connection pool size
database_max_conns: u32, false, def, 10;
/// Database connection init |> SQL statements to run when creating a new database connection, mainly useful for connection-scoped pragmas. If empty, a database-specific default is used.
database_conn_init: String, false, def, "".to_string();
/// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in-front
disable_admin_token: bool, true, def, false;
/// Allowed iframe ancestors (Know the risks!) |> Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
allowed_iframe_ancestors: String, true, def, String::new();
/// Seconds between login requests |> Number of seconds, on average, between login and 2FA requests from the same IP address before rate limiting kicks in
login_ratelimit_seconds: u64, false, def, 60;
/// Max burst size for login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `login_ratelimit_seconds`. Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2
login_ratelimit_max_burst: u32, false, def, 10;
/// Seconds between admin requests |> Number of seconds, on average, between admin requests from the same IP address before rate limiting kicks in
admin_ratelimit_seconds: u64, false, def, 300;
/// Max burst size for admin requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `admin_ratelimit_seconds`
admin_ratelimit_max_burst: u32, false, def, 3;
},
/// Yubikey settings
@@ -492,12 +572,14 @@ make_config! {
_enable_smtp: bool, true, def, true;
/// Host
smtp_host: String, true, option;
/// Enable Secure SMTP |> (Explicit) - Enabling this by default would use STARTTLS (Standard ports 587 or 25)
smtp_ssl: bool, true, def, true;
/// Force TLS |> (Implicit) - Enabling this would force the use of an SSL/TLS connection, instead of upgrading an insecure one with STARTTLS (Standard port 465)
smtp_explicit_tls: bool, true, def, false;
/// DEPRECATED smtp_ssl |> DEPRECATED - Please use SMTP_SECURITY
smtp_ssl: bool, false, option;
/// DEPRECATED smtp_explicit_tls |> DEPRECATED - Please use SMTP_SECURITY
smtp_explicit_tls: bool, false, option;
/// Secure SMTP |> ("starttls", "force_tls", "off") Enable a secure connection. "starttls" (the default) uses explicit TLS on ports 587 or 25, "force_tls" uses implicit TLS on port 465, and "off" disables encryption
smtp_security: String, true, auto, |c| smtp_convert_deprecated_ssl_options(c.smtp_ssl, c.smtp_explicit_tls); // TODO: After deprecation make it `def, "starttls".to_string()`
/// Port
smtp_port: u16, true, auto, |c| if c.smtp_explicit_tls {465} else if c.smtp_ssl {587} else {25};
smtp_port: u16, true, auto, |c| if c.smtp_security == *"force_tls" {465} else if c.smtp_security == *"starttls" {587} else {25};
/// From Address
smtp_from: String, true, def, String::new();
/// From Name
@@ -524,8 +606,8 @@ make_config! {
email_2fa: _enable_email_2fa {
/// Enabled |> Disabling this will prevent users from setting up new email 2FA and from using any email 2FA they have already configured
_enable_email_2fa: bool, true, auto, |c| c._enable_smtp && c.smtp_host.is_some();
/// Email token size |> Number of digits in an email token (min: 6, max: 19). Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting.
email_token_size: u32, true, def, 6;
/// Email token size |> Number of digits in an email 2FA token (min: 6, max: 255). Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting.
email_token_size: u8, true, def, 6;
/// Token expiration time |> Maximum time in seconds a token is valid. This is the time the user has to open their email client and copy the token.
email_expiration_time: u64, true, def, 600;
/// Maximum attempts |> Maximum attempts before an email token is reset and a new email will need to be sent
@@ -580,6 +662,13 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
if cfg._enable_smtp {
match cfg.smtp_security.as_str() {
"off" | "starttls" | "force_tls" => (),
_ => err!(
"`SMTP_SECURITY` is invalid. It needs to be one of the following options: starttls, force_tls or off"
),
}
if cfg.smtp_host.is_some() == cfg.smtp_from.is_empty() {
err!("Both `SMTP_HOST` and `SMTP_FROM` need to be set for email support")
}
@@ -599,21 +688,39 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
if cfg._enable_email_2fa && cfg.email_token_size < 6 {
err!("`EMAIL_TOKEN_SIZE` has a minimum size of 6")
}
if cfg._enable_email_2fa && cfg.email_token_size > 19 {
err!("`EMAIL_TOKEN_SIZE` has a maximum size of 19")
}
}
// Check if the icon blacklist regex is valid
if let Some(ref r) = cfg.icon_blacklist_regex {
let validate_regex = Regex::new(r);
let validate_regex = regex::Regex::new(r);
match validate_regex {
Ok(_) => (),
Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
}
}
// Check if the icon service is valid
let icon_service = cfg.icon_service.as_str();
match icon_service {
"internal" | "bitwarden" | "duckduckgo" | "google" => (),
_ => {
if !icon_service.starts_with("http") {
err!(format!("Icon service URL `{}` must start with \"http\"", icon_service))
}
match icon_service.matches("{}").count() {
1 => (), // nominal
0 => err!(format!("Icon service URL `{}` has no placeholder \"{{}}\"", icon_service)),
_ => err!(format!("Icon service URL `{}` has more than one placeholder \"{{}}\"", icon_service)),
}
}
}
// Check if the icon redirect code is valid
match cfg.icon_redirect_code {
301 | 302 | 307 | 308 => (),
_ => err!("Only HTTP 301/302 and 307/308 redirects are supported"),
}
Ok(())
}
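For illustration, a minimal sketch (not part of the changeset) of what the icon service check above accepts: a custom value must be an http(s) URL template containing exactly one `{}` placeholder. The example URL is the one given in the setting's description.
// Sketch only; mirrors the checks in validate_config() above.
let icon_service = "https://icon.example.com/domain/{}";
assert!(icon_service.starts_with("http"));
assert_eq!(icon_service.matches("{}").count(), 1); // exactly one placeholder
// The predefined names "internal", "bitwarden", "duckduckgo" and "google" skip these checks entirely.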
@@ -640,6 +747,20 @@ fn extract_url_path(url: &str) -> String {
}
}
/// Convert the old SMTP_SSL and SMTP_EXPLICIT_TLS options
fn smtp_convert_deprecated_ssl_options(smtp_ssl: Option<bool>, smtp_explicit_tls: Option<bool>) -> String {
if smtp_explicit_tls.is_some() || smtp_ssl.is_some() {
println!("[DEPRECATED]: `SMTP_SSL` or `SMTP_EXPLICIT_TLS` is set. Please use `SMTP_SECURITY` instead.");
}
if smtp_explicit_tls.is_some() && smtp_explicit_tls.unwrap() {
return "force_tls".to_string();
} else if smtp_ssl.is_some() && !smtp_ssl.unwrap() {
return "off".to_string();
}
// Return the default `starttls` in all other cases
"starttls".to_string()
}
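A small sketch (not part of the changeset) of how the deprecated flags map onto the new `SMTP_SECURITY` values, assuming the conversion function above is in scope:
// Sketch only: the four relevant combinations of the deprecated options.
assert_eq!(smtp_convert_deprecated_ssl_options(None, None), "starttls");        // nothing set: default
assert_eq!(smtp_convert_deprecated_ssl_options(Some(true), None), "starttls");  // old SMTP_SSL=true
assert_eq!(smtp_convert_deprecated_ssl_options(Some(false), None), "off");      // old SMTP_SSL=false
assert_eq!(smtp_convert_deprecated_ssl_options(None, Some(true)), "force_tls"); // old SMTP_EXPLICIT_TLS=true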
impl Config {
pub fn load() -> Result<Self, Error> {
// Loading from env and file
@@ -656,6 +777,8 @@ impl Config {
Ok(Config {
inner: RwLock::new(Inner {
rocket_shutdown_handle: None,
ws_shutdown_handle: None,
templates: load_templates(&config.templates_folder),
config,
_env,
@@ -699,7 +822,7 @@ impl Config {
Ok(())
}
pub fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
let builder = {
let usr = &self.inner.read().unwrap()._usr;
let mut _overrides = Vec::new();
@@ -820,6 +943,28 @@ impl Config {
hb.render(name, data).map_err(Into::into)
}
}
pub fn set_rocket_shutdown_handle(&self, handle: rocket::Shutdown) {
self.inner.write().unwrap().rocket_shutdown_handle = Some(handle);
}
pub fn set_ws_shutdown_handle(&self, handle: ws::Sender) {
self.inner.write().unwrap().ws_shutdown_handle = Some(handle);
}
pub fn shutdown(&self) {
if let Ok(c) = self.inner.read() {
if let Some(handle) = c.ws_shutdown_handle.clone() {
handle.shutdown().ok();
}
// Wait a bit before stopping the web server
tokio::runtime::Handle::current()
.block_on(async move { tokio::time::sleep(tokio::time::Duration::from_secs(1)).await });
if let Some(handle) = c.rocket_shutdown_handle.clone() {
handle.notify();
}
}
}
}
use handlebars::{Context, Handlebars, Helper, HelperResult, Output, RenderContext, RenderError, Renderable};
@@ -853,13 +998,23 @@ where
reg!("email/change_email", ".html");
reg!("email/delete_account", ".html");
reg!("email/emergency_access_invite_accepted", ".html");
reg!("email/emergency_access_invite_confirmed", ".html");
reg!("email/emergency_access_recovery_approved", ".html");
reg!("email/emergency_access_recovery_initiated", ".html");
reg!("email/emergency_access_recovery_rejected", ".html");
reg!("email/emergency_access_recovery_reminder", ".html");
reg!("email/emergency_access_recovery_timed_out", ".html");
reg!("email/incomplete_2fa_login", ".html");
reg!("email/invite_accepted", ".html");
reg!("email/invite_confirmed", ".html");
reg!("email/new_device_logged_in", ".html");
reg!("email/pw_hint_none", ".html");
reg!("email/pw_hint_some", ".html");
reg!("email/send_2fa_removed_from_org", ".html");
reg!("email/send_single_org_removed_from_org", ".html");
reg!("email/send_org_invite", ".html");
reg!("email/send_emergency_access_invite", ".html");
reg!("email/twofactor_email", ".html");
reg!("email/verify_email", ".html");
reg!("email/welcome", ".html");
@@ -883,7 +1038,7 @@ where
fn case_helper<'reg, 'rc>(
h: &Helper<'reg, 'rc>,
r: &'reg Handlebars,
r: &'reg Handlebars<'_>,
ctx: &'rc Context,
rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,
@@ -900,7 +1055,7 @@ fn case_helper<'reg, 'rc>(
fn js_escape_helper<'reg, 'rc>(
h: &Helper<'reg, 'rc>,
_r: &'reg Handlebars,
_r: &'reg Handlebars<'_>,
_ctx: &'rc Context,
_rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,

View File

@@ -6,8 +6,6 @@ use std::num::NonZeroU32;
use data_encoding::HEXLOWER;
use ring::{digest, hmac, pbkdf2};
use crate::error::Error;
static DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
const OUTPUT_LEN: usize = digest::SHA256_OUTPUT_LEN;
@@ -51,6 +49,34 @@ pub fn get_random(mut array: Vec<u8>) -> Vec<u8> {
array
}
/// Generates a random string over a specified alphabet.
pub fn get_random_string(alphabet: &[u8], num_chars: usize) -> String {
// Ref: https://rust-lang-nursery.github.io/rust-cookbook/algorithms/randomness.html
use rand::Rng;
let mut rng = rand::thread_rng();
(0..num_chars)
.map(|_| {
let i = rng.gen_range(0..alphabet.len());
alphabet[i] as char
})
.collect()
}
/// Generates a random numeric string.
pub fn get_random_string_numeric(num_chars: usize) -> String {
const ALPHABET: &[u8] = b"0123456789";
get_random_string(ALPHABET, num_chars)
}
/// Generates a random alphanumeric string.
pub fn get_random_string_alphanum(num_chars: usize) -> String {
const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ\
abcdefghijklmnopqrstuvwxyz\
0123456789";
get_random_string(ALPHABET, num_chars)
}
pub fn generate_id(num_bytes: usize) -> String {
HEXLOWER.encode(&get_random(vec![0; num_bytes]))
}
@@ -65,23 +91,15 @@ pub fn generate_attachment_id() -> String {
generate_id(10) // 80 bits
}
pub fn generate_token(token_size: u32) -> Result<String, Error> {
// A u64 can represent all whole numbers up to 19 digits long.
if token_size > 19 {
err!("Token size is limited to 19 digits")
}
/// Generates a numeric token for email-based verifications.
pub fn generate_email_token(token_size: u8) -> String {
get_random_string_numeric(token_size as usize)
}
let low: u64 = 0;
let high: u64 = 10u64.pow(token_size);
// Generate a random number in the range [low, high), then format it as a
// token of fixed width, left-padding with 0 as needed.
use rand::{thread_rng, Rng};
let mut rng = thread_rng();
let number: u64 = rng.gen_range(low..high);
let token = format!("{:0size$}", number, size = token_size as usize);
Ok(token)
/// Generates a personal API key.
/// Upstream uses 30 chars, which is ~178 bits of entropy.
pub fn generate_api_key() -> String {
get_random_string_alphanum(30)
}
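A hedged usage sketch (not part of the changeset) of the new helpers, assuming they are in scope; the exact output is random, but the shape is fixed:
// Sketch only.
let token = generate_email_token(6);   // six random digits, e.g. "042913"
assert_eq!(token.len(), 6);
assert!(token.chars().all(|c| c.is_ascii_digit()));
let api_key = generate_api_key();      // 30 alphanumeric chars (~178 bits of entropy)
assert_eq!(api_key.len(), 30);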
//

View File

@@ -1,8 +1,20 @@
use diesel::r2d2::{ConnectionManager, Pool, PooledConnection};
use std::{sync::Arc, time::Duration};
use diesel::{
connection::SimpleConnection,
r2d2::{ConnectionManager, CustomizeConnection, Pool, PooledConnection},
};
use rocket::{
http::Status,
outcome::IntoOutcome,
request::{FromRequest, Outcome},
Request, State,
Request,
};
use tokio::{
sync::{Mutex, OwnedSemaphorePermit, Semaphore},
time::timeout,
};
use crate::{
@@ -22,6 +34,23 @@ pub mod __mysql_schema;
#[path = "schemas/postgresql/schema.rs"]
pub mod __postgresql_schema;
// These changes are based on Rocket 0.5-rc wrapper of Diesel: https://github.com/SergioBenitez/Rocket/blob/v0.5-rc/contrib/sync_db_pools
// A wrapper around spawn_blocking that propagates panics to the calling code.
pub async fn run_blocking<F, R>(job: F) -> R
where
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,
{
match tokio::task::spawn_blocking(job).await {
Ok(ret) => ret,
Err(e) => match e.try_into_panic() {
Ok(panic) => std::panic::resume_unwind(panic),
Err(_) => unreachable!("spawn_blocking tasks are never cancelled"),
},
}
}
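A minimal usage sketch (not part of the changeset) of `run_blocking`, assuming a Tokio runtime: the closure runs on the blocking thread pool and its result, or any panic, is handed back to the async caller.
// Sketch only.
async fn example() -> usize {
    run_blocking(move || {
        // Any blocking or CPU-bound work goes here.
        (0..1_000u64).filter(|n| n % 7 == 0).count()
    })
    .await
}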
// This is used to generate the main DbConn and DbPool enums, which contain one variant for each database supported
macro_rules! generate_connections {
( $( $name:ident: $ty:ty ),+ ) => {
@@ -29,15 +58,74 @@ macro_rules! generate_connections {
#[derive(Eq, PartialEq)]
pub enum DbConnType { $( $name, )+ }
pub struct DbConn {
conn: Arc<Mutex<Option<DbConnInner>>>,
permit: Option<OwnedSemaphorePermit>,
}
#[allow(non_camel_case_types)]
pub enum DbConn { $( #[cfg($name)] $name(PooledConnection<ConnectionManager< $ty >>), )+ }
pub enum DbConnInner { $( #[cfg($name)] $name(PooledConnection<ConnectionManager< $ty >>), )+ }
#[derive(Debug)]
pub struct DbConnOptions {
pub init_stmts: String,
}
$( // Based on <https://stackoverflow.com/a/57717533>.
#[cfg($name)]
impl CustomizeConnection<$ty, diesel::r2d2::Error> for DbConnOptions {
fn on_acquire(&self, conn: &mut $ty) -> Result<(), diesel::r2d2::Error> {
(|| {
if !self.init_stmts.is_empty() {
conn.batch_execute(&self.init_stmts)?;
}
Ok(())
})().map_err(diesel::r2d2::Error::QueryError)
}
})+
#[derive(Clone)]
pub struct DbPool {
// This is an 'Option' so that we can drop the pool in a 'spawn_blocking'.
pool: Option<DbPoolInner>,
semaphore: Arc<Semaphore>
}
#[allow(non_camel_case_types)]
#[derive(Clone)]
pub enum DbPool { $( #[cfg($name)] $name(Pool<ConnectionManager< $ty >>), )+ }
pub enum DbPoolInner { $( #[cfg($name)] $name(Pool<ConnectionManager< $ty >>), )+ }
impl Drop for DbConn {
fn drop(&mut self) {
let conn = self.conn.clone();
let permit = self.permit.take();
// Since connection can't be on the stack in an async fn during an
// await, we have to spawn a new blocking-safe thread...
tokio::task::spawn_blocking(move || {
// And then re-enter the runtime to wait on the async mutex, but in a blocking fashion.
let mut conn = tokio::runtime::Handle::current().block_on(conn.lock_owned());
if let Some(conn) = conn.take() {
drop(conn);
}
// Drop permit after the connection is dropped
drop(permit);
});
}
}
impl Drop for DbPool {
fn drop(&mut self) {
let pool = self.pool.take();
tokio::task::spawn_blocking(move || drop(pool));
}
}
impl DbPool {
// For the given database URL, guess it's type, run migrations create pool and return it
// For the given database URL, guess its type, run migrations, create pool, and return it
#[allow(clippy::diverging_sub_expression)]
pub fn from_config() -> Result<Self, Error> {
let url = CONFIG.database_url();
let conn_type = DbConnType::from_url(&url)?;
@@ -50,9 +138,16 @@ macro_rules! generate_connections {
let manager = ConnectionManager::new(&url);
let pool = Pool::builder()
.max_size(CONFIG.database_max_conns())
.connection_timeout(Duration::from_secs(CONFIG.database_timeout()))
.connection_customizer(Box::new(DbConnOptions{
init_stmts: conn_type.get_init_stmts()
}))
.build(manager)
.map_res("Failed to create pool")?;
return Ok(Self::$name(pool));
return Ok(DbPool {
pool: Some(DbPoolInner::$name(pool)),
semaphore: Arc::new(Semaphore::new(CONFIG.database_max_conns() as usize)),
});
}
#[cfg(not($name))]
#[allow(unreachable_code)]
@@ -61,10 +156,26 @@ macro_rules! generate_connections {
)+ }
}
// Get a connection from the pool
pub fn get(&self) -> Result<DbConn, Error> {
match self { $(
pub async fn get(&self) -> Result<DbConn, Error> {
let duration = Duration::from_secs(CONFIG.database_timeout());
let permit = match timeout(duration, self.semaphore.clone().acquire_owned()).await {
Ok(p) => p.expect("Semaphore should be open"),
Err(_) => {
err!("Timeout waiting for database connection");
}
};
match self.pool.as_ref().expect("DbPool.pool should always be Some()") { $(
#[cfg($name)]
Self::$name(p) => Ok(DbConn::$name(p.get().map_res("Error retrieving connection from pool")?)),
DbPoolInner::$name(p) => {
let pool = p.clone();
let c = run_blocking(move || pool.get_timeout(duration)).await.map_res("Error retrieving connection from pool")?;
return Ok(DbConn {
conn: Arc::new(Mutex::new(Some(DbConnInner::$name(c)))),
permit: Some(permit)
});
},
)+ }
}
}
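The connection-limiting pattern used by `DbPool::get()` above, reduced to a standalone sketch (not part of the changeset): a Tokio `Semaphore` caps the number of simultaneously handed-out connections, and `timeout` bounds how long a caller may wait for a permit.
// Sketch only; types and error strings are illustrative.
use std::{sync::Arc, time::Duration};
use tokio::{sync::{OwnedSemaphorePermit, Semaphore}, time::timeout};

async fn acquire_slot(sem: Arc<Semaphore>, wait: Duration) -> Result<OwnedSemaphorePermit, &'static str> {
    match timeout(wait, sem.acquire_owned()).await {
        Ok(Ok(permit)) => Ok(permit),             // a connection slot is available
        Ok(Err(_)) => Err("semaphore closed"),
        Err(_) => Err("timeout waiting for database connection"),
    }
}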
@@ -104,6 +215,23 @@ impl DbConnType {
err!("`DATABASE_URL` looks like a SQLite URL, but 'sqlite' feature is not enabled")
}
}
pub fn get_init_stmts(&self) -> String {
let init_stmts = CONFIG.database_conn_init();
if !init_stmts.is_empty() {
init_stmts
} else {
self.default_init_stmts()
}
}
pub fn default_init_stmts(&self) -> String {
match self {
Self::sqlite => "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;".to_string(),
Self::mysql => "".to_string(),
Self::postgresql => "".to_string(),
}
}
}
#[macro_export]
@@ -113,42 +241,52 @@ macro_rules! db_run {
db_run! { $conn: sqlite, mysql, postgresql $body }
};
// Different code for each db
( $conn:ident: $( $($db:ident),+ $body:block )+ ) => {{
#[allow(unused)] use diesel::prelude::*;
match $conn {
$($(
#[cfg($db)]
crate::db::DbConn::$db(ref $conn) => {
paste::paste! {
#[allow(unused)] use crate::db::[<__ $db _schema>]::{self as schema, *};
#[allow(unused)] use [<__ $db _model>]::*;
#[allow(unused)] use crate::db::FromDb;
}
$body
},
)+)+
}}
};
// Same for all dbs
( @raw $conn:ident: $body:block ) => {
db_run! { @raw $conn: sqlite, mysql, postgresql $body }
};
// Different code for each db
( @raw $conn:ident: $( $($db:ident),+ $body:block )+ ) => {
( $conn:ident: $( $($db:ident),+ $body:block )+ ) => {{
#[allow(unused)] use diesel::prelude::*;
#[allow(unused_variables)]
match $conn {
$($(
#[allow(unused)] use $crate::db::FromDb;
let conn = $conn.conn.clone();
let mut conn = conn.lock_owned().await;
match conn.as_mut().expect("internal invariant broken: self.connection is Some") {
$($(
#[cfg($db)]
crate::db::DbConn::$db(ref $conn) => {
$body
$crate::db::DbConnInner::$db($conn) => {
paste::paste! {
#[allow(unused)] use $crate::db::[<__ $db _schema>]::{self as schema, *};
#[allow(unused)] use [<__ $db _model>]::*;
}
tokio::task::block_in_place(move || { $body }) // Run blocking can't be used due to the 'static limitation, use block_in_place instead
},
)+)+
}
};
}};
( @raw $conn:ident: $( $($db:ident),+ $body:block )+ ) => {{
#[allow(unused)] use diesel::prelude::*;
#[allow(unused)] use $crate::db::FromDb;
let conn = $conn.conn.clone();
let mut conn = conn.lock_owned().await;
match conn.as_mut().expect("internal invariant broken: self.connection is Some") {
$($(
#[cfg($db)]
$crate::db::DbConnInner::$db($conn) => {
paste::paste! {
#[allow(unused)] use $crate::db::[<__ $db _schema>]::{self as schema, *};
// @ RAW: #[allow(unused)] use [<__ $db _model>]::*;
}
tokio::task::block_in_place(move || { $body }) // Run blocking can't be used due to the 'static limitation, use block_in_place instead
},
)+)+
}
}};
}
pub trait FromDb {
@@ -201,7 +339,7 @@ macro_rules! db_object {
paste::paste! {
#[allow(unused)] use super::*;
#[allow(unused)] use diesel::prelude::*;
#[allow(unused)] use crate::db::[<__ $db _schema>]::*;
#[allow(unused)] use $crate::db::[<__ $db _schema>]::*;
$( #[$attr] )*
pub struct [<$name Db>] { $(
@@ -213,7 +351,7 @@ macro_rules! db_object {
#[inline(always)] pub fn to_db(x: &super::$name) -> Self { Self { $( $field: x.$field.clone(), )+ } }
}
impl crate::db::FromDb for [<$name Db>] {
impl $crate::db::FromDb for [<$name Db>] {
type Output = super::$name;
#[allow(clippy::wrong_self_convention)]
#[inline(always)] fn from_db(self) -> Self::Output { super::$name { $( $field: self.$field, )+ } }
@@ -227,9 +365,10 @@ pub mod models;
/// Creates a back-up of the sqlite database
/// MySQL/MariaDB and PostgreSQL are not supported.
pub fn backup_database(conn: &DbConn) -> Result<(), Error> {
pub async fn backup_database(conn: &DbConn) -> Result<(), Error> {
db_run! {@raw conn:
postgresql, mysql {
let _ = conn;
err!("PostgreSQL and MySQL/MariaDB do not support this backup feature");
}
sqlite {
@@ -244,7 +383,7 @@ pub fn backup_database(conn: &DbConn) -> Result<(), Error> {
}
/// Get the SQL Server version
pub fn get_sql_server_version(conn: &DbConn) -> String {
pub async fn get_sql_server_version(conn: &DbConn) -> String {
db_run! {@raw conn:
postgresql, mysql {
no_arg_sql_function!(version, diesel::sql_types::Text);
@@ -260,15 +399,14 @@ pub fn get_sql_server_version(conn: &DbConn) -> String {
/// Attempts to retrieve a single connection from the managed database pool. If
/// no pool is currently managed, fails with an `InternalServerError` status. If
/// no connections are available, fails with a `ServiceUnavailable` status.
impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
#[rocket::async_trait]
impl<'r> FromRequest<'r> for DbConn {
type Error = ();
fn from_request(request: &'a Request<'r>) -> Outcome<DbConn, ()> {
// https://github.com/SergioBenitez/Rocket/commit/e3c1a4ad3ab9b840482ec6de4200d30df43e357c
let pool = try_outcome!(request.guard::<State<DbPool>>());
match pool.get() {
Ok(conn) => Outcome::Success(conn),
Err(_) => Outcome::Failure((Status::ServiceUnavailable, ())),
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
match request.rocket().state::<DbPool>() {
Some(p) => p.get().await.map_err(|_| ()).into_outcome(Status::ServiceUnavailable),
None => Outcome::Failure((Status::InternalServerError, ())),
}
}
}
@@ -278,7 +416,6 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
// https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
#[cfg(sqlite)]
mod sqlite_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/sqlite");
pub fn run_migrations() -> Result<(), super::Error> {
@@ -315,7 +452,6 @@ mod sqlite_migrations {
#[cfg(mysql)]
mod mysql_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/mysql");
pub fn run_migrations() -> Result<(), super::Error> {
@@ -336,7 +472,6 @@ mod mysql_migrations {
#[cfg(postgresql)]
mod postgresql_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/postgresql");
pub fn run_migrations() -> Result<(), super::Error> {

View File

@@ -2,14 +2,12 @@ use std::io::ErrorKind;
use serde_json::Value;
use super::Cipher;
use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "attachments"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(super::Cipher, foreign_key = "cipher_uuid")]
#[primary_key(id)]
pub struct Attachment {
pub id: String,
@@ -60,7 +58,7 @@ use crate::error::MapResult;
/// Database methods
impl Attachment {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(attachments::table)
@@ -92,7 +90,7 @@ impl Attachment {
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
@@ -116,14 +114,14 @@ impl Attachment {
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
for attachment in Attachment::find_by_cipher(cipher_uuid, conn) {
attachment.delete(conn)?;
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
for attachment in Attachment::find_by_cipher(cipher_uuid, conn).await {
attachment.delete(conn).await?;
}
Ok(())
}
pub fn find_by_id(id: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_id(id: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::id.eq(id.to_lowercase()))
@@ -133,7 +131,7 @@ impl Attachment {
}}
}
pub fn find_by_cipher(cipher_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_cipher(cipher_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq(cipher_uuid))
@@ -143,59 +141,60 @@ impl Attachment {
}}
}
pub fn find_by_ciphers(cipher_uuids: Vec<String>, conn: &DbConn) -> Vec<Self> {
pub async fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub async fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
pub async fn size_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
pub async fn find_all_by_ciphers(cipher_uuids: &Vec<String>, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq_any(cipher_uuids))
.select(attachments::all_columns)
.load::<AttachmentDb>(conn)
.expect("Error loading attachments")
.from_db()
}}
}
pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
pub fn size_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
}

View File

@@ -1,19 +1,17 @@
use crate::CONFIG;
use chrono::{Duration, NaiveDateTime, Utc};
use serde_json::Value;
use crate::CONFIG;
use super::{Attachment, CollectionCipher, Favorite, FolderCipher, User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{
Attachment, CollectionCipher, Favorite, FolderCipher, Organization, User, UserOrgStatus, UserOrgType,
UserOrganization,
};
use crate::api::core::CipherSyncData;
use std::borrow::Cow;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "ciphers"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
pub struct Cipher {
pub uuid: String,
@@ -82,22 +80,32 @@ use crate::error::MapResult;
/// Database methods
impl Cipher {
pub fn to_json(&self, host: &str, user_uuid: &str, conn: &DbConn) -> Value {
pub async fn to_json(
&self,
host: &str,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
) -> Value {
use crate::util::format_date;
let attachments = Attachment::find_by_cipher(&self.uuid, conn);
// When there are no attachments use null instead of an empty array
let attachments_json = if attachments.is_empty() {
Value::Null
let mut attachments_json: Value = Value::Null;
if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(attachments) = cipher_sync_data.cipher_attachments.get(&self.uuid) {
attachments_json = attachments.iter().map(|c| c.to_json(host)).collect();
}
} else {
attachments.iter().map(|c| c.to_json(host)).collect()
};
let attachments = Attachment::find_by_cipher(&self.uuid, conn).await;
if !attachments.is_empty() {
attachments_json = attachments.iter().map(|c| c.to_json(host)).collect()
}
}
let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let password_history_json =
self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let (read_only, hide_passwords) = match self.get_access_restrictions(user_uuid, conn) {
let (read_only, hide_passwords) = match self.get_access_restrictions(user_uuid, cipher_sync_data, conn).await {
Some((ro, hp)) => (ro, hp),
None => {
error!("Cipher ownership assertion failure");
@@ -109,7 +117,7 @@ impl Cipher {
// If not passing an empty object, mobile clients will crash.
let mut type_data_json: Value = serde_json::from_str(&self.data).unwrap_or_else(|_| json!({}));
// NOTE: This was marked as *Backwards Compatibilty Code*, but as of January 2021 this is still being used by upstream
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// Set the first element of the Uris array as Uri; this is needed by several (mobile) clients.
if self.atype == 1 {
if type_data_json["Uris"].is_array() {
@@ -124,13 +132,23 @@ impl Cipher {
// Clone the type_data and add some default value.
let mut data_json = type_data_json.clone();
// NOTE: This was marked as *Backwards Compatibilty Code*, but as of January 2021 this is still being used by upstream
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// data_json should always contain the following keys with every atype
data_json["Fields"] = json!(fields_json);
data_json["Name"] = json!(self.name);
data_json["Notes"] = json!(self.notes);
data_json["PasswordHistory"] = json!(password_history_json);
let collection_ids = if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(cipher_collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
Cow::from(cipher_collections)
} else {
Cow::from(Vec::with_capacity(0))
}
} else {
Cow::from(self.get_collections(user_uuid, conn).await)
};
// There are three types of cipher response models in upstream
// Bitwarden: "cipherMini", "cipher", and "cipherDetails" (in order
// of increasing level of detail). vaultwarden currently only
@@ -144,8 +162,8 @@ impl Cipher {
"Type": self.atype,
"RevisionDate": format_date(&self.updated_at),
"DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"FolderId": self.get_folder_uuid(user_uuid, conn),
"Favorite": self.is_favorite(user_uuid, conn),
"FolderId": if let Some(cipher_sync_data) = cipher_sync_data { cipher_sync_data.cipher_folders.get(&self.uuid).map(|c| c.to_string() ) } else { self.get_folder_uuid(user_uuid, conn).await },
"Favorite": if let Some(cipher_sync_data) = cipher_sync_data { cipher_sync_data.cipher_favorites.contains(&self.uuid) } else { self.is_favorite(user_uuid, conn).await },
"Reprompt": self.reprompt.unwrap_or(RepromptType::None as i32),
"OrganizationId": self.organization_uuid,
"Attachments": attachments_json,
@@ -154,7 +172,7 @@ impl Cipher {
"OrganizationUseTotp": true,
// This field is specific to the cipherDetails type.
"CollectionIds": self.get_collections(user_uuid, conn),
"CollectionIds": collection_ids,
"Name": self.name,
"Notes": self.notes,
@@ -189,28 +207,28 @@ impl Cipher {
json_object
}
pub fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
pub async fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
let mut user_uuids = Vec::new();
match self.user_uuid {
Some(ref user_uuid) => {
User::update_uuid_revision(user_uuid, conn);
User::update_uuid_revision(user_uuid, conn).await;
user_uuids.push(user_uuid.clone())
}
None => {
// Belongs to Organization, need to update affected users
if let Some(ref org_uuid) = self.organization_uuid {
UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).iter().for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn);
for user_org in UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
user_uuids.push(user_org.user_uuid.clone())
});
}
}
}
};
user_uuids
}
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
@@ -244,13 +262,13 @@ impl Cipher {
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
FolderCipher::delete_all_by_cipher(&self.uuid, conn)?;
CollectionCipher::delete_all_by_cipher(&self.uuid, conn)?;
Attachment::delete_all_by_cipher(&self.uuid, conn)?;
Favorite::delete_all_by_cipher(&self.uuid, conn)?;
FolderCipher::delete_all_by_cipher(&self.uuid, conn).await?;
CollectionCipher::delete_all_by_cipher(&self.uuid, conn).await?;
Attachment::delete_all_by_cipher(&self.uuid, conn).await?;
Favorite::delete_all_by_cipher(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(ciphers::table.filter(ciphers::uuid.eq(&self.uuid)))
@@ -259,54 +277,55 @@ impl Cipher {
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
for cipher in Self::find_by_org(org_uuid, conn) {
cipher.delete(conn)?;
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
// TODO: Optimize this by executing a DELETE directly on the database, instead of first fetching.
for cipher in Self::find_by_org(org_uuid, conn).await {
cipher.delete(conn).await?;
}
Ok(())
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for cipher in Self::find_owned_by_user(user_uuid, conn) {
cipher.delete(conn)?;
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for cipher in Self::find_owned_by_user(user_uuid, conn).await {
cipher.delete(conn).await?;
}
Ok(())
}
/// Purge all ciphers that are old enough to be auto-deleted.
pub fn purge_trash(conn: &DbConn) {
pub async fn purge_trash(conn: &DbConn) {
if let Some(auto_delete_days) = CONFIG.trash_auto_delete_days() {
let now = Utc::now().naive_utc();
let dt = now - Duration::days(auto_delete_days);
for cipher in Self::find_deleted_before(&dt, conn) {
cipher.delete(conn).ok();
for cipher in Self::find_deleted_before(&dt, conn).await {
cipher.delete(conn).await.ok();
}
}
}
pub fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn);
pub async fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
match (self.get_folder_uuid(user_uuid, conn), folder_uuid) {
match (self.get_folder_uuid(user_uuid, conn).await, folder_uuid) {
// No changes
(None, None) => Ok(()),
(Some(ref old), Some(ref new)) if old == new => Ok(()),
// Add to folder
(None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(conn),
(None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(conn).await,
// Remove from folder
(Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn) {
Some(old) => old.delete(conn),
(Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
Some(old) => old.delete(conn).await,
None => err!("Couldn't move from previous folder"),
},
// Move to another folder
(Some(old), Some(new)) => {
if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn) {
old.delete(conn)?;
if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
old.delete(conn).await?;
}
FolderCipher::new(&new, &self.uuid).save(conn)
FolderCipher::new(&new, &self.uuid).save(conn).await
}
}
}
@@ -317,13 +336,21 @@ impl Cipher {
}
/// Returns whether this cipher is owned by an org in which the user has full access.
pub fn is_in_full_access_org(&self, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_in_full_access_org(
&self,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
) -> bool {
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(user_org) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(cached_user_org) = cipher_sync_data.user_organizations.get(org_uuid) {
return cached_user_org.has_full_access();
}
} else if let Some(user_org) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
return user_org.has_full_access();
}
}
false
}
@@ -332,75 +359,99 @@ impl Cipher {
/// not in any collection the user has access to. Otherwise, the user has
/// access to this cipher, and Some(read_only, hide_passwords) represents
/// the access restrictions.
pub fn get_access_restrictions(&self, user_uuid: &str, conn: &DbConn) -> Option<(bool, bool)> {
pub async fn get_access_restrictions(
&self,
user_uuid: &str,
cipher_sync_data: Option<&CipherSyncData>,
conn: &DbConn,
) -> Option<(bool, bool)> {
// Check whether this cipher is directly owned by the user, or is in
// a collection that the user has full access to. If so, there are no
// access restrictions.
if self.is_owned_by_user(user_uuid) || self.is_in_full_access_org(user_uuid, conn) {
if self.is_owned_by_user(user_uuid) || self.is_in_full_access_org(user_uuid, cipher_sync_data, conn).await {
return Some((false, false));
}
let rows = if let Some(cipher_sync_data) = cipher_sync_data {
let mut rows: Vec<(bool, bool)> = Vec::new();
if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
for collection in collections {
if let Some(uc) = cipher_sync_data.user_collections.get(collection) {
rows.push((uc.read_only, uc.hide_passwords));
}
}
}
rows
} else {
self.get_collections_access_flags(user_uuid, conn).await
};
if rows.is_empty() {
// This cipher isn't in any collections accessible to the user.
return None;
}
// A cipher can be in multiple collections with inconsistent access flags.
// For example, a cipher could be in one collection where the user has
// read-only access, but also in another collection where the user has
// read/write access. For a flag to be in effect for a cipher, upstream
// requires all collections the cipher is in to have that flag set.
// Therefore, we do a boolean AND of all values in each of the `read_only`
// and `hide_passwords` columns. This could ideally be done as part of the
// query, but Diesel doesn't support a min() or bool_and() function on
// booleans and this behavior isn't portable anyway.
let mut read_only = true;
let mut hide_passwords = true;
for (ro, hp) in rows.iter() {
read_only &= ro;
hide_passwords &= hp;
}
Some((read_only, hide_passwords))
}
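An illustration (not part of the changeset) of the flag-combining rule described above, under the assumption that `rows` holds one `(read_only, hide_passwords)` pair per collection the cipher is in:
// Sketch only: a flag is in effect only if every collection sets it.
let rows = vec![(true, false), (true, true)];
let read_only = rows.iter().all(|&(ro, _)| ro);       // true: read-only in every collection
let hide_passwords = rows.iter().all(|&(_, hp)| hp);  // false: one collection shows passwords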
pub async fn get_collections_access_flags(&self, user_uuid: &str, conn: &DbConn) -> Vec<(bool, bool)> {
db_run! {conn: {
// Check whether this cipher is in any collections accessible to the
// user. If so, retrieve the access flags for each collection.
let query = ciphers::table
ciphers::table
.filter(ciphers::uuid.eq(&self.uuid))
.inner_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
.inner_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid))))
.select((users_collections::read_only, users_collections::hide_passwords));
// There's an edge case where a cipher can be in multiple collections
// with inconsistent access flags. For example, a cipher could be in
// one collection where the user has read-only access, but also in
// another collection where the user has read/write access. To handle
// this, we do a boolean OR of all values in each of the `read_only`
// and `hide_passwords` columns. This could ideally be done as part
// of the query, but Diesel doesn't support a max() or bool_or()
// function on booleans and this behavior isn't portable anyway.
if let Ok(vec) = query.load::<(bool, bool)>(conn) {
let mut read_only = false;
let mut hide_passwords = false;
for (ro, hp) in vec.iter() {
read_only |= ro;
hide_passwords |= hp;
}
Some((read_only, hide_passwords))
} else {
// This cipher isn't in any collections accessible to the user.
None
}
.select((users_collections::read_only, users_collections::hide_passwords))
.load::<(bool, bool)>(conn)
.expect("Error getting access restrictions")
}}
}
pub fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match self.get_access_restrictions(user_uuid, conn) {
pub async fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match self.get_access_restrictions(user_uuid, None, conn).await {
Some((read_only, _hide_passwords)) => !read_only,
None => false,
}
}
pub fn is_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
self.get_access_restrictions(user_uuid, conn).is_some()
pub async fn is_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
self.get_access_restrictions(user_uuid, None, conn).await.is_some()
}
// Returns whether this cipher is a favorite of the specified user.
pub fn is_favorite(&self, user_uuid: &str, conn: &DbConn) -> bool {
Favorite::is_favorite(&self.uuid, user_uuid, conn)
pub async fn is_favorite(&self, user_uuid: &str, conn: &DbConn) -> bool {
Favorite::is_favorite(&self.uuid, user_uuid, conn).await
}
// Sets whether this cipher is a favorite of the specified user.
pub fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
match favorite {
None => Ok(()), // No change requested.
Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn),
Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn).await,
}
}
pub fn get_folder_uuid(&self, user_uuid: &str, conn: &DbConn) -> Option<String> {
pub async fn get_folder_uuid(&self, user_uuid: &str, conn: &DbConn) -> Option<String> {
db_run! {conn: {
folders_ciphers::table
.inner_join(folders::table)
@@ -412,7 +463,7 @@ impl Cipher {
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(uuid))
@@ -434,7 +485,7 @@ impl Cipher {
// true, then the non-interesting ciphers will not be returned. As a
// result, those ciphers will not appear in "My Vault" for the org
// owner/admin, but they can still be accessed via the org vault view.
pub fn find_by_user(user_uuid: &str, visible_only: bool, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
@@ -469,12 +520,12 @@ impl Cipher {
}
// Find all ciphers visible to the specified user.
pub fn find_by_user_visible(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user(user_uuid, true, conn)
pub async fn find_by_user_visible(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user(user_uuid, true, conn).await
}
// Find all ciphers directly owned by the specified user.
pub fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(
@@ -485,7 +536,7 @@ impl Cipher {
}}
}
pub fn count_owned_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_owned_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
@@ -496,7 +547,7 @@ impl Cipher {
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -504,7 +555,7 @@ impl Cipher {
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -515,7 +566,7 @@ impl Cipher {
}}
}
pub fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -525,7 +576,7 @@ impl Cipher {
}
/// Find all ciphers that were deleted before the specified datetime.
pub fn find_deleted_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
pub async fn find_deleted_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::deleted_at.lt(dt))
@@ -533,7 +584,7 @@ impl Cipher {
}}
}
pub fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> {
pub async fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
@@ -559,4 +610,32 @@ impl Cipher {
.load::<String>(conn).unwrap_or_default()
}}
}
/// Returns a Vec of (cipher_uuid, collection_uuid) pairs.
/// This is used during a full sync so that only one query is needed for all accessible collections.
pub async fn get_collections_with_cipher_by_user(user_id: &str, conn: &DbConn) -> Vec<(String, String)> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id)
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id)
)
))
.filter(users_collections::user_uuid.eq(user_id).or( // User has access to collection
users_organizations::access_all.eq(true).or( // User has access all
users_organizations::atype.le(UserOrgType::Admin as i32) // User is admin or owner
)
))
.select(ciphers_collections::all_columns)
.load::<(String, String)>(conn).unwrap_or_default()
}}
}
}
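A hedged sketch (not part of the changeset) of how the `(cipher_uuid, collection_uuid)` pairs returned by `get_collections_with_cipher_by_user` could be grouped into the per-cipher lookup consulted elsewhere in this diff (e.g. `cipher_sync_data.cipher_collections`); the grouping shown here is an assumption, not the actual sync code.
// Sketch only.
use std::collections::HashMap;

fn group_by_cipher(pairs: Vec<(String, String)>) -> HashMap<String, Vec<String>> {
    let mut map: HashMap<String, Vec<String>> = HashMap::new();
    for (cipher_uuid, collection_uuid) in pairs {
        map.entry(cipher_uuid).or_default().push(collection_uuid);
    }
    map
}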

View File

@@ -1,11 +1,10 @@
use serde_json::Value;
use super::{Cipher, Organization, User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{User, UserOrgStatus, UserOrgType, UserOrganization};
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "collections"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct Collection {
pub uuid: String,
@@ -13,10 +12,8 @@ db_object! {
pub name: String,
}
#[derive(Identifiable, Queryable, Insertable, Associations)]
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "users_collections"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(user_uuid, collection_uuid)]
pub struct CollectionUser {
pub user_uuid: String,
@@ -25,10 +22,8 @@ db_object! {
pub hide_passwords: bool,
}
#[derive(Identifiable, Queryable, Insertable, Associations)]
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "ciphers_collections"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(cipher_uuid, collection_uuid)]
pub struct CollectionCipher {
pub cipher_uuid: String,
@@ -57,11 +52,32 @@ impl Collection {
})
}
pub fn to_json_details(&self, user_uuid: &str, conn: &DbConn) -> Value {
pub async fn to_json_details(
&self,
user_uuid: &str,
cipher_sync_data: Option<&crate::api::core::CipherSyncData>,
conn: &DbConn,
) -> Value {
let (read_only, hide_passwords) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.user_organizations.get(&self.org_uuid) {
Some(uo) if uo.has_full_access() => (false, false),
Some(_) => {
if let Some(uc) = cipher_sync_data.user_collections.get(&self.uuid) {
(uc.read_only, uc.hide_passwords)
} else {
(false, false)
}
}
_ => (true, true),
}
} else {
(!self.is_writable_by_user(user_uuid, conn).await, self.hide_passwords_for_user(user_uuid, conn).await)
};
let mut json_object = self.to_json();
json_object["Object"] = json!("collectionDetails");
json_object["ReadOnly"] = json!(!self.is_writable_by_user(user_uuid, conn));
json_object["HidePasswords"] = json!(self.hide_passwords_for_user(user_uuid, conn));
json_object["ReadOnly"] = json!(read_only);
json_object["HidePasswords"] = json!(hide_passwords);
json_object
}
}
@@ -73,8 +89,8 @@ use crate::error::MapResult;
/// Database methods
impl Collection {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
db_run! { conn:
sqlite, mysql {
@@ -107,10 +123,10 @@ impl Collection {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
CollectionCipher::delete_all_by_collection(&self.uuid, conn)?;
CollectionUser::delete_all_by_collection(&self.uuid, conn)?;
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
CollectionCipher::delete_all_by_collection(&self.uuid, conn).await?;
CollectionUser::delete_all_by_collection(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
@@ -119,20 +135,20 @@ impl Collection {
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
for collection in Self::find_by_organization(org_uuid, conn) {
collection.delete(conn)?;
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
for collection in Self::find_by_organization(org_uuid, conn).await {
collection.delete(conn).await?;
}
Ok(())
}
pub fn update_users_revision(&self, conn: &DbConn) {
UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).iter().for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn);
});
pub async fn update_users_revision(&self, conn: &DbConn) {
for user_org in UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -142,7 +158,7 @@ impl Collection {
}}
}
pub fn find_by_user_uuid(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user_uuid(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
@@ -167,11 +183,11 @@ impl Collection {
}}
}
pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid, conn).into_iter().filter(|c| c.org_uuid == org_uuid).collect()
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid, conn).await.into_iter().filter(|c| c.org_uuid == org_uuid).collect()
}
pub fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -181,7 +197,7 @@ impl Collection {
}}
}
pub fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -193,7 +209,7 @@ impl Collection {
}}
}
pub fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
@@ -219,8 +235,8 @@ impl Collection {
}}
}
pub fn is_writable_by_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn) {
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
None => false, // Not in Org
Some(user_org) => {
if user_org.has_full_access() {
@@ -241,8 +257,8 @@ impl Collection {
}
}
pub fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn) {
pub async fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
None => true, // Not in Org
Some(user_org) => {
if user_org.has_full_access() {
@@ -266,7 +282,7 @@ impl Collection {
/// Database methods
impl CollectionUser {
pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -279,14 +295,14 @@ impl CollectionUser {
}}
}
pub fn save(
pub async fn save(
user_uuid: &str,
collection_uuid: &str,
read_only: bool,
hide_passwords: bool,
conn: &DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn);
User::update_uuid_revision(user_uuid, conn).await;
db_run! { conn:
sqlite, mysql {
@@ -337,8 +353,8 @@ impl CollectionUser {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
db_run! { conn: {
diesel::delete(
@@ -351,7 +367,7 @@ impl CollectionUser {
}}
}
pub fn find_by_collection(collection_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_collection(collection_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -362,7 +378,7 @@ impl CollectionUser {
}}
}
pub fn find_by_collection_and_user(collection_uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_collection_and_user(collection_uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -374,10 +390,21 @@ impl CollectionUser {
}}
}
pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
CollectionUser::find_by_collection(collection_uuid, conn).iter().for_each(|collection| {
User::update_uuid_revision(&collection.user_uuid, conn);
});
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
.select(users_collections::all_columns)
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
for collection in CollectionUser::find_by_collection(collection_uuid, conn).await.iter() {
User::update_uuid_revision(&collection.user_uuid, conn).await;
}
db_run! { conn: {
diesel::delete(users_collections::table.filter(users_collections::collection_uuid.eq(collection_uuid)))
@@ -386,8 +413,8 @@ impl CollectionUser {
}}
}
pub fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn);
pub async fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn).await;
db_run! { conn: {
for user in collectionusers {
@@ -405,8 +432,8 @@ impl CollectionUser {
/// Database methods
impl CollectionCipher {
pub fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn);
pub async fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn:
sqlite, mysql {
@@ -435,8 +462,8 @@ impl CollectionCipher {
}
}
pub fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn);
pub async fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn: {
diesel::delete(
@@ -449,7 +476,7 @@ impl CollectionCipher {
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -457,7 +484,7 @@ impl CollectionCipher {
}}
}
pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
@@ -465,9 +492,9 @@ impl CollectionCipher {
}}
}
pub fn update_users_revision(collection_uuid: &str, conn: &DbConn) {
if let Some(collection) = Collection::find_by_uuid(collection_uuid, conn) {
collection.update_users_revision(conn);
pub async fn update_users_revision(collection_uuid: &str, conn: &DbConn) {
if let Some(collection) = Collection::find_by_uuid(collection_uuid, conn).await {
collection.update_users_revision(conn).await;
}
}
}

View File

@@ -1,14 +1,12 @@
use chrono::{NaiveDateTime, Utc};
use super::User;
use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "devices"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
#[primary_key(uuid, user_uuid)]
pub struct Device {
pub uuid: String,
pub created_at: NaiveDateTime,
@@ -17,8 +15,7 @@ db_object! {
pub user_uuid: String,
pub name: String,
// https://github.com/bitwarden/core/tree/master/src/Core/Enums
pub atype: i32,
pub atype: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
pub push_token: Option<String>,
pub refresh_token: String,
@@ -61,7 +58,12 @@ impl Device {
self.twofactor_remember = None;
}
pub fn refresh_tokens(&mut self, user: &super::User, orgs: Vec<super::UserOrganization>) -> (String, i64) {
pub fn refresh_tokens(
&mut self,
user: &super::User,
orgs: Vec<super::UserOrganization>,
scope: Vec<String>,
) -> (String, i64) {
// If there is no refresh token, we create one
if self.refresh_token.is_empty() {
use crate::crypto;
@@ -99,7 +101,7 @@ impl Device {
sstamp: user.security_stamp.to_string(),
device: self.uuid.to_string(),
scope: vec!["api".into(), "offline_access".into()],
scope,
amr: vec!["Application".into()],
};
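The new scope parameter lets callers of refresh_tokens() supply the scope claim that used to be hardcoded. A hedged sketch of how a caller might pick the value per grant type (GrantType and scope_for are illustrative names, not the actual login handlers):

// Illustrative only: the real login code passes the scope vector directly.
enum GrantType { Password, RefreshToken, ClientCredentials }

fn scope_for(grant: GrantType) -> Vec<String> {
    match grant {
        // Interactive logins keep the original scope.
        GrantType::Password | GrantType::RefreshToken => vec!["api".into(), "offline_access".into()],
        // API-key style logins could use a narrower scope.
        GrantType::ClientCredentials => vec!["api".into()],
    }
}

fn main() {
    assert_eq!(scope_for(GrantType::ClientCredentials), vec!["api".to_string()]);
}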
@@ -114,7 +116,7 @@ use crate::error::MapResult;
/// Database methods
impl Device {
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
@@ -127,39 +129,33 @@ impl Device {
postgresql {
let value = DeviceDb::to_db(self);
crate::util::retry(
|| diesel::insert_into(devices::table).values(&value).on_conflict(devices::uuid).do_update().set(&value).execute(conn),
|| diesel::insert_into(devices::table).values(&value).on_conflict((devices::uuid, devices::user_uuid)).do_update().set(&value).execute(conn),
10,
).map_res("Error saving device")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(devices::table.filter(devices::uuid.eq(self.uuid)))
diesel::delete(devices::table.filter(devices::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error removing device")
.map_res("Error removing devices for user")
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for device in Self::find_by_user(user_uuid, conn) {
device.delete(conn)?;
}
Ok(())
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
.filter(devices::user_uuid.eq(user_uuid))
.first::<DeviceDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_refresh_token(refresh_token: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_refresh_token(refresh_token: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::refresh_token.eq(refresh_token))
@@ -169,17 +165,7 @@ impl Device {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
.load::<DeviceDb>(conn)
.expect("Error loading devices")
.from_db()
}}
}
pub fn find_latest_active_by_user(user_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_latest_active_by_user(user_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))

View File

@@ -0,0 +1,281 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::User;
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "emergency_access"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
pub struct EmergencyAccess {
pub uuid: String,
pub grantor_uuid: String,
pub grantee_uuid: Option<String>,
pub email: Option<String>,
pub key_encrypted: Option<String>,
pub atype: i32, //EmergencyAccessType
pub status: i32, //EmergencyAccessStatus
pub wait_time_days: i32,
pub recovery_initiated_at: Option<NaiveDateTime>,
pub last_notification_at: Option<NaiveDateTime>,
pub updated_at: NaiveDateTime,
pub created_at: NaiveDateTime,
}
}
/// Local methods
impl EmergencyAccess {
pub fn new(grantor_uuid: String, email: Option<String>, status: i32, atype: i32, wait_time_days: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
grantor_uuid,
grantee_uuid: None,
email,
status,
atype,
wait_time_days,
recovery_initiated_at: None,
created_at: now,
updated_at: now,
key_encrypted: None,
last_notification_at: None,
}
}
pub fn get_type_as_str(&self) -> &'static str {
if self.atype == EmergencyAccessType::View as i32 {
"View"
} else {
"Takeover"
}
}
pub fn has_type(&self, access_type: EmergencyAccessType) -> bool {
self.atype == access_type as i32
}
pub fn has_status(&self, status: EmergencyAccessStatus) -> bool {
self.status == status as i32
}
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"Object": "emergencyAccess",
})
}
pub async fn to_json_grantor_details(&self, conn: &DbConn) -> Value {
let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).await.expect("Grantor user not found.");
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GrantorId": grantor_user.uuid,
"Email": grantor_user.email,
"Name": grantor_user.name,
"Object": "emergencyAccessGrantorDetails",
})
}
#[allow(clippy::manual_map)]
pub async fn to_json_grantee_details(&self, conn: &DbConn) -> Value {
let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
Some(User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found."))
} else if let Some(email) = self.email.as_deref() {
Some(User::find_by_mail(email, conn).await.expect("Grantee user not found."))
} else {
None
};
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
"Email": grantee_user.as_ref().map_or("", |u| &u.email),
"Name": grantee_user.as_ref().map_or("", |u| &u.name),
"Object": "emergencyAccessGranteeDetails",
})
}
}
#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
pub enum EmergencyAccessType {
View = 0,
Takeover = 1,
}
impl EmergencyAccessType {
pub fn from_str(s: &str) -> Option<Self> {
match s {
"0" | "View" => Some(EmergencyAccessType::View),
"1" | "Takeover" => Some(EmergencyAccessType::Takeover),
_ => None,
}
}
}
impl PartialEq<i32> for EmergencyAccessType {
fn eq(&self, other: &i32) -> bool {
*other == *self as i32
}
}
impl PartialEq<EmergencyAccessType> for i32 {
fn eq(&self, other: &EmergencyAccessType) -> bool {
*self == *other as i32
}
}
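The PartialEq impls above let the enum be compared directly against the raw i32 stored in the atype column, in either order. A small self-contained sketch of the same pattern:

#[derive(Copy, Clone)]
enum AccessType { View = 0, Takeover = 1 }

impl PartialEq<i32> for AccessType {
    fn eq(&self, other: &i32) -> bool {
        *other == *self as i32
    }
}

impl PartialEq<AccessType> for i32 {
    fn eq(&self, other: &AccessType) -> bool {
        *self == *other as i32
    }
}

fn main() {
    let stored_atype: i32 = 1; // value as persisted in the database
    assert!(stored_atype == AccessType::Takeover);
    assert!(AccessType::View == 0);
}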
pub enum EmergencyAccessStatus {
Invited = 0,
Accepted = 1,
Confirmed = 2,
RecoveryInitiated = 3,
RecoveryApproved = 4,
}
// region Database methods
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
impl EmergencyAccess {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(emergency_access::table)
.values(EmergencyAccessDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// The record already exists and replace_into() causes a foreign key violation, because it deletes the existing row before re-inserting it.

Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(emergency_access::table)
.filter(emergency_access::uuid.eq(&self.uuid))
.set(EmergencyAccessDb::to_db(self))
.execute(conn)
.map_res("Error updating emergency access")
}
Err(e) => Err(e.into()),
}.map_res("Error saving emergency access")
}
postgresql {
let value = EmergencyAccessDb::to_db(self);
diesel::insert_into(emergency_access::table)
.values(&value)
.on_conflict(emergency_access::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving emergency access")
}
}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for ea in Self::find_all_by_grantor_uuid(user_uuid, conn).await {
ea.delete(conn).await?;
}
for ea in Self::find_all_by_grantee_uuid(user_uuid, conn).await {
ea.delete(conn).await?;
}
Ok(())
}
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
db_run! { conn: {
diesel::delete(emergency_access::table.filter(emergency_access::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error removing user from emergency access")
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_by_grantor_uuid_and_grantee_uuid_or_email(
grantor_uuid: &str,
grantee_uuid: &str,
email: &str,
conn: &DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.filter(emergency_access::grantee_uuid.eq(grantee_uuid).or(emergency_access::email.eq(email)))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_all_recoveries(conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::status.eq(EmergencyAccessStatus::RecoveryInitiated as i32))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn find_invited_by_grantee_email(grantee_email: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::email.eq(grantee_email))
.filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
}
// endregion

View File

@@ -1,10 +1,8 @@
use super::{Cipher, User};
use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations)]
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "favorites"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[primary_key(user_uuid, cipher_uuid)]
pub struct Favorite {
pub user_uuid: String,
@@ -19,7 +17,7 @@ use crate::error::MapResult;
impl Favorite {
// Returns whether the specified cipher is a favorite of the specified user.
pub fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool {
pub async fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool {
db_run! { conn: {
let query = favorites::table
.filter(favorites::cipher_uuid.eq(cipher_uuid))
@@ -31,11 +29,11 @@ impl Favorite {
}
// Sets whether the specified cipher is a favorite of the specified user.
pub fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> EmptyResult {
let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn), favorite);
pub async fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> EmptyResult {
let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn).await, favorite);
match (old, new) {
(false, true) => {
User::update_uuid_revision(user_uuid, conn);
User::update_uuid_revision(user_uuid, conn).await;
db_run! { conn: {
diesel::insert_into(favorites::table)
.values((
@@ -47,7 +45,7 @@ impl Favorite {
}}
}
(true, false) => {
User::update_uuid_revision(user_uuid, conn);
User::update_uuid_revision(user_uuid, conn).await;
db_run! { conn: {
diesel::delete(
favorites::table
@@ -64,7 +62,7 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified cipher.
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -73,11 +71,23 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified user.
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error removing favorites by user")
}}
}
/// Return a Vec of cipher_uuids; this will only contain ciphers flagged as favorites.
/// This is used during a full sync so we only need one query for all favorite cipher matches.
pub async fn get_all_cipher_uuid_by_user(user_uuid: &str, conn: &DbConn) -> Vec<String> {
db_run! { conn: {
favorites::table
.filter(favorites::user_uuid.eq(user_uuid))
.select(favorites::cipher_uuid)
.load::<String>(conn)
.unwrap_or_default()
}}
}
}
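get_all_cipher_uuid_by_user feeds the full-sync optimisation mentioned in the commit message: one query per user instead of one is_favorite lookup per cipher. A hedged sketch of how the sync path can consume the result (a plain HashSet lookup; the names are illustrative):

use std::collections::HashSet;

fn main() {
    // Result of the single get_all_cipher_uuid_by_user query (illustrative values).
    let favorite_uuids: Vec<String> = vec!["cipher-a".into(), "cipher-c".into()];
    let favorites: HashSet<String> = favorite_uuids.into_iter().collect();

    // During the sync loop, each cipher is checked in memory instead of via per-row SQL.
    let ciphers = ["cipher-a", "cipher-b", "cipher-c"];
    for uuid in ciphers {
        let is_favorite = favorites.contains(uuid);
        println!("{uuid}: favorite = {is_favorite}");
    }
}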

View File

@@ -1,12 +1,11 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::{Cipher, User};
use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "folders"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Folder {
pub uuid: String,
@@ -16,10 +15,8 @@ db_object! {
pub name: String,
}
#[derive(Identifiable, Queryable, Insertable, Associations)]
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "folders_ciphers"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Folder, foreign_key = "folder_uuid")]
#[primary_key(cipher_uuid, folder_uuid)]
pub struct FolderCipher {
pub cipher_uuid: String,
@@ -70,8 +67,8 @@ use crate::error::MapResult;
/// Database methods
impl Folder {
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
self.updated_at = Utc::now().naive_utc();
db_run! { conn:
@@ -105,9 +102,9 @@ impl Folder {
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
FolderCipher::delete_all_by_folder(&self.uuid, conn)?;
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
FolderCipher::delete_all_by_folder(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid)))
@@ -116,14 +113,14 @@ impl Folder {
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for folder in Self::find_by_user(user_uuid, conn) {
folder.delete(conn)?;
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for folder in Self::find_by_user(user_uuid, conn).await {
folder.delete(conn).await?;
}
Ok(())
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
folders::table
.filter(folders::uuid.eq(uuid))
@@ -133,7 +130,7 @@ impl Folder {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
folders::table
.filter(folders::user_uuid.eq(user_uuid))
@@ -145,7 +142,7 @@ impl Folder {
}
impl FolderCipher {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
// Not checking for ForeignKey Constraints here.
@@ -167,7 +164,7 @@ impl FolderCipher {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(
folders_ciphers::table
@@ -179,7 +176,7 @@ impl FolderCipher {
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -187,7 +184,7 @@ impl FolderCipher {
}}
}
pub fn delete_all_by_folder(folder_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_folder(folder_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
.execute(conn)
@@ -195,7 +192,7 @@ impl FolderCipher {
}}
}
pub fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -206,7 +203,7 @@ impl FolderCipher {
}}
}
pub fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -215,4 +212,17 @@ impl FolderCipher {
.from_db()
}}
}
/// Return a vec with (cipher_uuid, folder_uuid)
/// This is used during a full sync so we only need one query for all folder matches.
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<(String, String)> {
db_run! { conn: {
folders_ciphers::table
.inner_join(folders::table)
.filter(folders::user_uuid.eq(user_uuid))
.select(folders_ciphers::all_columns)
.load::<(String, String)>(conn)
.unwrap_or_default()
}}
}
}
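FolderCipher::find_by_user serves the same N+1 elimination for folders: one query returns every (cipher_uuid, folder_uuid) pair for the user, which the sync can then index by cipher. A minimal sketch (names and values illustrative):

use std::collections::HashMap;

fn main() {
    // Result of the single find_by_user query (illustrative values).
    let pairs: Vec<(String, String)> = vec![
        ("cipher-a".into(), "folder-1".into()),
        ("cipher-b".into(), "folder-2".into()),
    ];

    // Index by cipher_uuid so each cipher's folder is a constant-time lookup.
    let folder_of: HashMap<String, String> = pairs.into_iter().collect();
    assert_eq!(folder_of.get("cipher-a").map(String::as_str), Some("folder-1"));
    assert_eq!(folder_of.get("cipher-c"), None);
}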

View File

@@ -2,22 +2,26 @@ mod attachment;
mod cipher;
mod collection;
mod device;
mod emergency_access;
mod favorite;
mod folder;
mod org_policy;
mod organization;
mod send;
mod two_factor;
mod two_factor_incomplete;
mod user;
pub use self::attachment::Attachment;
pub use self::cipher::Cipher;
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::Device;
pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher};
pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::send::{Send, SendType};
pub use self::two_factor::{TwoFactor, TwoFactorType};
pub use self::two_factor_incomplete::TwoFactorIncomplete;
pub use self::user::{Invitation, User, UserStampException};

View File

@@ -6,12 +6,11 @@ use crate::db::DbConn;
use crate::error::MapResult;
use crate::util::UpCase;
use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
use super::{UserOrgStatus, UserOrgType, UserOrganization};
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "org_policies"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct OrgPolicy {
pub uuid: String,
@@ -27,7 +26,7 @@ pub enum OrgPolicyType {
TwoFactorAuthentication = 0,
MasterPassword = 1,
PasswordGenerator = 2,
// SingleOrg = 3, // Not currently supported.
SingleOrg = 3,
// RequireSso = 4, // Not currently supported.
PersonalOwnership = 5,
DisableSend = 6,
@@ -72,7 +71,7 @@ impl OrgPolicy {
/// Database methods
impl OrgPolicy {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(org_policies::table)
@@ -115,7 +114,7 @@ impl OrgPolicy {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::uuid.eq(self.uuid)))
.execute(conn)
@@ -123,7 +122,7 @@ impl OrgPolicy {
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
@@ -133,7 +132,7 @@ impl OrgPolicy {
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
@@ -143,7 +142,7 @@ impl OrgPolicy {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.inner_join(
@@ -161,7 +160,7 @@ impl OrgPolicy {
}}
}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
pub async fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
@@ -172,7 +171,7 @@ impl OrgPolicy {
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(conn)
@@ -183,12 +182,12 @@ impl OrgPolicy {
/// Returns true if the user belongs to an org that has enabled the specified policy type,
/// and the user is not an owner or admin of that org. This is only useful for checking
/// applicability of policy types that have these particular semantics.
pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
// Returns confirmed users only.
for policy in OrgPolicy::find_by_user(user_uuid, conn) {
pub async fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
// TODO: Should check confirmed and accepted users
for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn).await {
if policy.enabled && policy.has_type(policy_type) {
let org_uuid = &policy.org_uuid;
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
return true;
}
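is_applicable_to_user walks every confirmed policy for the user and only applies a policy when the user sits below Admin in that org. A compact sketch of the same predicate over in-memory data (the types are simplified stand-ins, not the real models):

#[derive(Clone, Copy, PartialEq)]
enum PolicyType { TwoFactorAuthentication, PersonalOwnership }

#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum MemberType { User = 0, Admin = 2, Owner = 3 }

struct Policy { enabled: bool, ptype: PolicyType, member_type: MemberType }

// A policy applies when it is enabled, matches the requested type,
// and the member is neither an admin nor an owner of that org.
fn is_applicable(policies: &[Policy], wanted: PolicyType) -> bool {
    policies.iter().any(|p| p.enabled && p.ptype == wanted && p.member_type < MemberType::Admin)
}

fn main() {
    let policies = [Policy { enabled: true, ptype: PolicyType::PersonalOwnership, member_type: MemberType::User }];
    assert!(is_applicable(&policies, PolicyType::PersonalOwnership));
    assert!(!is_applicable(&policies, PolicyType::TwoFactorAuthentication));
}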
@@ -200,12 +199,11 @@ impl OrgPolicy {
/// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
/// option of the `Send Options` policy, and the user is not an owner or admin of that org.
pub fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
// Returns confirmed users only.
for policy in OrgPolicy::find_by_user(user_uuid, conn) {
pub async fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn).await {
if policy.enabled && policy.has_type(OrgPolicyType::SendOptions) {
let org_uuid = &policy.org_uuid;
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
match serde_json::from_str::<UpCase<SendOptionsPolicyData>>(&policy.data) {
Ok(opts) => {
@@ -221,12 +219,4 @@ impl OrgPolicy {
}
false
}
/*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error deleting twofactors")
}}
}*/
}

View File

@@ -193,10 +193,10 @@ use crate::error::MapResult;
/// Database methods
impl Organization {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
UserOrganization::find_by_org(&self.uuid, conn).iter().for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn);
});
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
for user_org in UserOrganization::find_by_org(&self.uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
}
db_run! { conn:
sqlite, mysql {
@@ -230,13 +230,13 @@ impl Organization {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
use super::{Cipher, Collection};
Cipher::delete_all_by_organization(&self.uuid, conn)?;
Collection::delete_all_by_organization(&self.uuid, conn)?;
UserOrganization::delete_all_by_organization(&self.uuid, conn)?;
OrgPolicy::delete_all_by_organization(&self.uuid, conn)?;
Cipher::delete_all_by_organization(&self.uuid, conn).await?;
Collection::delete_all_by_organization(&self.uuid, conn).await?;
UserOrganization::delete_all_by_organization(&self.uuid, conn).await?;
OrgPolicy::delete_all_by_organization(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
@@ -245,7 +245,7 @@ impl Organization {
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
organizations::table
.filter(organizations::uuid.eq(uuid))
@@ -254,7 +254,7 @@ impl Organization {
}}
}
pub fn get_all(conn: &DbConn) -> Vec<Self> {
pub async fn get_all(conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
organizations::table.load::<OrganizationDb>(conn).expect("Error loading organizations").from_db()
}}
@@ -262,8 +262,8 @@ impl Organization {
}
impl UserOrganization {
pub fn to_json(&self, conn: &DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).unwrap();
pub async fn to_json(&self, conn: &DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).await.unwrap();
json!({
"Id": self.org_uuid,
@@ -290,6 +290,8 @@ impl UserOrganization {
// For now they still have that code in the web-vault as well, but they will remove it at some point.
// https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
"UseBusinessPortal": false, // Disable BusinessPortal Button
"ProviderId": null,
"ProviderName": null,
// TODO: Add support for Custom User Roles
// See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
@@ -320,8 +322,8 @@ impl UserOrganization {
})
}
pub fn to_json_user_details(&self, conn: &DbConn) -> Value {
let user = User::find_by_uuid(&self.user_uuid, conn).unwrap();
pub async fn to_json_user_details(&self, conn: &DbConn) -> Value {
let user = User::find_by_uuid(&self.user_uuid, conn).await.unwrap();
json!({
"Id": self.uuid,
@@ -345,11 +347,12 @@ impl UserOrganization {
})
}
pub fn to_json_details(&self, conn: &DbConn) -> Value {
pub async fn to_json_details(&self, conn: &DbConn) -> Value {
let coll_uuids = if self.access_all {
vec![] // If we have complete access, no need to fill the array
} else {
let collections = CollectionUser::find_by_organization_and_user_uuid(&self.org_uuid, &self.user_uuid, conn);
let collections =
CollectionUser::find_by_organization_and_user_uuid(&self.org_uuid, &self.user_uuid, conn).await;
collections
.iter()
.map(|c| {
@@ -374,8 +377,8 @@ impl UserOrganization {
"Object": "organizationUserDetails",
})
}
pub fn save(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
db_run! { conn:
sqlite, mysql {
@@ -408,10 +411,10 @@ impl UserOrganization {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn).await;
CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, conn)?;
CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, conn).await?;
db_run! { conn: {
diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
@@ -420,23 +423,23 @@ impl UserOrganization {
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
for user_org in Self::find_by_org(org_uuid, conn) {
user_org.delete(conn)?;
pub async fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
for user_org in Self::find_by_org(org_uuid, conn).await {
user_org.delete(conn).await?;
}
Ok(())
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for user_org in Self::find_any_state_by_user(user_uuid, conn) {
user_org.delete(conn)?;
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for user_org in Self::find_any_state_by_user(user_uuid, conn).await {
user_org.delete(conn).await?;
}
Ok(())
}
pub fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
if let Some(user) = super::User::find_by_mail(email, conn) {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, conn) {
pub async fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
if let Some(user) = super::User::find_by_mail(email, conn).await {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, conn).await {
return Some(user_org);
}
}
@@ -456,7 +459,7 @@ impl UserOrganization {
(self.access_all || self.atype >= UserOrgType::Admin) && self.has_status(UserOrgStatus::Confirmed)
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
@@ -465,7 +468,7 @@ impl UserOrganization {
}}
}
pub fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
@@ -475,7 +478,7 @@ impl UserOrganization {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -485,7 +488,7 @@ impl UserOrganization {
}}
}
pub fn find_invited_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_invited_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -495,7 +498,7 @@ impl UserOrganization {
}}
}
pub fn find_any_state_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_any_state_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -504,7 +507,7 @@ impl UserOrganization {
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -513,7 +516,7 @@ impl UserOrganization {
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -524,7 +527,7 @@ impl UserOrganization {
}}
}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -534,7 +537,7 @@ impl UserOrganization {
}}
}
pub fn find_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
@@ -544,7 +547,16 @@ impl UserOrganization {
}}
}
pub fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.load::<UserOrganizationDb>(conn)
.expect("Error loading user organizations").from_db()
}}
}
pub async fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.inner_join(
@@ -563,7 +575,7 @@ impl UserOrganization {
}}
}
pub fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
@@ -585,7 +597,7 @@ impl UserOrganization {
}}
}
pub fn find_by_collection_and_org(collection_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_collection_and_org(collection_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))

View File

@@ -1,14 +1,12 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::{Organization, User};
use super::User;
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "sends"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
pub struct Send {
pub uuid: String,
@@ -47,7 +45,7 @@ pub enum SendType {
}
impl Send {
pub fn new(atype: i32, name: String, data: String, akey: String, deletion_date: NaiveDateTime) -> Self {
pub async fn new(atype: i32, name: String, data: String, akey: String, deletion_date: NaiveDateTime) -> Self {
let now = Utc::now().naive_utc();
Self {
@@ -103,7 +101,7 @@ impl Send {
}
}
pub fn creator_identifier(&self, conn: &DbConn) -> Option<String> {
pub async fn creator_identifier(&self, conn: &DbConn) -> Option<String> {
if let Some(hide_email) = self.hide_email {
if hide_email {
return None;
@@ -111,7 +109,7 @@ impl Send {
}
if let Some(user_uuid) = &self.user_uuid {
if let Some(user) = User::find_by_uuid(user_uuid, conn) {
if let Some(user) = User::find_by_uuid(user_uuid, conn).await {
return Some(user.email);
}
}
@@ -150,7 +148,7 @@ impl Send {
})
}
pub fn to_json_access(&self, conn: &DbConn) -> Value {
pub async fn to_json_access(&self, conn: &DbConn) -> Value {
use crate::util::format_date;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
@@ -164,7 +162,7 @@ impl Send {
"File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"ExpirationDate": self.expiration_date.as_ref().map(format_date),
"CreatorIdentifier": self.creator_identifier(conn),
"CreatorIdentifier": self.creator_identifier(conn).await,
"Object": "send-access",
})
}
@@ -176,8 +174,8 @@ use crate::api::EmptyResult;
use crate::error::MapResult;
impl Send {
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
self.revision_date = Utc::now().naive_utc();
db_run! { conn:
@@ -211,8 +209,8 @@ impl Send {
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn).await;
if self.atype == SendType::File as i32 {
std::fs::remove_dir_all(std::path::Path::new(&crate::CONFIG.sends_folder()).join(&self.uuid)).ok();
@@ -226,31 +224,34 @@ impl Send {
}
/// Purge all sends that are past their deletion date.
pub fn purge(conn: &DbConn) {
for send in Self::find_by_past_deletion_date(conn) {
send.delete(conn).ok();
pub async fn purge(conn: &DbConn) {
for send in Self::find_by_past_deletion_date(conn).await {
send.delete(conn).await.ok();
}
}
pub fn update_users_revision(&self, conn: &DbConn) {
pub async fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
let mut user_uuids = Vec::new();
match &self.user_uuid {
Some(user_uuid) => {
User::update_uuid_revision(user_uuid, conn);
User::update_uuid_revision(user_uuid, conn).await;
user_uuids.push(user_uuid.clone())
}
None => {
// Belongs to Organization, not implemented
}
}
};
user_uuids
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for send in Self::find_by_user(user_uuid, conn) {
send.delete(conn)?;
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for send in Self::find_by_user(user_uuid, conn).await {
send.delete(conn).await?;
}
Ok(())
}
pub fn find_by_access_id(access_id: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_access_id(access_id: &str, conn: &DbConn) -> Option<Self> {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
@@ -264,10 +265,10 @@ impl Send {
Err(_) => return None,
};
Self::find_by_uuid(&uuid, conn)
Self::find_by_uuid(&uuid, conn).await
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
@@ -277,7 +278,7 @@ impl Send {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::user_uuid.eq(user_uuid))
@@ -285,7 +286,7 @@ impl Send {
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::organization_uuid.eq(org_uuid))
@@ -293,7 +294,7 @@ impl Send {
}}
}
pub fn find_by_past_deletion_date(conn: &DbConn) -> Vec<Self> {
pub async fn find_by_past_deletion_date(conn: &DbConn) -> Vec<Self> {
let now = Utc::now().naive_utc();
db_run! {conn: {
sends::table

View File

@@ -1,15 +1,10 @@
use serde_json::Value;
use crate::api::EmptyResult;
use crate::db::DbConn;
use crate::error::MapResult;
use super::User;
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "twofactor"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct TwoFactor {
pub uuid: String,
@@ -73,7 +68,7 @@ impl TwoFactor {
/// Database methods
impl TwoFactor {
pub fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(twofactor::table)
@@ -112,7 +107,7 @@ impl TwoFactor {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::uuid.eq(self.uuid)))
.execute(conn)
@@ -120,7 +115,7 @@ impl TwoFactor {
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -131,7 +126,7 @@ impl TwoFactor {
}}
}
pub fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
pub async fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -142,7 +137,7 @@ impl TwoFactor {
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -150,7 +145,7 @@ impl TwoFactor {
}}
}
pub fn migrate_u2f_to_webauthn(conn: &DbConn) -> EmptyResult {
pub async fn migrate_u2f_to_webauthn(conn: &DbConn) -> EmptyResult {
let u2f_factors = db_run! { conn: {
twofactor::table
.filter(twofactor::atype.eq(TwoFactorType::U2f as i32))
@@ -159,9 +154,8 @@ impl TwoFactor {
.from_db()
}};
use crate::api::core::two_factor::u2f::U2FRegistration;
use crate::api::core::two_factor::webauthn::U2FRegistration;
use crate::api::core::two_factor::webauthn::{get_webauthn_registrations, WebauthnRegistration};
use std::convert::TryInto;
use webauthn_rs::proto::*;
for mut u2f in u2f_factors {
@@ -171,7 +165,7 @@ impl TwoFactor {
continue;
}
let (_, mut webauthn_regs) = get_webauthn_registrations(&u2f.user_uuid, conn)?;
let (_, mut webauthn_regs) = get_webauthn_registrations(&u2f.user_uuid, conn).await?;
// If the user already has webauthn registrations saved, don't overwrite them
if !webauthn_regs.is_empty() {
@@ -210,10 +204,11 @@ impl TwoFactor {
}
u2f.data = serde_json::to_string(&regs)?;
u2f.save(conn)?;
u2f.save(conn).await?;
TwoFactor::new(u2f.user_uuid.clone(), TwoFactorType::Webauthn, serde_json::to_string(&webauthn_regs)?)
.save(conn)?;
.save(conn)
.await?;
}
Ok(())

View File

@@ -0,0 +1,105 @@
use chrono::{NaiveDateTime, Utc};
use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONFIG};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "twofactor_incomplete"]
#[primary_key(user_uuid, device_uuid)]
pub struct TwoFactorIncomplete {
pub user_uuid: String,
// This device UUID is simply what's claimed by the device. It doesn't
// necessarily correspond to any UUID in the devices table, since a device
// must complete 2FA login before being added into the devices table.
pub device_uuid: String,
pub device_name: String,
pub login_time: NaiveDateTime,
pub ip_address: String,
}
}
impl TwoFactorIncomplete {
pub async fn mark_incomplete(
user_uuid: &str,
device_uuid: &str,
device_name: &str,
ip: &ClientIp,
conn: &DbConn,
) -> EmptyResult {
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return Ok(());
}
// Don't update the data for an existing user/device pair, since that
// would allow an attacker to arbitrarily delay notifications by
// sending repeated 2FA attempts to reset the timer.
let existing = Self::find_by_user_and_device(user_uuid, device_uuid, conn).await;
if existing.is_some() {
return Ok(());
}
db_run! { conn: {
diesel::insert_into(twofactor_incomplete::table)
.values((
twofactor_incomplete::user_uuid.eq(user_uuid),
twofactor_incomplete::device_uuid.eq(device_uuid),
twofactor_incomplete::device_name.eq(device_name),
twofactor_incomplete::login_time.eq(Utc::now().naive_utc()),
twofactor_incomplete::ip_address.eq(ip.ip.to_string()),
))
.execute(conn)
.map_res("Error adding twofactor_incomplete record")
}}
}
pub async fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return Ok(());
}
Self::delete_by_user_and_device(user_uuid, device_uuid, conn).await
}
pub async fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
.filter(twofactor_incomplete::device_uuid.eq(device_uuid))
.first::<TwoFactorIncompleteDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_logins_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
twofactor_incomplete::table
.filter(twofactor_incomplete::login_time.lt(dt))
.load::<TwoFactorIncompleteDb>(conn)
.expect("Error loading twofactor_incomplete")
.from_db()
}}
}
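find_logins_before is intended for a periodic job that notifies users about logins which never completed 2FA within the configured time limit. A hedged sketch of how the cutoff could be computed (the 15-minute limit and the surrounding job are illustrative, not the actual implementation):

use chrono::{Duration, NaiveDateTime, Utc};

fn main() {
    // Illustrative stand-in for CONFIG.incomplete_2fa_time_limit() (minutes).
    let time_limit_minutes: i64 = 15;

    // Anything with a login_time older than this cutoff is considered incomplete.
    let cutoff: NaiveDateTime = Utc::now().naive_utc() - Duration::minutes(time_limit_minutes);

    // The job would then call TwoFactorIncomplete::find_logins_before(&cutoff, conn)
    // and send a notification mail for each returned record.
    println!("notify for incomplete 2FA logins before {cutoff}");
}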
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn).await
}
pub async fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
.filter(twofactor_incomplete::device_uuid.eq(device_uuid)))
.execute(conn)
.map_res("Error in twofactor_incomplete::delete_by_user_and_device()")
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error in twofactor_incomplete::delete_all_by_user()")
}}
}
}

View File

@@ -44,8 +44,9 @@ db_object! {
pub client_kdf_type: i32,
pub client_kdf_iter: i32,
}
pub api_key: Option<String>,
}
#[derive(Identifiable, Queryable, Insertable)]
#[table_name = "invitations"]
@@ -73,9 +74,9 @@ impl User {
pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
pub const CLIENT_KDF_ITER_DEFAULT: i32 = 100_000;
pub fn new(mail: String) -> Self {
pub fn new(email: String) -> Self {
let now = Utc::now().naive_utc();
let email = mail.to_lowercase();
let email = email.to_lowercase();
Self {
uuid: crate::util::get_uuid(),
@@ -110,6 +111,8 @@ impl User {
client_kdf_type: Self::CLIENT_KDF_TYPE_DEFAULT,
client_kdf_iter: Self::CLIENT_KDF_ITER_DEFAULT,
api_key: None,
}
}
@@ -130,6 +133,10 @@ impl User {
}
}
pub fn check_valid_api_key(&self, key: &str) -> bool {
matches!(self.api_key, Some(ref api_key) if crate::crypto::ct_eq(api_key, key))
}
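check_valid_api_key compares the stored key with crypto::ct_eq rather than ==, so the comparison time does not leak how many leading characters matched. A minimal sketch of a constant-time byte comparison in the same spirit (illustrative, not the crypto::ct_eq implementation):

// Illustrative constant-time comparison: examine every byte regardless of
// where the first mismatch occurs, so timing does not reveal the matching prefix length.
fn ct_eq_sketch(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq_sketch(b"secret-key", b"secret-key"));
    assert!(!ct_eq_sketch(b"secret-key", b"secret-kez"));
}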
/// Set the generated password hash
/// and reset the security_stamp. Based upon allow_next_route, the security_stamp will differ.
///
@@ -176,18 +183,29 @@ impl User {
}
}
use super::{Cipher, Device, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
use super::{
Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, TwoFactorIncomplete, UserOrgType,
UserOrganization,
};
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use futures::{stream, stream::StreamExt};
/// Database methods
impl User {
pub fn to_json(&self, conn: &DbConn) -> Value {
let orgs = UserOrganization::find_by_user(&self.uuid, conn);
let orgs_json: Vec<Value> = orgs.iter().map(|c| c.to_json(conn)).collect();
let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).is_empty();
pub async fn to_json(&self, conn: &DbConn) -> Value {
let orgs_json = stream::iter(UserOrganization::find_confirmed_by_user(&self.uuid, conn).await)
.then(|c| async {
let c = c; // Move out this single variable
c.to_json(conn).await
})
.collect::<Vec<Value>>()
.await;
let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).await.is_empty();
// TODO: Might want to save the status field in the DB
let status = if self.password_hash.is_empty() {
@@ -210,11 +228,14 @@ impl User {
"PrivateKey": self.private_key,
"SecurityStamp": self.security_stamp,
"Organizations": orgs_json,
"Object": "profile"
"Providers": [],
"ProviderOrganizations": [],
"ForcePasswordReset": false,
"Object": "profile",
})
}
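Because to_json for each organization is now async, the plain iterator map is replaced with a futures stream that awaits each item in turn. A self-contained sketch of that pattern (the doubling closure stands in for the real per-item async work):

use futures::{executor::block_on, stream, StreamExt};

async fn collect_async_mapped(items: Vec<u32>) -> Vec<u32> {
    stream::iter(items)
        // `then` awaits the async closure for every element, preserving order.
        .then(|item| async move { item * 2 })
        .collect::<Vec<u32>>()
        .await
}

fn main() {
    let doubled = block_on(collect_async_mapped(vec![1, 2, 3]));
    assert_eq!(doubled, vec![2, 4, 6]);
}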
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("User email can't be empty")
}
@@ -252,24 +273,26 @@ impl User {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
for user_org in UserOrganization::find_by_user(&self.uuid, conn) {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
if user_org.atype == UserOrgType::Owner {
let owner_type = UserOrgType::Owner as i32;
if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).len() <= 1 {
if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).await.len() <= 1 {
err!("Can't delete last owner")
}
}
}
Send::delete_all_by_user(&self.uuid, conn)?;
UserOrganization::delete_all_by_user(&self.uuid, conn)?;
Cipher::delete_all_by_user(&self.uuid, conn)?;
Favorite::delete_all_by_user(&self.uuid, conn)?;
Folder::delete_all_by_user(&self.uuid, conn)?;
Device::delete_all_by_user(&self.uuid, conn)?;
TwoFactor::delete_all_by_user(&self.uuid, conn)?;
Invitation::take(&self.email, conn); // Delete invitation if any
Send::delete_all_by_user(&self.uuid, conn).await?;
EmergencyAccess::delete_all_by_user(&self.uuid, conn).await?;
UserOrganization::delete_all_by_user(&self.uuid, conn).await?;
Cipher::delete_all_by_user(&self.uuid, conn).await?;
Favorite::delete_all_by_user(&self.uuid, conn).await?;
Folder::delete_all_by_user(&self.uuid, conn).await?;
Device::delete_all_by_user(&self.uuid, conn).await?;
TwoFactor::delete_all_by_user(&self.uuid, conn).await?;
TwoFactorIncomplete::delete_all_by_user(&self.uuid, conn).await?;
Invitation::take(&self.email, conn).await; // Delete invitation if any
db_run! {conn: {
diesel::delete(users::table.filter(users::uuid.eq(self.uuid)))
@@ -278,13 +301,13 @@ impl User {
}}
}
pub fn update_uuid_revision(uuid: &str, conn: &DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn) {
pub async fn update_uuid_revision(uuid: &str, conn: &DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
warn!("Failed to update revision for {}: {:#?}", uuid, e);
}
}
pub fn update_all_revisions(conn: &DbConn) -> EmptyResult {
pub async fn update_all_revisions(conn: &DbConn) -> EmptyResult {
let updated_at = Utc::now().naive_utc();
db_run! {conn: {
@@ -297,13 +320,13 @@ impl User {
}}
}
pub fn update_revision(&mut self, conn: &DbConn) -> EmptyResult {
pub async fn update_revision(&mut self, conn: &DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
Self::_update_revision(&self.uuid, &self.updated_at, conn)
Self::_update_revision(&self.uuid, &self.updated_at, conn).await
}
fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &DbConn) -> EmptyResult {
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(users::table.filter(users::uuid.eq(uuid)))
@@ -314,7 +337,7 @@ impl User {
}}
}
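
`_update_revision` keeps the existing `crate::util::retry(..)` wrapper around the Diesel update. As a rough illustration only — the real helper's signature and back-off are not shown in this diff, so everything below is hypothetical — a retry wrapper of that general shape might be:

// Hypothetical sketch of a retry helper: re-run a fallible closure a few times
// before giving up, sleeping briefly between attempts. Not the vaultwarden code.
fn retry<T, E>(mut op: impl FnMut() -> Result<T, E>, max_attempts: u32) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                attempt += 1;
                std::thread::sleep(std::time::Duration::from_millis(50));
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<u32, &str> = retry(
        || {
            calls += 1;
            if calls < 3 {
                Err("transient failure") // e.g. a lock-contention style error
            } else {
                Ok(calls)
            }
        },
        5,
    );
    println!("succeeded after {calls} calls: {result:?}");
}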
pub fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
db_run! {conn: {
users::table
@@ -325,20 +348,20 @@ impl User {
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! {conn: {
users::table.filter(users::uuid.eq(uuid)).first::<UserDb>(conn).ok().from_db()
}}
}
pub fn get_all(conn: &DbConn) -> Vec<Self> {
pub async fn get_all(conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
users::table.load::<UserDb>(conn).expect("Error loading users").from_db()
}}
}
pub fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
match Device::find_latest_active_by_user(&self.uuid, conn) {
pub async fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
match Device::find_latest_active_by_user(&self.uuid, conn).await {
Some(device) => Some(device.updated_at),
None => None,
}
@@ -346,13 +369,14 @@ impl User {
}
impl Invitation {
pub const fn new(email: String) -> Self {
pub fn new(email: String) -> Self {
let email = email.to_lowercase();
Self {
email,
}
}
pub fn save(&self, conn: &DbConn) -> EmptyResult {
pub async fn save(&self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("Invitation email can't be empty")
}
@@ -377,7 +401,7 @@ impl Invitation {
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
pub async fn delete(self, conn: &DbConn) -> EmptyResult {
db_run! {conn: {
diesel::delete(invitations::table.filter(invitations::email.eq(self.email)))
.execute(conn)
@@ -385,7 +409,7 @@ impl Invitation {
}}
}
pub fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
pub async fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
db_run! {conn: {
invitations::table
@@ -396,9 +420,9 @@ impl Invitation {
}}
}
pub fn take(mail: &str, conn: &DbConn) -> bool {
match Self::find_by_mail(mail, conn) {
Some(invitation) => invitation.delete(conn).is_ok(),
pub async fn take(mail: &str, conn: &DbConn) -> bool {
match Self::find_by_mail(mail, conn).await {
Some(invitation) => invitation.delete(conn).await.is_ok(),
None => false,
}
}
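
`Invitation::take` pairs `find_by_mail` with `delete` and reports whether an invitation was actually consumed, with the lowercase normalization now done in `Invitation::new`. A self-contained toy sketch of that take-once, case-insensitive pattern (standard-library types only, not the vaultwarden API):

// Look an item up by a normalized key and, if present, remove it,
// returning whether anything was consumed.
use std::collections::HashMap;

struct Invitations(HashMap<String, ()>);

impl Invitations {
    fn take(&mut self, mail: &str) -> bool {
        // Normalize the address the same way the model lowercases emails.
        self.0.remove(&mail.to_lowercase()).is_some()
    }
}

fn main() {
    let mut store = Invitations(HashMap::from([("user@example.com".to_string(), ())]));
    assert!(store.take("User@Example.com"));  // case-insensitive lookup succeeds
    assert!(!store.take("user@example.com")); // already consumed
    println!("invitation consumed exactly once");
}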

View File

@@ -42,7 +42,7 @@ table! {
}
table! {
devices (uuid) {
devices (uuid, user_uuid) {
uuid -> Text,
created_at -> Datetime,
updated_at -> Datetime,
@@ -140,6 +140,16 @@ table! {
}
}
table! {
twofactor_incomplete (user_uuid, device_uuid) {
user_uuid -> Text,
device_uuid -> Text,
device_name -> Text,
login_time -> Timestamp,
ip_address -> Text,
}
}
table! {
users (uuid) {
uuid -> Text,
@@ -168,6 +178,7 @@ table! {
excluded_globals -> Text,
client_kdf_type -> Integer,
client_kdf_iter -> Integer,
api_key -> Nullable<Text>,
}
}
@@ -192,6 +203,23 @@ table! {
}
}
table! {
emergency_access (uuid) {
uuid -> Text,
grantor_uuid -> Text,
grantee_uuid -> Nullable<Text>,
email -> Nullable<Text>,
key_encrypted -> Nullable<Text>,
atype -> Integer,
status -> Integer,
wait_time_days -> Integer,
recovery_initiated_at -> Nullable<Timestamp>,
last_notification_at -> Nullable<Timestamp>,
updated_at -> Timestamp,
created_at -> Timestamp,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -210,6 +238,7 @@ joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(emergency_access -> users (grantor_uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -227,4 +256,5 @@ allow_tables_to_appear_in_same_query!(
users,
users_collections,
users_organizations,
emergency_access,
);
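
The schema change above makes `(uuid, user_uuid)` the composite primary key of the `devices` table. A standalone Diesel sketch — trimmed columns and hypothetical values, assuming the `diesel` crate with its `sqlite` feature — of how a single-row lookup then filters on both key columns:

// Declare a table with a composite primary key and build a query that pins
// down one row by filtering on both key columns.
use diesel::prelude::*;

diesel::table! {
    devices (uuid, user_uuid) {
        uuid -> Text,
        user_uuid -> Text,
        name -> Text,
    }
}

fn main() {
    use devices::dsl::*;

    let query = devices
        .filter(uuid.eq("device-1"))
        .filter(user_uuid.eq("user-1"));

    // Print the SQL Diesel would generate for the SQLite backend.
    println!("{}", diesel::debug_query::<diesel::sqlite::Sqlite, _>(&query));
}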

View File

@@ -42,7 +42,7 @@ table! {
}
table! {
devices (uuid) {
devices (uuid, user_uuid) {
uuid -> Text,
created_at -> Timestamp,
updated_at -> Timestamp,
@@ -140,6 +140,16 @@ table! {
}
}
table! {
twofactor_incomplete (user_uuid, device_uuid) {
user_uuid -> Text,
device_uuid -> Text,
device_name -> Text,
login_time -> Timestamp,
ip_address -> Text,
}
}
table! {
users (uuid) {
uuid -> Text,
@@ -168,6 +178,7 @@ table! {
excluded_globals -> Text,
client_kdf_type -> Integer,
client_kdf_iter -> Integer,
api_key -> Nullable<Text>,
}
}
@@ -192,6 +203,23 @@ table! {
}
}
table! {
emergency_access (uuid) {
uuid -> Text,
grantor_uuid -> Text,
grantee_uuid -> Nullable<Text>,
email -> Nullable<Text>,
key_encrypted -> Nullable<Text>,
atype -> Integer,
status -> Integer,
wait_time_days -> Integer,
recovery_initiated_at -> Nullable<Timestamp>,
last_notification_at -> Nullable<Timestamp>,
updated_at -> Timestamp,
created_at -> Timestamp,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -210,6 +238,7 @@ joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(emergency_access -> users (grantor_uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -227,4 +256,5 @@ allow_tables_to_appear_in_same_query!(
users,
users_collections,
users_organizations,
emergency_access,
);

View File

@@ -42,7 +42,7 @@ table! {
}
table! {
devices (uuid) {
devices (uuid, user_uuid) {
uuid -> Text,
created_at -> Timestamp,
updated_at -> Timestamp,
@@ -140,6 +140,16 @@ table! {
}
}
table! {
twofactor_incomplete (user_uuid, device_uuid) {
user_uuid -> Text,
device_uuid -> Text,
device_name -> Text,
login_time -> Timestamp,
ip_address -> Text,
}
}
table! {
users (uuid) {
uuid -> Text,
@@ -168,6 +178,7 @@ table! {
excluded_globals -> Text,
client_kdf_type -> Integer,
client_kdf_iter -> Integer,
api_key -> Nullable<Text>,
}
}
@@ -192,6 +203,23 @@ table! {
}
}
table! {
emergency_access (uuid) {
uuid -> Text,
grantor_uuid -> Text,
grantee_uuid -> Nullable<Text>,
email -> Nullable<Text>,
key_encrypted -> Nullable<Text>,
atype -> Integer,
status -> Integer,
wait_time_days -> Integer,
recovery_initiated_at -> Nullable<Timestamp>,
last_notification_at -> Nullable<Timestamp>,
updated_at -> Timestamp,
created_at -> Timestamp,
}
}
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));
@@ -210,6 +238,7 @@ joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
joinable!(emergency_access -> users (grantor_uuid));
allow_tables_to_appear_in_same_query!(
attachments,
@@ -227,4 +256,5 @@ allow_tables_to_appear_in_same_query!(
users,
users_collections,
users_organizations,
emergency_access,
);

Some files were not shown because too many files have changed in this diff.