Compare commits

...

178 Commits

Author SHA1 Message Date
Daniel García
52ed8e4d75 Merge pull request #1026 from BlackDex/issue-1022
Fixes #1022 cloning with attachments
2020-06-07 19:53:47 +02:00
BlackDex
24c914799d Fixes #1022 cloning with attachments
When a cipher has one or more attachments, it could not be cloned.
This commit fixes that issue.
2020-06-07 17:57:04 +02:00
Daniel García
db53511855 Merge pull request #1020 from BlackDex/admin-interface
Fixed wrong status if there is an update.
2020-06-04 18:50:00 +02:00
BlackDex
325691e588 Fixed wrong status if there is an update.
- First check whether the SHA hash is also contained in the server version.
- Added a badge to show if you are on a branched build.
2020-06-04 17:05:17 +02:00
Daniel García
fac3cb687d Merge pull request #1019 from xoxys/master
Add back openssl crate
2020-06-04 01:24:28 +02:00
Robert Kaussow
afbf1db331 add back openssl crate 2020-06-04 01:21:30 +02:00
Daniel García
1aefaec297 Merge pull request #1018 from BlackDex/admin-interface
Admin interface
2020-06-03 22:48:03 +02:00
Daniel García
f1d3fb5d40 Merge pull request #1017 from dprobinson/patch-1
Added missing ENV Variable for Implicit TLS
2020-06-03 22:47:53 +02:00
BlackDex
ac2723f898 Updated Organizations overview
- Changed HTML to match users overview
- Added User count
- Added Org cipher amount
- Added Attachment count and size
2020-06-03 20:37:31 +02:00
BlackDex
2fffaec226 Added attachment info per user and some layout fix
- Added the amount and size of the attachments per user
- Changed the items count function a bit
- Some small layout changes
2020-06-03 17:57:03 +02:00
BlackDex
5c54dfee3a Fixed an issue when DNS resolving fails.
In the event of failed DNS resolution, checking for new versions would
cause a huge delay and, in the end, a timeout when loading the page.

- Check if DNS resolution failed; if that is the case, do not check for
  new versions
- Changed `fn get_github_api` to make use of structs
- Added a timeout of 10 seconds for the version check requests
- Moved the "Unknown" labels to the "Latest" label
2020-06-03 17:07:32 +02:00
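A minimal sketch of the behaviour described above, assuming the `reqwest` blocking client and a hypothetical `GitHubRelease` struct; the real `fn get_github_api` differs in detail:

```rust
use std::net::ToSocketAddrs;
use std::time::Duration;

use serde::Deserialize;

#[derive(Deserialize)]
struct GitHubRelease {
    tag_name: String,
}

fn dns_works(host: &str) -> bool {
    // Resolving the host to socket addresses is enough to detect broken DNS.
    (host, 443).to_socket_addrs().map(|mut a| a.next().is_some()).unwrap_or(false)
}

fn get_github_api(url: &str) -> Result<GitHubRelease, reqwest::Error> {
    let client = reqwest::blocking::Client::builder()
        .user_agent("bitwarden_rs") // GitHub's API rejects requests without a User-Agent
        .timeout(Duration::from_secs(10)) // cap each version-check request at 10 seconds
        .build()?;
    client.get(url).send()?.error_for_status()?.json::<GitHubRelease>()
}

fn latest_release() -> Option<String> {
    if !dns_works("api.github.com") {
        return None; // DNS is broken: report "Unknown"/"Latest" instead of hanging the page
    }
    get_github_api("https://api.github.com/repos/dani-garcia/bitwarden_rs/releases/latest")
        .ok()
        .map(|r| r.tag_name)
}

fn main() {
    match latest_release() {
        Some(tag) => println!("Latest release: {}", tag),
        None => println!("Latest release: Unknown"),
    }
}
```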
David P Robinson
967d2d78ec Added missing ENV Variable for Implicit TLS 2020-06-02 23:46:26 +01:00
Daniel García
1aa5e0d4dc Merge pull request #1012 from BlackDex/admin-interface
Updated js/css libraries and fixed smallscreen err
2020-06-01 20:07:13 +02:00
BlackDex
b47cf97409 Updated js/css libraries and fixed smallscreen err
- Updated bootstrap js and css to the latest version
- Fixed an issue on small screens where the menu overlaps the token input
  - The menu now collapses to a hamburger menu
  - Menus only accessible when logged in are hidden when you are not
- Changed Users Overview to use a table to prevent small-screen issues.
2020-06-01 18:58:38 +02:00
Daniel García
5e802f8aa3 Update lettre to alpha release instead of git commit, and update the rest of dependencies while we are at it 2020-05-31 17:58:06 +02:00
Daniel García
0bdeb02a31 Merge pull request #1009 from jjlin/email-subject
Don't HTML-escape email subject lines
2020-05-31 00:22:58 +02:00
Daniel García
b03698fadb Merge pull request #1010 from jjlin/admin-url
Avoid double-slashes in the admin URL
2020-05-31 00:22:46 +02:00
Jeremy Lin
39d1a09704 Avoid double-slashes in the admin URL 2020-05-30 01:06:40 -07:00
Jeremy Lin
a447e4e7ef Don't HTML-escape email subject lines
For example, this causes org names like `X&Y` to appear as `X&amp;Y`.
2020-05-30 00:36:43 -07:00
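A hedged illustration of the escaping issue, assuming the subjects are rendered through the project's handlebars templates; the template string and data below are made up:

```rust
use handlebars::Handlebars;
use serde_json::json;

fn main() {
    let mut hb = Handlebars::new();
    hb.register_template_string("subject", "Invitation to {{org}}").unwrap();
    let data = json!({ "org": "X&Y" });

    // Default behaviour: HTML entity escaping, fine for HTML bodies but wrong for subjects.
    assert_eq!(hb.render("subject", &data).unwrap(), "Invitation to X&amp;Y");

    // Subjects are plain text, so switch escaping off (alternatively use {{{org}}}).
    hb.register_escape_fn(handlebars::no_escape);
    assert_eq!(hb.render("subject", &data).unwrap(), "Invitation to X&Y");
}
```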
Daniel García
4eee6e7aee Merge pull request #1007 from BlackDex/admin-interface
Admin interface restyle
2020-05-28 20:54:11 +02:00
BlackDex
b6fde857a7 Added version check to diagnostics
- Added a version check based upon the github api information.
2020-05-28 20:25:25 +02:00
BlackDex
3c66deb5cc Redesign of the admin interface.
Main changes:
 - Split up settings and users into two separate pages.
 - Added verified shield when the e-mail address has been verified.
 - Added the amount of personal items in the database to the users overview.
 - Added Organizations and Diagnostics pages.
   - Shows if DNS resolving works.
   - Shows if there is a possible time drift.
   - Shows current versions of server and web-vault.
 - Optimized logo-gray.png using optipng

Items which can be added later:
 - Amount of cipher items accessible for a user, not only their personal items.
 - Amount of users per Org
 - Version update check in the diagnostics overview.
 - Copy/Pasteable runtime config which has sensitive data changed or removed for support questions either on the forum or github issues.
 - Option to delete Orgs and all its passwords (when there are no members anymore).
 - Etc....
2020-05-28 10:46:25 +02:00
Daniel García
4146612a32 Merge pull request #1006 from jjlin/email-change
Allow email changes for existing accounts even when signups are disabled
2020-05-27 18:18:21 +02:00
Jeremy Lin
a314933557 Allow email changes for existing accounts even when signups are disabled 2020-05-24 14:38:19 -07:00
Daniel García
c5d7e3f2bc Merge pull request #1003 from frdescam/fix_arm_displaysize
Use format! for rounding to fix arm issue
2020-05-23 13:10:06 +02:00
Daniel García
c95a2881b5 Merge pull request #998 from frdescam/fix_email_templates
Fixing bad width in 2FA email template
2020-05-23 13:09:44 +02:00
fdeĉ
4c3727b4a3 use format! for rounding to fix arm issue 2020-05-22 12:10:56 +02:00
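A small sketch of the technique, with a hypothetical size-formatting helper: rather than rounding floats (which misbehaved on ARM for display sizes), let `format!` do the fixed-precision rounding:

```rust
// Illustrative helper; not the actual bitwarden_rs function.
fn human_size(bytes: u64) -> String {
    const KB: f64 = 1024.0;
    let size = bytes as f64;
    if size >= KB * KB {
        format!("{:.2} MB", size / (KB * KB)) // format! handles the rounding
    } else if size >= KB {
        format!("{:.2} KB", size / KB)
    } else {
        format!("{} bytes", bytes)
    }
}

fn main() {
    assert_eq!(human_size(1536), "1.50 KB");
}
```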
Daniel García
a1f304dff7 Update web vault to v2.14.0 2020-05-21 22:49:15 +02:00
Daniel García
a8870eef0d Convert to f32 before rounding to fix arm issue 2020-05-20 17:58:39 +02:00
François
afaebc6cf3 fixing hard coded width email templates 2020-05-20 13:38:04 +02:00
François
8f4a1f4fc2 fixing bad width in 2FA email template 2020-05-18 12:27:21 +02:00
Daniel García
0807783388 Add ip on totp miss 2020-05-14 00:19:50 +02:00
Daniel García
80d4061d14 Update dependencies 2020-05-14 00:18:18 +02:00
Daniel García
dc2f8e5c85 Merge pull request #994 from jjlin/help-text
Update startup banner to direct usage/config questions to the forum
2020-05-13 22:34:30 +02:00
Daniel García
aee1ea032b Merge pull request #989 from theycallmesteve/update_responses
Update responses
2020-05-13 22:34:16 +02:00
Daniel García
484e82fb9f Merge pull request #988 from theycallmesteve/rename_functions
Rename functions
2020-05-13 22:34:06 +02:00
Jeremy Lin
322a08edfb Update startup banner to direct usage/config questions to the forum 2020-05-13 12:29:47 -07:00
theycallmesteve
08afc312c3 Add missing items to profileOrganization response model 2020-05-08 13:39:17 -04:00
theycallmesteve
5571a5d8ed Update post_keys to return a keys response model 2020-05-08 13:38:49 -04:00
theycallmesteve
6a8c65493f Rename collection_user_details to collection_read_only to reflect the response model 2020-05-08 13:37:40 -04:00
theycallmesteve
dfdf4473ea Rename to_json_list to to_json_provder to reflect the response model 2020-05-08 13:36:35 -04:00
Daniel García
8bbbff7567 Merge pull request #987 from theycallmesteve/global_domains
GlobalEquivalentDomains updates from upstream bitwarden
2020-05-08 01:04:10 +02:00
theycallmesteve
42e37ebea1 Apply upstream global domain values and whitespace fixes 2020-05-07 18:05:17 -04:00
theycallmesteve
632f4d5453 Whitespace fixes 2020-05-07 18:02:37 -04:00
Daniel García
6c5e35ce5c Change the mails content types to more closely match what we sent before 2020-05-07 00:51:46 +02:00
Daniel García
4ff15f6dc2 Merge pull request #978 from AltiUP/patch-1
Delete the call to the map file
2020-05-03 22:30:06 +02:00
Daniel García
ec8028aef2 Merge pull request #979 from jjlin/admin-redirect
Use absolute URIs for admin page redirects
2020-05-03 22:27:09 +02:00
Daniel García
63cbd9ef9c Update lettre to latest master 2020-05-03 17:41:53 +02:00
Daniel García
9cca64003a Remove unused dependency and simple feature, update dependencies and fix some clippy lints 2020-05-03 17:24:51 +02:00
Jeremy Lin
819d5e2dc8 Use absolute URIs for admin page redirects
This is technically required per RFC 2616 (HTTP/1.1); some proxies will
rewrite a plain `/admin` path to an unexpected URL otherwise.
2020-05-01 00:31:47 -07:00
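A minimal sketch of the idea with a hypothetical helper; the real code builds the redirect target from the configured domain:

```rust
// Build an absolute admin URL from the configured DOMAIN instead of
// redirecting to the bare path `/admin`, so proxies that rewrite relative
// Location headers (RFC 2616 expects an absolute URI) behave correctly.
fn admin_url(domain: &str) -> String {
    // `domain` may already carry a sub-path, e.g. https://example.com/custom-path
    format!("{}/admin", domain.trim_end_matches('/'))
}

fn main() {
    assert_eq!(admin_url("https://example.com/"), "https://example.com/admin");
    assert_eq!(admin_url("https://example.com/bw"), "https://example.com/bw/admin");
}
```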
Christophe Gherardi
3b06ab296b Delete the call to the map file
The file bootstrap.css.map is missing, the reference can be deleted.
2020-04-30 19:41:58 +02:00
Daniel García
0de52c6c99 Merge pull request #957 from jjlin/domain-whitelist
Domain whitelist cleanup and fixes
2020-04-18 12:08:48 +02:00
Daniel García
e3b00b59a7 Initial support for soft deletes 2020-04-17 22:35:27 +02:00
Daniel García
5a390a973f Merge pull request #966 from BlackDex/issue-965
Fixed issue #965
2020-04-15 17:15:59 +02:00
BlackDex
1ee8e44912 Fixed issue #965
PostgreSQL updates/inserts ignored None/null values.
This is nice for new entries, but not for updates.
Added a derive option to always add these None/null values for Option<>
variables.

This solves issue #965
2020-04-15 16:49:33 +02:00
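A hedged sketch of the diesel derive option in question; the table and struct below are illustrative, not the actual bitwarden_rs schema:

```rust
#[macro_use]
extern crate diesel;

table! {
    ciphers (uuid) {
        uuid -> Text,
        name -> Text,
        notes -> Nullable<Text>,
    }
}

#[derive(AsChangeset)]
#[table_name = "ciphers"]
#[changeset_options(treat_none_as_null = "true")] // None => SET notes = NULL on update
struct CipherChangeset {
    name: String,
    notes: Option<String>, // without the option above, a None field is silently skipped
}

fn main() {}
```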
Jeremy Lin
86685c1cd2 Ensure email domain comparison is case-insensitive 2020-04-11 14:51:36 -07:00
Daniel García
e3feba2a2c Merge pull request #960 from jjlin/admin-token
Warn on empty `ADMIN_TOKEN` instead of bailing out
2020-04-11 23:34:37 +02:00
Jeremy Lin
0a68de6c24 Warn on empty ADMIN_TOKEN instead of bailing out
The admin page will still be disabled.

Fixes #849.
2020-04-09 20:55:08 -07:00
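A minimal sketch of the startup check described above (helper name is illustrative): an empty `ADMIN_TOKEN` no longer aborts startup, it just disables the admin page and logs a warning.

```rust
use std::env;

fn admin_token() -> Option<String> {
    match env::var("ADMIN_TOKEN") {
        Ok(token) if !token.trim().is_empty() => Some(token),
        Ok(_) => {
            eprintln!("WARNING: ADMIN_TOKEN is empty; the admin page is disabled");
            None
        }
        Err(_) => None, // unset: admin page disabled, nothing to warn about
    }
}

fn main() {
    println!("admin page enabled: {}", admin_token().is_some());
}
```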
Daniel García
4be8dae626 Make web vault show a more informative error when browsers block WebCrypto in insecure contexts and update dependencies 2020-04-09 22:54:31 +02:00
Jeremy Lin
e4d08836e2 Make org owner invitations respect the email domain whitelist
This closes a loophole where org owners can invite new users from any domain.
2020-04-09 01:51:05 -07:00
Jeremy Lin
c2a324e5da Clean up domain whitelist logic
* Make `SIGNUPS_DOMAINS_WHITELIST` override the `SIGNUPS_ALLOWED` setting.
  Otherwise, a common pitfall is to set `SIGNUPS_DOMAINS_WHITELIST` without
  realizing that `SIGNUPS_ALLOWED=false` must also be set.

* Whitespace is now accepted in `SIGNUPS_DOMAINS_WHITELIST`. That is,
  `foo.com, bar.com` is now equivalent to `foo.com,bar.com`.

* Add validation on `SIGNUPS_DOMAINS_WHITELIST`. For example, `foo.com,`
  is rejected as containing an empty token.
2020-04-09 01:42:27 -07:00
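A hedged sketch of the whitelist semantics described above (function names are illustrative): a non-empty `SIGNUPS_DOMAINS_WHITELIST` overrides `SIGNUPS_ALLOWED`, whitespace around entries is tolerated, empty tokens are rejected, and the domain comparison is case-insensitive.

```rust
fn parse_whitelist(raw: &str) -> Result<Vec<String>, String> {
    let domains: Vec<String> = raw.split(',').map(|d| d.trim().to_lowercase()).collect();
    if domains.iter().any(|d| d.is_empty()) {
        return Err(format!("empty token in SIGNUPS_DOMAINS_WHITELIST: {:?}", raw));
    }
    Ok(domains)
}

fn signup_allowed(email: &str, signups_allowed: bool, whitelist_raw: &str) -> bool {
    let domain = email.rsplit('@').next().unwrap_or("").to_lowercase();
    match whitelist_raw.trim() {
        "" => signups_allowed, // no whitelist configured: fall back to SIGNUPS_ALLOWED
        raw => parse_whitelist(raw)
            .map(|list| list.contains(&domain))
            .unwrap_or(false), // invalid whitelist: fail closed
    }
}

fn main() {
    assert!(signup_allowed("a@foo.com", false, "foo.com, bar.com")); // whitelist overrides
    assert!(!signup_allowed("a@baz.com", true, "foo.com,bar.com"));
    assert!(parse_whitelist("foo.com,").is_err()); // trailing comma -> empty token
}
```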
Daniel García
77f95146d6 Merge pull request #956 from jjlin/duo
Fix Duo auth failure with non-lowercased email addresses
2020-04-08 08:43:24 +02:00
Jeremy Lin
6cd8512bbd Fix Duo auth failure with non-lowercased email addresses 2020-04-07 20:40:51 -07:00
Daniel García
843604c9e7 Merge pull request #939 from jjlin/attachment-size
Fix attachment size limit calculation
2020-03-31 12:56:49 +02:00
Jeremy Lin
7407b8326a Fix attachment size limit calculation
The config values (in KB) need to be converted to bytes when comparing
against total attachment sizes.
2020-03-31 02:30:28 -07:00
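A small sketch of the comparison being fixed (function name is illustrative): the config limits are in KB, the stored sizes are in bytes, so convert before comparing.

```rust
fn upload_allowed(limit_kb: Option<i64>, current_total_bytes: i64, new_attachment_bytes: i64) -> bool {
    match limit_kb {
        None => true, // no limit configured
        Some(limit_kb) => {
            let limit_bytes = limit_kb * 1024; // the missing KB -> bytes conversion
            current_total_bytes + new_attachment_bytes <= limit_bytes
        }
    }
}

fn main() {
    // A 100 KB limit with 90 KB already stored rejects a 20 KB upload.
    assert!(!upload_allowed(Some(100), 90 * 1024, 20 * 1024));
    assert!(upload_allowed(Some(100), 50 * 1024, 20 * 1024));
}
```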
Daniel García
adf47827c9 Make sure the data field is always returned, otherwise the mobile apps seem to have issues 2020-03-30 22:19:50 +02:00
Daniel García
5471088e93 Merge pull request #933 from jjlin/dockerfiles
Rebuild Dockerfiles to match latest Dockerfile.j2 template
2020-03-27 17:45:10 +01:00
Daniel García
4e85a1dee1 Update web vault to 2.13.2 2020-03-27 17:44:10 +01:00
Daniel García
ec60839064 Merge pull request #932 from jjlin/ws-fix
Fix WebSocket notifications
2020-03-27 08:38:54 +01:00
Jeremy Lin
d4bfa1a189 Rebuild Dockerfiles to match latest Dockerfile.j2 template
Picks up a couple of missed changes from b837348b and ccf6ee79.
2020-03-26 20:10:33 -07:00
Jeremy Lin
862d401077 Fix WebSocket notifications
Ignore a missing `id` query param; it's unclear what this ID represents,
but it wasn't being used in the existing bitwarden_rs code, and no longer
seems to be sent in the latest versions of the official clients.
2020-03-26 19:26:44 -07:00
Daniel García
255a06382d Merge pull request #928 from jjlin/healthcheck
Healthcheck fixes/optimizations
2020-03-26 21:13:31 +01:00
Jeremy Lin
bbb0484d03 Healthcheck fixes/optimizations
* Switch healthcheck interval/timeout from 30s/3s to 60s/10s.
  30s interval is arguably overkill, and 3s timeout is definitely too short
  for lower end machines.
* Use HEALTHCHECK CMD exec form to avoid superfluous `sh` invocations.
* Add `--silent --show-error` flags to curl call to avoid progress meter being
  shown in healthcheck logs.
2020-03-25 20:13:36 -07:00
Daniel García
93346bc05d Merge pull request #927 from jjlin/healthcheck
Update healthcheck script to handle alternate base dir
2020-03-25 22:21:08 +01:00
Jeremy Lin
fdf50f0064 Update healthcheck script to handle alternate base dir 2020-03-24 20:00:35 -07:00
Daniel García
ccf6ee79d0 Update dependencies, mainly diesel and sqlite 2020-03-24 20:36:19 +01:00
Daniel García
91dd19473d Merge pull request #922 from jjlin/device-push-token
Handle `devicePushToken`
2020-03-23 00:03:10 +01:00
Jeremy Lin
c06162b22f Handle devicePushToken
Mobile push isn't currently supported, but this should get rid of spurious
`Detected unexpected parameter during login: devicepushtoken` warnings.
2020-03-22 15:04:25 -07:00
Daniel García
7a6a3e4160 Set the cargo version and allow changing it during build time with BWRS_VERSION.
Also renamed GIT_VERSION because that's not the only source anymore.
2020-03-22 16:13:34 +01:00
Daniel García
94341f9f3f Fix token error while accepting invite 2020-03-20 10:51:17 +01:00
Daniel García
ff19fb3426 Merge pull request #919 from BlackDex/issue-908
Fixed issue #908
2020-03-19 18:11:47 +01:00
BlackDex
baac8d9627 Fixed issue #908
The organization uuid is usually present in the URI path as a
parameter, but sometimes it is only there as a query value.

This fix checks both, and returns the uuid when possible.
2020-03-19 17:37:10 +01:00
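A hedged sketch of the lookup described above: try the organization uuid as a path segment first, then fall back to a query parameter. The parameter name and helper are illustrative, not the actual bitwarden_rs code.

```rust
fn org_uuid_from_request(path: &str, query: Option<&str>) -> Option<String> {
    // e.g. /api/organizations/<uuid>/collections
    if let Some(idx) = path.find("/organizations/") {
        let rest = &path[idx + "/organizations/".len()..];
        let uuid = rest.split('/').next().unwrap_or("");
        if !uuid.is_empty() {
            return Some(uuid.to_string());
        }
    }
    // e.g. ?organizationId=<uuid>
    query.and_then(|q| {
        q.split('&')
            .filter_map(|kv| kv.split_once('='))
            .find(|(k, _)| *k == "organizationId")
            .map(|(_, v)| v.to_string())
    })
}

fn main() {
    assert_eq!(
        org_uuid_from_request("/api/organizations/abc-123/users", None),
        Some("abc-123".into())
    );
    assert_eq!(
        org_uuid_from_request("/api/ciphers", Some("organizationId=abc-123")),
        Some("abc-123".into())
    );
}
```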
BlackDex
669b101e6a Fixing issue #908
Sometimes an org-uuid is not in the path but in a query value;
this fixes the check for that.
2020-03-19 16:50:47 +01:00
Daniel García
935f38692f Merge pull request #918 from dani-garcia/revert-901-feature/opportunistic_tls
Revert "Use opportunistic TLS in SMTP connections"
2020-03-19 13:58:00 +01:00
Daniel García
d2d9fb08cc Revert "Use opportunistic TLS in SMTP connections" 2020-03-19 13:56:53 +01:00
Daniel García
b85d548879 Merge pull request #916 from BlackDex/issue-759
Fixing issue #759 by disabling Foreign Key Checks.
2020-03-18 18:48:08 +01:00
BlackDex
35f30088b2 Fixing issue #759 by disabling Foreign Key Checks.
During migrations, some queries are out of order with regard to foreign
keys.
Because of this, the migrations fail when the SQL database enforces them
by default.
Turning off this check during the migrations fixes this, and the change
only applies per session.
2020-03-18 18:11:11 +01:00
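A hedged fragment (MySQL flavour, assuming diesel with the `mysql` feature) of the approach: relax foreign-key enforcement for the current connection before running the embedded migrations; the setting only lasts for that session.

```rust
use diesel::mysql::MysqlConnection;
use diesel::prelude::*;

fn relax_fk_checks(conn: &MysqlConnection) -> diesel::QueryResult<usize> {
    // Per-session setting; run the embedded diesel migrations on this same
    // connection right afterwards.
    diesel::sql_query("SET FOREIGN_KEY_CHECKS = 0;").execute(conn)
}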
Daniel García
dce054e632 Merge pull request #912 from ymage/openssl_as_default
Fix alpine build with openssl crate as default
2020-03-16 23:02:07 +01:00
Ymage
ba725e1c25 Make openssl crate as default (non feature-flipped) 2020-03-16 22:39:10 +01:00
Ymage
b837348b25 Build as static 2020-03-16 22:34:59 +01:00
Daniel García
7d9c7017c9 Merge pull request #911 from BlackDex/upgrade-rocket
Upgrade rocket
2020-03-16 18:17:17 +01:00
Daniel García
d6b9b8bf0c Merge pull request #876 from BlackDex/log-panics
Make panics logable (as warn)
2020-03-16 18:16:49 +01:00
BlackDex
bd09fe1a3d Updated code so backtraces are logged also. 2020-03-16 17:53:22 +01:00
BlackDex
bcbe6177b8 Merge branch 'master' of https://github.com/dani-garcia/bitwarden_rs into log-panics 2020-03-16 17:19:27 +01:00
BlackDex
9b1d07365e Updated ring
Some small changes to match the updated ring package.
2020-03-16 16:39:20 +01:00
BlackDex
37b212427c Updated jsonwebtoken
Updated to the latest version of jsonwebtoken.
Some small code changes to match the new versions.
2020-03-16 16:38:00 +01:00
BlackDex
078234d8b3 Small change for rocket compatibility 2020-03-16 16:36:44 +01:00
BlackDex
3ce0c3d1a5 Update dependencies
Primarily updating rocket, which needed some dependencies

Latest versions of:
 - ring
 - time
 - jsonwebtoken
 - yubico
 - rocket (git)
2020-03-16 16:32:33 +01:00
Daniel García
2ee07ea1d8 Fix empty data when cloning cipher 2020-03-15 17:26:34 +01:00
Daniel García
40c339db9b Fix postgres policies, second try 2020-03-14 23:53:12 +01:00
Daniel García
402c1cd06c Merge pull request #906 from BlackDex/upgrade-reqwest
Updated reqwest to the latest version.
2020-03-14 23:35:52 +01:00
Daniel García
819f340f39 Fix issue with postgres 2020-03-14 23:35:34 +01:00
BlackDex
1b4b40c95d Updated reqwest to the latest version.
- Use the blocking client (no async).
- Disabled gzip.
- use_sys_proxy is now default.
2020-03-14 23:12:45 +01:00
Daniel García
afd9f4e278 Allow the smtp mechanism to be provided without quotes and all lowercase 2020-03-14 22:31:41 +01:00
Daniel García
47a9461f39 Merge pull request #903 from TheBinaryLoop/patch-1
Updated domains with new vault values
2020-03-14 14:41:39 +01:00
Daniel García
c6f64d8368 Merge pull request #901 from sleweke/feature/opportunistic_tls
Use opportunistic TLS in SMTP connections
2020-03-14 14:41:00 +01:00
Daniel García
edabf19ddf Update vault to 2.13.1 2020-03-14 14:40:06 +01:00
Daniel García
a30d5f4cf9 Fix cloning issues 2020-03-14 14:08:57 +01:00
Daniel García
3fa78e7bb1 Initial version of policies 2020-03-14 13:32:28 +01:00
Lukas Eßmann
a8a7e4f9a5 Updated domains with new vault values
Added domains from the official vault.bitwarden.com
2020-03-13 20:08:52 +01:00
Samuel Leweke
5d3b765a23 Use opportunistic TLS in SMTP connections
If SSL is disabled, the SMTP ClientSecurity of the lettre crate
defaults to None, that is, an insecure connection. This is changed to
Opportunistic, which uses TLS if available. If TLS is not available,
the insecure connection is used (i.e., this change is backward
compatible).
2020-03-12 11:40:52 +01:00
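A simplified model of the selection this (later reverted) commit changes; the enum loosely mirrors lettre 0.9's `ClientSecurity` variants, and the TLS parameters the real code passes are omitted:

```rust
#[derive(Debug, PartialEq)]
enum SmtpSecurity {
    None,          // plaintext, never upgraded
    Opportunistic, // STARTTLS when offered, plaintext fallback (backward compatible)
    Required,      // STARTTLS mandatory
}

fn pick_security(smtp_ssl: bool) -> SmtpSecurity {
    if smtp_ssl {
        SmtpSecurity::Required
    } else {
        // Previously this branch returned SmtpSecurity::None.
        SmtpSecurity::Opportunistic
    }
}

fn main() {
    assert_eq!(pick_security(false), SmtpSecurity::Opportunistic);
    let _ = SmtpSecurity::None; // kept only to show the old default
}
```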
Daniel García
70f3ab8ec3 Migrate lazy_static to once_cell, less macro magic and slightly faster 2020-03-09 22:04:03 +01:00
Daniel García
b6612e90ca Update dependencies 2020-03-09 22:00:59 +01:00
Daniel García
161cccca30 Merge pull request #892 from BlackDex/smtp-test-button
Relocated SMTP test input+button.
2020-03-05 00:05:52 +01:00
BlackDex
84dc2eda1f Changed javascript default argument construction 2020-03-04 15:08:14 +01:00
BlackDex
390d10d656 Relocated SMTP test input+button.
- Moved smtp test option to within the "SMTP Email" Settings block.
- Added an option to prevent a full page reload.
- SMTP Test and Backup do not reload the admin interface any more.
2020-03-04 13:25:38 +01:00
Daniel García
1f775f4414 Merge pull request #888 from zethra/add_cli_args
Added command line flags for help and version
2020-03-03 00:12:15 +01:00
zethra
cc404b4edc Added command line flags for help and version
Signed-off-by: zethra <benaagoldberg@gmail.com>
2020-03-02 15:51:57 -05:00
Daniel García
536672ac1b Delete ISSUE_TEMPLATE.md 2020-03-02 19:58:53 +01:00
Daniel García
e41e7c07db Update issue templates 2020-03-02 19:58:36 +01:00
Daniel García
f1d3b03c60 Update README.md 2020-03-02 19:37:49 +01:00
Daniel García
2ebff958a4 Merge pull request #879 from BlackDex/smtp-test-button
Added SMTP test button in the admin gui
2020-03-01 15:00:20 +01:00
Daniel García
edfdda86ae Use web vault built by docker autobuild, using the hash to reference the image for extra security 2020-03-01 02:40:18 +01:00
BlackDex
97fb7b5b96 Added urlpath to smtpTest function 2020-02-26 16:58:57 +01:00
BlackDex
f6de144cbb Merge branch 'smtp-test-button' of github.com:BlackDex/bitwarden_rs into smtp-test-button 2020-02-26 16:56:03 +01:00
BlackDex
5a974c7b94 Added SMTP test button in the admin gui
- Added a test button for checking the e-mail settings.
- Fixed a bug with the _post JavaScript function:
  A function was overwritten with a variable, and errors such as a 500
  were not handled correctly.
2020-02-26 16:49:56 +01:00
BlackDex
5f61607419 Added SMTP test button in the admin gui
- Added a test button for checking the e-mail settings.
- Fixed a bug with the _post JavaScript function:
  A function was overwritten with a variable, and errors such as a 500
  were not handled correctly.
2020-02-26 11:02:22 +01:00
BlackDex
7439aeb63e Make panics logable (as warn)
panic!()'s only appear on stderr, which makes tracking down some strange
issues harder when using Docker, since stderr does not get logged into
the bitwarden.log file. This change logs the message to stdout and to
the logfile when activated.
2020-02-25 14:10:52 +01:00
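A hedged sketch of a panic hook that routes the panic message (and a backtrace, using the `backtrace` crate added in this changeset) through normal logging instead of only stderr; `println!` stands in for the project's `log`/`fern` setup.

```rust
use std::panic;

fn install_panic_logger() {
    panic::set_hook(Box::new(|info| {
        let message = info
            .payload()
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| info.payload().downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "Box<Any>".into());
        let location = info
            .location()
            .map(|l| format!("{}:{}", l.file(), l.line()))
            .unwrap_or_else(|| "unknown location".into());
        // Stand-in for the real logger so the message reaches stdout/logfile.
        println!("[WARN] panic at {}: {}\n{:?}", location, message, backtrace::Backtrace::new());
    }));
}

fn main() {
    install_panic_logger();
    let _ = panic::catch_unwind(|| panic!("demo panic"));
}
```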
Daniel García
cd8907542a Make sure the provided domain contains the protocol and show a useful error when it doesn't 2020-02-23 14:55:27 +01:00
Daniel García
8a5450e830 Merge pull request #868 from jjlin/alt-base
Add backend support for alternate base dir (subdir/subpath) hosting
2020-02-22 22:06:07 +01:00
Daniel García
ad9f2b2d8e Removed test urlpath 2020-02-22 19:01:58 +01:00
Daniel García
2f4a9865e1 Use absolute paths in the admin page 2020-02-22 17:49:33 +01:00
Daniel García
0a3008e753 Update web vault used in docker 2020-02-22 16:00:43 +01:00
Jeremy Lin
29a0795219 Add backend support for alternate base dir (subdir/subpath) hosting
To use this, include a path in the `DOMAIN` URL, e.g.:

* `DOMAIN=https://example.com/custom-path`
* `DOMAIN=https://example.com/multiple/levels/are/ok`
2020-02-18 21:27:00 -08:00
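A minimal sketch of deriving the mount path from `DOMAIN` so routes and static assets can be served under a sub-path; it uses the `url` crate purely for illustration, and the helper name is made up.

```rust
use url::Url;

fn base_path(domain: &str) -> String {
    match Url::parse(domain) {
        Ok(url) => url.path().trim_end_matches('/').to_string(),
        Err(_) => String::new(), // unparsable DOMAIN: fall back to hosting at the root
    }
}

fn main() {
    assert_eq!(base_path("https://example.com"), "");
    assert_eq!(base_path("https://example.com/custom-path"), "/custom-path");
    assert_eq!(base_path("https://example.com/multiple/levels/are/ok"), "/multiple/levels/are/ok");
}
```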
Daniel García
63459c5f72 Updated FUNDING as mentioned in #859 2020-02-18 21:48:11 +01:00
Daniel García
916e96b143 Update web vault to fix copy issues 2020-02-18 20:08:21 +01:00
Daniel García
325039c316 Attachment size limits, per-user and per-organization 2020-02-17 22:56:26 +01:00
Daniel García
c5b97f4146 Merge pull request #864 from mprasil/admin-invitation
Do not disable invitations via admin API
2020-02-16 22:12:00 +01:00
Miro Prasil
03233429f4 Remove check from Invitation:take()
I've checked the spots where `Invitation::new()` and `Invitation::take()`
are used, and it seems like all of them are already correctly gated. So, to
enable invitations via the admin API even when invitations are otherwise
disabled, this check can be removed.
2020-02-16 20:28:50 +00:00
Miroslav Prasil
0a72c4b6db Do not disable invitations via admin API
This was brought up today:

https://github.com/dani-garcia/bitwarden_rs/issues/752#issuecomment-586715073

I don't think it makes much sense to check whether the admin has the
right to send an invitation, as the admin can change the setting anyway.

Removing the condition allows forbidding regular users from inviting new
users to the server while still preserving the option to do so via the
admin API.
2020-02-16 15:01:07 +00:00
Daniel García
8867626de8 Add option to change invitation org name, fixes #825
Add option to allow additional iframe ancestors, fixes #843
Sort the rocket routes before printing them
2020-02-04 22:14:50 +01:00
Daniel García
f5916ec396 Fix backwards indices 2020-01-30 22:33:50 +01:00
Daniel García
ebb36235a7 Cache icons in the clients 2020-01-30 22:30:57 +01:00
Daniel García
def174a517 Convert email domains to punycode 2020-01-30 22:11:53 +01:00
Daniel García
2798f623d4 Updated rust toolchain version 2020-01-30 22:11:44 +01:00
Daniel García
480ba933fa Don't error if admin token is empty but disabled 2020-01-30 22:10:50 +01:00
Daniel García
3d1ee9ef62 Use rust-toolchain file to determine version in workflows, disabled mac builds for now 2020-01-29 19:26:06 +01:00
Daniel García
5352321fe1 Merge pull request #831 from mprasil/whitelist-fix
SIGNUPS_ALLOWED with no whitelist [fixes #830]
2020-01-29 13:28:07 +01:00
Miro Prasil
c4101162d6 SIGNUPS_ALLOWED with no whitelist [fixes #830]
This reverts back to `SIGNUPS_ALLOWED` when there is no domain whitelist
set. The functionality was broken in 64d6f72.
2020-01-29 11:32:42 +00:00
Daniel García
632d55265b Merge pull request #824 from tomuta/fix_change_email
Fix change email when no whitelist is configured
2020-01-28 20:52:16 +01:00
tomuta
e277f7d1c1 Fix change email when no whitelist is configured
Fixes issue #792
2020-01-26 13:34:56 -07:00
Daniel García
ff7b4a3d38 Update handlebars to 3.0 which included performance improvements.
Updated lettre to newer git revision, which should give better error messages now.
2020-01-26 15:29:14 +01:00
Daniel García
d212dfe735 Accept y/n, True/False, 1/0 as booleans in environment vars 2020-01-20 22:28:54 +01:00
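A small sketch of the looser boolean parsing described in the commit above; the exact set of accepted spellings is illustrative.

```rust
fn parse_bool(value: &str) -> Option<bool> {
    match value.trim().to_lowercase().as_str() {
        "true" | "t" | "yes" | "y" | "1" => Some(true),
        "false" | "f" | "no" | "n" | "0" => Some(false),
        _ => None, // anything else is rejected rather than silently treated as false
    }
}

fn main() {
    assert_eq!(parse_bool("True"), Some(true));
    assert_eq!(parse_bool("n"), Some(false));
    assert_eq!(parse_bool("maybe"), None);
}
```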
Daniel García
84ed185579 Update u2f to 0.2, which requires OpenSSL but also might solve the problems we've had with certificates.
The rust image doesn't need curl or tar installed, so they were removed. Also collapsed ENV lines.
2020-01-19 21:34:13 +01:00
Daniel García
c0ba3406ef Merge pull request #812 from swedishborgie/postgresql
Fixes #635 - Unique constraint violation when using U2F tokens on PostgreSQL
2020-01-16 16:21:57 +01:00
Michael Powers
e196ba6e86 Switch error handling to ? operator instead of explicit handling. 2020-01-16 08:14:25 -05:00
Michael Powers
76743aee48 Fixes #635 - Unique constraint violation when using U2F tokens on PostgreSQL
Because of differences in how .on_conflict() works compared to .replace_into(), the PostgreSQL backend wasn't correctly ensuring that the unique constraint on user_uuid and atype wasn't violated.

This change simply issues a DELETE on the unique constraint prior to the insert to ensure uniqueness. PostgreSQL does not support multiple constraints in ON CONFLICT clauses.
2020-01-13 21:53:57 -05:00
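A hedged sketch of the workaround described above, using diesel: since PostgreSQL's ON CONFLICT can only target one constraint, delete any row matching the (user_uuid, atype) uniqueness first, then insert. Table and column names are illustrative, not the actual bitwarden_rs schema.

```rust
#[macro_use]
extern crate diesel;

use diesel::pg::PgConnection;
use diesel::prelude::*;

table! {
    twofactor (uuid) {
        uuid -> Text,
        user_uuid -> Text,
        atype -> Integer,
        data -> Text,
    }
}

#[derive(Insertable)]
#[table_name = "twofactor"]
struct TwoFactorRow {
    uuid: String,
    user_uuid: String,
    atype: i32,
    data: String,
}

fn save(conn: &PgConnection, row: TwoFactorRow) -> diesel::QueryResult<usize> {
    use self::twofactor::dsl::*;
    // Emulate "replace" semantics: clear the row that would violate the
    // unique (user_uuid, atype) constraint, then insert the new one.
    diesel::delete(
        twofactor
            .filter(user_uuid.eq(row.user_uuid.as_str()))
            .filter(atype.eq(row.atype)),
    )
    .execute(conn)?;
    diesel::insert_into(twofactor).values(&row).execute(conn)
}
```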
Daniel García
9ebca99290 Update dependencies 2020-01-10 18:37:16 +01:00
Daniel García
a734ad2d36 Add contributor 2020-01-10 18:36:36 +01:00
Daniel García
baf7d1be4e Delete old workflow file and disable building binaries on pull requests, as we already have CI for that 2020-01-05 22:46:34 +01:00
Daniel García
31bcd1bf7c Merge pull request #784 from ypid/docker/use-debian-base
Use Debian base image for all steps of the build process
2020-01-05 22:42:43 +01:00
Daniel García
a3b30ed65a Add missing target armv7 and cross compile envs 2020-01-05 22:41:58 +01:00
Robin Schneider
402c857d17 Add hint to Dockerfile's that they are generated 2020-01-03 22:07:56 +01:00
Robin Schneider
def858854b Readd missing cargo build for armv7. Thanks to @dani-garcia! 2020-01-03 22:00:45 +01:00
Robin Schneider
f6761ac30e Remove debugging echo statement from Dockerfiles 2020-01-01 15:17:33 +01:00
Robin Schneider
f8e49ea3f4 Use apt-get instead of apt in Dockerfiles, also --no-install-recommends
apt is intended for humans, not scripts.

--no-install-recommends improves build time by avoiding the installation
of unneeded packages.
2019-12-31 16:46:08 +01:00
Robin Schneider
f6a4a2127b Remove duplicate empty lines in generated Dockerfiles
Checked with:

```Shell
find . -type f -print0 | xargs -0 pcregrep -M '\n\n\n'
```
2019-12-31 16:33:00 +01:00
Robin Schneider
446fc3f1f8 Set build time options for dpkg and reproducible builds
Ref: https://github.com/moby/moby/issues/4032
Ref: https://sweetcode.io/using-docker-reproducible-build-environments/
Ref: https://github.com/hashbang/aosp-build/blob/master/config/container/Dockerfile
2019-12-31 16:33:00 +01:00
Robin Schneider
146525db91 Improve Jinja2 template logic a bit 2019-12-31 16:33:00 +01:00
Robin Schneider
1698b43f9b Readd missing cargo setup for armv7. Thanks to @dani-garcia! 2019-12-31 16:33:00 +01:00
Robin Schneider
078b21db85 Fix armv6 build, thanks to @dani-garcia for the review! 2019-12-31 16:33:00 +01:00
Robin Schneider
43adcde094 Move rustup target before cargo build. Thanks to @dani-garcia!
Note from @dani-garcia:

> I don't think this is doing anything right now because the target is probably
> installed already.
2019-12-31 16:32:59 +01:00
Daniel García
7a0bb18dcf Make cargo new independent of workdir to be exact
The muslrust images seem to have a workdir of /volume as opposed to / in the
others so doing cargo new like this would create the folder in /volume/app.
2019-12-31 16:32:59 +01:00
Robin Schneider
47a5a4e1fc Fix package name for Ubuntu 16.04 based image. Thanks @dani-garcia! 2019-12-31 16:32:59 +01:00
Robin Schneider
0f0e5876ae Move dpkg --add-architecture before the first apt call
Thanks to @dani-garcia for the review!
2019-12-31 16:32:59 +01:00
Robin Schneider
43aa75dc89 Fix cross platform build support, thanks to @dani-garcia for the review 2019-12-31 16:32:59 +01:00
Robin Schneider
8280d200ea Generate Dockerfiles from one source for maintainability. Closes #785. 2019-12-28 22:52:20 +01:00
Robin Schneider
f250c54813 WIP: Use Debian base image for all steps of the build process
No need to use two different base images. Debian buster is pulled later
anyway so we can just use it for the vault stage as well.

My reason for this change is partly to avoid redundancy and partly to
make it easier to build everything yourself. When the whole build
environment is based on Debian, you just have to figure out how to
build a Debian Docker base image (ref:
https://github.com/ypid/docker-makefile).
2019-12-28 14:43:08 +01:00
110 changed files with 6833 additions and 4853 deletions


@@ -185,6 +185,7 @@
 # SMTP_FROM_NAME=Bitwarden_RS
 # SMTP_PORT=587
 # SMTP_SSL=true
+# SMTP_EXPLICIT_TLS=true # N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851)
 # SMTP_USERNAME=username
 # SMTP_PASSWORD=password
 # SMTP_AUTH_MECHANISM="Plain"

.github/FUNDING.yml vendored

@@ -1 +1,2 @@
 github: dani-garcia
+custom: ["https://paypal.me/DaniGG"]


@@ -1,3 +1,12 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+---
 <!--
 Please fill out the following template to make solving your problem easier and faster for us.
 This is only a guideline. If you think that parts are unneccessary for your issue, feel free to remove them.


@@ -0,0 +1,11 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your feature requests to the forum
Link: https://bitwardenrs.discourse.group/c/feature-requests


@@ -0,0 +1,11 @@
---
name: Help with installation/configuration
about: Any questions about the setup of bitwarden_rs
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your third party help requests to the forum
Link: https://bitwardenrs.discourse.group/c/help


@@ -0,0 +1,11 @@
---
name: Help with proxy/database/NAS setup
about: Any questions about third party software
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your third party help requests to the forum
Link: https://bitwardenrs.discourse.group/c/third-party-help


@@ -1,70 +0,0 @@
name: build-windows
on: [push, pull_request]
jobs:
build:
runs-on: windows-latest
strategy:
matrix:
db-backend: [sqlite, mysql, postgresql]
steps:
- uses: actions/checkout@v1
- name: Cache choco cache
uses: actions/cache@v1.0.3
with:
path: ~\AppData\Local\Temp\chocolatey
key: ${{ runner.os }}-choco-cache
- name: Install dependencies
run: choco install openssl sqlite postgresql12 mysql
- name: Cache cargo registry
uses: actions/cache@v1.0.3
with:
path: ~/.cargo/registry
key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
- name: Cache cargo index
uses: actions/cache@v1.0.3
with:
path: ~/.cargo/git
key: ${{ runner.os }}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
- name: Cache cargo build
uses: actions/cache@v1.0.3
with:
path: target
key: ${{ runner.os }}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
- name: Install latest nightly
uses: actions-rs/toolchain@v1
with:
toolchain: nightly
override: true
profile: minimal
target: x86_64-pc-windows-msvc
- name: Build
run: cargo.exe build --verbose --features ${{ matrix.db-backend }} --release --target x86_64-pc-windows-msvc
env:
OPENSSL_DIR: C:\Program Files\OpenSSL-Win64\
- name: Run tests
run: cargo test --features ${{ matrix.db-backend }}
- name: Upload windows artifact
uses: actions/upload-artifact@v1.0.0
with:
name: x86_64-pc-windows-msvc-${{ matrix.db-backend }}-bitwarden_rs
path: target/release/bitwarden_rs.exe
- name: Release
uses: Shopify/upload-to-release@1.0.0
if: startsWith(github.ref, 'refs/tags/')
with:
name: x86_64-pc-windows-msvc-${{ matrix.db-backend }}-bitwarden_rs
path: target/release/bitwarden_rs.exe
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -4,9 +4,9 @@ on:
push: push:
paths-ignore: paths-ignore:
- "**.md" - "**.md"
pull_request: #pull_request:
paths-ignore: # paths-ignore:
- "**.md" # - "**.md"
jobs: jobs:
build: build:
@@ -18,7 +18,7 @@ jobs:
target: target:
- x86_64-unknown-linux-gnu - x86_64-unknown-linux-gnu
# - x86_64-unknown-linux-musl # - x86_64-unknown-linux-musl
- x86_64-apple-darwin # - x86_64-apple-darwin
# - x86_64-pc-windows-msvc # - x86_64-pc-windows-msvc
include: include:
- target: x86_64-unknown-linux-gnu - target: x86_64-unknown-linux-gnu
@@ -27,9 +27,9 @@ jobs:
# - target: x86_64-unknown-linux-musl # - target: x86_64-unknown-linux-musl
# os: ubuntu-latest # os: ubuntu-latest
# ext: # ext:
- target: x86_64-apple-darwin # - target: x86_64-apple-darwin
os: macOS-latest # os: macOS-latest
ext: # ext:
# - target: x86_64-pc-windows-msvc # - target: x86_64-pc-windows-msvc
# os: windows-latest # os: windows-latest
# ext: .exe # ext: .exe
@@ -110,10 +110,9 @@ jobs:
key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }} key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
- name: Install latest nightly - name: Install latest nightly
uses: actions-rs/toolchain@v1 uses: actions-rs/toolchain@v1.0.5
with: with:
toolchain: nightly # Uses rust-toolchain to determine version
override: true
profile: minimal profile: minimal
target: ${{ matrix.target }} target: ${{ matrix.target }}

Cargo.lock generated

File diff suppressed because it is too large


@@ -14,7 +14,7 @@ build = "build.rs"
# Empty to keep compatibility, prefer to set USE_SYSLOG=true # Empty to keep compatibility, prefer to set USE_SYSLOG=true
enable_syslog = [] enable_syslog = []
mysql = ["diesel/mysql", "diesel_migrations/mysql"] mysql = ["diesel/mysql", "diesel_migrations/mysql"]
postgresql = ["diesel/postgres", "diesel_migrations/postgres", "openssl"] postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"] sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"]
[target."cfg(not(windows))".dependencies] [target."cfg(not(windows))".dependencies]
@@ -26,7 +26,7 @@ rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "0.5.0-dev" rocket_contrib = "0.5.0-dev"
# HTTP client # HTTP client
reqwest = "0.9.24" reqwest = { version = "0.10.6", features = ["blocking", "json"] }
# multipart/form-data support # multipart/form-data support
multipart = { version = "0.16.1", features = ["server"], default-features = false } multipart = { version = "0.16.1", features = ["server"], default-features = false }
@@ -35,92 +35,92 @@ multipart = { version = "0.16.1", features = ["server"], default-features = fals
ws = "0.9.1" ws = "0.9.1"
# MessagePack library # MessagePack library
rmpv = "0.4.3" rmpv = "0.4.4"
# Concurrent hashmap implementation # Concurrent hashmap implementation
chashmap = "2.2.2" chashmap = "2.2.2"
# A generic serialization/deserialization framework # A generic serialization/deserialization framework
serde = "1.0.104" serde = "1.0.111"
serde_derive = "1.0.104" serde_derive = "1.0.111"
serde_json = "1.0.44" serde_json = "1.0.53"
# Logging # Logging
log = "0.4.8" log = "0.4.8"
fern = { version = "0.5.9", features = ["syslog-4"] } fern = { version = "0.6.0", features = ["syslog-4"] }
# A safe, extensible ORM and Query builder # A safe, extensible ORM and Query builder
diesel = { version = "1.4.3", features = [ "chrono", "r2d2"] } diesel = { version = "1.4.4", features = [ "chrono", "r2d2"] }
diesel_migrations = "1.4.0" diesel_migrations = "1.4.0"
# Bundled SQLite # Bundled SQLite
libsqlite3-sys = { version = "0.16.0", features = ["bundled"], optional = true } libsqlite3-sys = { version = "0.17.3", features = ["bundled"], optional = true }
# Crypto library # Crypto library
ring = "0.14.6" ring = "0.16.14"
# UUID generation # UUID generation
uuid = { version = "0.8.1", features = ["v4"] } uuid = { version = "0.8.1", features = ["v4"] }
# Date and time library for Rust # Date and time librar for Rust
chrono = "0.4.10" chrono = "0.4.11"
time = "0.2.16"
# TOTP library # TOTP library
oath = "0.10.2" oath = "0.10.2"
# Data encoding library # Data encoding library
data-encoding = "2.1.2" data-encoding = "2.2.1"
# JWT library # JWT library
jsonwebtoken = "6.0.1" jsonwebtoken = "7.1.0"
# U2F library # U2F library
u2f = "0.1.6" u2f = "0.2.0"
# Yubico Library # Yubico Library
yubico = { version = "0.7.1", features = ["online-tokio"], default-features = false } yubico = { version = "0.9.0", features = ["online-tokio"], default-features = false }
# A `dotenv` implementation for Rust # A `dotenv` implementation for Rust
dotenv = { version = "0.15.0", default-features = false } dotenv = { version = "0.15.0", default-features = false }
# Lazy static macro # Lazy initialization
lazy_static = "1.4.0" once_cell = "1.4.0"
# More derives
derive_more = "0.99.2"
# Numerical libraries # Numerical libraries
num-traits = "0.2.10" num-traits = "0.2.11"
num-derive = "0.3.0" num-derive = "0.3.0"
# Email libraries # Email libraries
lettre = "0.9.2" lettre = { version = "0.10.0-alpha.1", features = ["smtp-transport", "builder", "serde", "native-tls"], default-features = false }
lettre_email = "0.9.2" native-tls = "0.2.4"
native-tls = "0.2.3"
quoted_printable = "0.4.1"
# Template library # Template library
handlebars = "=2.0.2" handlebars = { version = "3.0.1", features = ["dir_source"] }
# For favicon extraction from main website # For favicon extraction from main website
soup = "0.4.1" soup = "0.5.0"
regex = "1.3.1" regex = "1.3.9"
data-url = "0.1.0" data-url = "0.1.0"
# Required for SSL support for PostgreSQL # Used by U2F, JWT and Postgres
openssl = { version = "0.10.26", optional = true } openssl = "0.10.29"
# URL encoding library # URL encoding library
percent-encoding = "2.1.0" percent-encoding = "2.1.0"
# Punycode conversion
idna = "0.2.0"
# CLI argument parsing
structopt = "0.3.14"
# Logging panics to logfile instead stderr only
backtrace = "0.3.48"
[patch.crates-io] [patch.crates-io]
# Use newest ring # Use newest ring
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'b95b6765e1cc8be7c1e7eaef8a9d9ad940b0ac13' } rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'b95b6765e1cc8be7c1e7eaef8a9d9ad940b0ac13' } rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
# Use git version for timeout fix #706
lettre = { git = 'https://github.com/lettre/lettre', rev = '24d694db3be017d82b1cdc8bf9da601420b31bb0' }
lettre_email = { git = 'https://github.com/lettre/lettre', rev = '24d694db3be017d82b1cdc8bf9da601420b31bb0' }
# For favicon extraction from main website # For favicon extraction from main website
data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '7f1bd6ce1c2fde599a757302a843a60e714c5f72' } data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '7f1bd6ce1c2fde599a757302a843a60e714c5f72' }


@@ -13,7 +13,7 @@ Image is based on [Rust implementation of Bitwarden API](https://github.com/dani
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.** **This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.**
#### ⚠️**IMPORTANT**⚠️: When using this server, please report any Bitwarden related bug-reports or suggestions [here](https://github.com/dani-garcia/bitwarden_rs/issues/new), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official support channels. #### ⚠️**IMPORTANT**⚠️: When using this server, please report any bugs or suggestions to us directly (look at the bottom of this page for ways to get in touch), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official support channels.
--- ---
@@ -21,14 +21,14 @@ Image is based on [Rust implementation of Bitwarden API](https://github.com/dani
Basically full implementation of Bitwarden API is provided including: Basically full implementation of Bitwarden API is provided including:
* Basic single user functionality * Single user functionality
* Organizations support * Organizations support
* Attachments * Attachments
* Vault API support * Vault API support
* Serving the static files for Vault interface * Serving the static files for Vault interface
* Website icons API * Website icons API
* Authenticator and U2F support * Authenticator and U2F support
* YubiKey OTP * YubiKey and Duo support
## Installation ## Installation
Pull the docker image and mount a volume from the host for persistent storage: Pull the docker image and mount a volume from the host for persistent storage:
@@ -49,12 +49,13 @@ If you have an available domain name, you can get HTTPS certificates with [Let's
See the [bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki) for more information on how to configure and run the bitwarden_rs server. See the [bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki) for more information on how to configure and run the bitwarden_rs server.
## Get in touch ## Get in touch
To ask a question, offer suggestions or new features or to get help configuring or installing the software, please [use the forum](https://bitwardenrs.discourse.group/).
To ask a question, [raising an issue](https://github.com/dani-garcia/bitwarden_rs/issues/new) is fine. Please also report any bugs spotted here. If you spot any bugs or crashes with bitwarden_rs itself, please [create an issue](https://github.com/dani-garcia/bitwarden_rs/issues/). Make sure there aren't any similar issues open, though!
If you prefer to chat, we're usually hanging around at [#bitwarden_rs:matrix.org](https://matrix.to/#/#bitwarden_rs:matrix.org) room on Matrix. Feel free to join us! If you prefer to chat, we're usually hanging around at [#bitwarden_rs:matrix.org](https://matrix.to/#/#bitwarden_rs:matrix.org) room on Matrix. Feel free to join us!
### Sponsors ### Sponsors
Thanks for your contribution to the project! Thanks for your contribution to the project!
- [@Skaronator](https://github.com/Skaronator) - [@ChonoN](https://github.com/ChonoN)


@@ -1,4 +1,5 @@
use std::process::Command; use std::process::Command;
use std::env;
fn main() { fn main() {
#[cfg(all(feature = "sqlite", feature = "mysql"))] #[cfg(all(feature = "sqlite", feature = "mysql"))]
@@ -10,8 +11,13 @@ fn main() {
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))] #[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!("You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"); compile_error!("You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite");
read_git_info().ok(); if let Ok(version) = env::var("BWRS_VERSION") {
println!("cargo:rustc-env=BWRS_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
} else {
read_git_info().ok();
}
} }
fn run(args: &[&str]) -> Result<String, std::io::Error> { fn run(args: &[&str]) -> Result<String, std::io::Error> {
@@ -54,14 +60,16 @@ fn read_git_info() -> Result<(), std::io::Error> {
} else { } else {
format!("{}-{}", last_tag, rev_short) format!("{}-{}", last_tag, rev_short)
}; };
println!("cargo:rustc-env=GIT_VERSION={}", version);
println!("cargo:rustc-env=BWRS_VERSION={}", version);
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", version);
// To access these values, use: // To access these values, use:
// env!("GIT_EXACT_TAG") // env!("GIT_EXACT_TAG")
// env!("GIT_LAST_TAG") // env!("GIT_LAST_TAG")
// env!("GIT_BRANCH") // env!("GIT_BRANCH")
// env!("GIT_REV") // env!("GIT_REV")
// env!("GIT_VERSION") // env!("BWRS_VERSION")
Ok(()) Ok(())
} }

docker/Dockerfile.j2 Normal file

@@ -0,0 +1,296 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
{% set build_stage_base_image = "rust:1.40" %}
{% if "alpine" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2020-03-09" %}
{% set runtime_stage_base_image = "alpine:3.11" %}
{% set package_arch_name = "" %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %}
{% set package_arch_name = "" %}
{% elif "aarch64" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set package_arch_name = "arm64" %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set package_arch_name = "armel" %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set package_arch_name = "armhf" %}
{% endif %}
{% set package_arch_prefix = ":" + package_arch_name %}
{% if package_arch_name == "" %}
{% set package_arch_prefix = "" %}
{% endif %}
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_image_hash = "sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5" %}
{% raw %}
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.14.0
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
{% endraw %}
FROM bitwardenrs/web-vault@{{ vault_image_hash }} as vault
########################## BUILD IMAGE ##########################
{% if "musl" in build_stage_base_image %}
# Musl build image for statically compiled binary
{% else %}
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
{% endif %}
FROM {{ build_stage_base_image }} as build
{% if "sqlite" in target_file %}
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
{% elif "mysql" in target_file %}
# set mysql backend
ARG DB=mysql
{% elif "postgresql" in target_file %}
# set postgresql backend
ARG DB=postgresql
{% endif %}
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
{% if "alpine" in target_file %}
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
{% elif "aarch64" in target_file or "armv" in target_file %}
# Install required build libs for {{ package_arch_name }} architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture {{ package_arch_name }} \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev{{ package_arch_prefix }} \
libc6-dev{{ package_arch_prefix }}
{% endif -%}
{% if "aarch64" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "armv6" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "armv6" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "armv7" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif %}
{% if "mysql" in target_file %}
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
{% if "musl" in build_stage_base_image %}
libmysqlclient-dev{{ package_arch_prefix }} \
{% else %}
libmariadb-dev{{ package_arch_prefix }} \
{% endif %}
&& rm -rf /var/lib/apt/lists/*
{% elif "postgresql" in target_file %}
# Install PostgreSQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libpq-dev{{ package_arch_prefix }} \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
{% if "aarch64" in target_file %}
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
{% elif "armv6" in target_file %}
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
{% elif "armv7" in target_file %}
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
{% endif -%}
{% if "alpine" in target_file %}
RUN rustup target add x86_64-unknown-linux-musl
{% elif "aarch64" in target_file %}
RUN rustup target add aarch64-unknown-linux-gnu
{% elif "armv6" in target_file %}
RUN rustup target add arm-unknown-linux-gnueabi
{% elif "armv7" in target_file %}
RUN rustup target add armv7-unknown-linux-gnueabihf
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
{% if "amd64" in target_file %}
RUN cargo build --features ${DB} --release
{% elif "aarch64" in target_file %}
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
{% elif "armv6" in target_file %}
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
{% elif "armv7" in target_file %}
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
{% endif %}
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM {{ runtime_stage_base_image }}
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
{% if "alpine" in runtime_stage_base_image %}
ENV SSL_CERT_DIR=/etc/ssl/certs
{% endif %}
{% if "amd64" not in target_file %}
RUN [ "cross-build-start" ]
{% endif %}
# Install needed libraries
{% if "alpine" in runtime_stage_base_image %}
RUN apk add --no-cache \
openssl \
curl \
{% if "sqlite" in target_file %}
sqlite \
{% elif "mysql" in target_file %}
mariadb-connector-c \
{% elif "postgresql" in target_file %}
postgresql-libs \
{% endif %}
ca-certificates
{% else %}
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
{% if "sqlite" in target_file %}
sqlite3 \
{% elif "mysql" in target_file %}
libmariadbclient-dev \
{% elif "postgresql" in target_file %}
libpq5 \
{% endif %}
&& rm -rf /var/lib/apt/lists/*
{% endif %}
RUN mkdir /data
{% if "amd64" not in target_file %}
RUN [ "cross-build-end" ]
{% endif %}
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
{% if "alpine" in target_file %}
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
{% elif "aarch64" in target_file %}
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
{% elif "armv6" in target_file %}
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
{% elif "armv7" in target_file %}
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
{% else %}
COPY --from=build app/target/release/bitwarden_rs .
{% endif %}
COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/bitwarden_rs"]

docker/Makefile Normal file

@@ -0,0 +1,9 @@
OBJECTS := $(shell find -mindepth 2 -name 'Dockerfile*')
all: $(OBJECTS)
%/Dockerfile: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
%/Dockerfile.alpine: Dockerfile.j2 render_template
./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"


@@ -1,24 +1,21 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
@@ -28,9 +25,22 @@ FROM rust:1.40 as build
# set mysql backend # set mysql backend
ARG DB=mysql ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for arm64 architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,30 +52,42 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:arm64 \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl arm64 libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture arm64 \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64 \
libmariadb-dev:arm64
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu" ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add aarch64-unknown-linux-gnu RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -86,14 +108,15 @@ RUN apt-get update && apt-get install -y \
ca-certificates \ ca-certificates \
curl \ curl \
libmariadbclient-dev \ libmariadbclient-dev \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
RUN mkdir /data RUN mkdir /data
RUN [ "cross-build-end" ] RUN [ "cross-build-end" ]
VOLUME /data VOLUME /data
EXPOSE 80 EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
@@ -101,9 +124,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs . COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,36 +1,46 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling # we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build FROM rust:1.40 as build
# set sqlite as default for DB ARG for backward comaptibility # set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for arm64 architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,29 +52,36 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl arm64 libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture arm64 \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu" ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add aarch64-unknown-linux-gnu RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -85,14 +102,15 @@ RUN apt-get update && apt-get install -y \
ca-certificates \ ca-certificates \
curl \ curl \
sqlite3 \ sqlite3 \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
RUN mkdir /data RUN mkdir /data
RUN [ "cross-build-end" ] RUN [ "cross-build-end" ]
VOLUME /data VOLUME /data
EXPOSE 80 EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
@@ -100,9 +118,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs . COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,24 +1,21 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
@@ -28,6 +25,9 @@ FROM rust:1.40 as build
# set mysql backend # set mysql backend
ARG DB=mysql ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
@@ -38,7 +38,7 @@ RUN apt-get update && apt-get install -y \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies # Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin app RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files # Copies over *only* your manifests and build files
@@ -92,9 +92,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs . COPY --from=build app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,55 +1,70 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary # Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2019-12-19 as build FROM clux/muslrust:nightly-2020-03-09 as build
# set mysql backend # set mysql backend
ARG DB=mysql ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Install needed libraries # Install MySQL package
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
libmysqlclient-dev \ libmysqlclient-dev \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
RUN rustup target add x86_64-unknown-linux-musl
# Make sure that we actually build the project # Make sure that we actually build the project
RUN touch src/main.rs RUN touch src/main.rs
# Build # Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -65,8 +80,8 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
# Install needed libraries # Install needed libraries
RUN apk add --no-cache \ RUN apk add --no-cache \
openssl \ openssl \
mariadb-connector-c \
curl \ curl \
mariadb-connector-c \
ca-certificates ca-certificates
RUN mkdir /data RUN mkdir /data
@@ -80,9 +95,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs . COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,50 +1,44 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling # we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build FROM rust:1.40 as build
# set mysql backend # set postgresql backend
ARG DB=postgresql ARG DB=postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Using bundled SQLite, no need to install it # Install PostgreSQL package
# RUN apt-get update && apt-get install -y\
# --no-install-recommends \
# sqlite3\
# && rm -rf /var/lib/apt/lists/*
# Install MySQL package
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
libpq-dev \ libpq-dev \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies # Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin app RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files # Copies over *only* your manifests and build files
@@ -84,7 +78,6 @@ RUN apt-get update && apt-get install -y \
openssl \ openssl \
ca-certificates \ ca-certificates \
curl \ curl \
sqlite3 \
libpq5 \ libpq5 \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
@@ -99,9 +92,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs . COPY --from=build app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,55 +1,70 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary # Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2019-12-19 as build FROM clux/muslrust:nightly-2020-03-09 as build
# set postgresql backend # set postgresql backend
ARG DB=postgresql ARG DB=postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Install needed libraries # Install PostgreSQL package
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
libpq-dev \ libpq-dev \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
RUN rustup target add x86_64-unknown-linux-musl
# Make sure that we actually build the project # Make sure that we actually build the project
RUN touch src/main.rs RUN touch src/main.rs
# Build # Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -65,9 +80,8 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
# Install needed libraries # Install needed libraries
RUN apk add --no-cache \ RUN apk add --no-cache \
openssl \ openssl \
postgresql-libs \
curl \ curl \
sqlite \ postgresql-libs \
ca-certificates ca-certificates
RUN mkdir /data RUN mkdir /data
@@ -81,9 +95,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs . COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,38 +1,38 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling # we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build FROM rust:1.40 as build
# set sqlite as default for DB ARG for backward comaptibility # set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Creates a dummy project used to grab dependencies # Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin app RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files # Copies over *only* your manifests and build files
@@ -86,9 +86,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs . COPY --from=build app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,49 +1,64 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary # Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2019-12-19 as build FROM clux/muslrust:nightly-2020-03-09 as build
# set sqlite as default for DB ARG for backward comaptibility # set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
RUN rustup target add x86_64-unknown-linux-musl
# Make sure that we actually build the project # Make sure that we actually build the project
RUN touch src/main.rs RUN touch src/main.rs
# Build # Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -74,10 +89,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs . COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,24 +1,21 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
@@ -28,9 +25,22 @@ FROM rust:1.40 as build
# set mysql backend # set mysql backend
ARG DB=mysql ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for armel architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,30 +52,42 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:armel \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl armel libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture armel \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel \
libmariadb-dev:armel
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi" ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
RUN rustup target add arm-unknown-linux-gnueabi
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add arm-unknown-linux-gnueabi RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -90,10 +112,11 @@ RUN apt-get update && apt-get install -y \
RUN mkdir /data RUN mkdir /data
RUN [ "cross-build-end" ] RUN [ "cross-build-end" ]
VOLUME /data VOLUME /data
EXPOSE 80 EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
@@ -101,9 +124,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs . COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,36 +1,46 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling # we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build FROM rust:1.40 as build
# set sqlite as default for DB ARG for backward comaptibility # set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for armel architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,29 +52,36 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl armel libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture armel \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi" ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
RUN rustup target add arm-unknown-linux-gnueabi
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add arm-unknown-linux-gnueabi RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -89,10 +106,11 @@ RUN apt-get update && apt-get install -y \
RUN mkdir /data RUN mkdir /data
RUN [ "cross-build-end" ] RUN [ "cross-build-end" ]
VOLUME /data VOLUME /data
EXPOSE 80 EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
@@ -100,9 +118,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs . COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,24 +1,21 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
@@ -28,9 +25,22 @@ FROM rust:1.40 as build
# set mysql backend # set mysql backend
ARG DB=mysql ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for armhf architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,31 +52,41 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:armhf \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl armhf libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture armhf \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf \
libmariadb-dev:armhf
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf" ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add armv7-unknown-linux-gnueabihf RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -91,10 +111,11 @@ RUN apt-get update && apt-get install -y \
RUN mkdir /data RUN mkdir /data
RUN [ "cross-build-end" ] RUN [ "cross-build-end" ]
VOLUME /data VOLUME /data
EXPOSE 80 EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
@@ -102,9 +123,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs . COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
COPY docker/healthcheck.sh ./healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR / WORKDIR /

View File

@@ -1,36 +1,46 @@
# Using multistage build: # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
FROM alpine:3.11 as vault
ENV VAULT_VERSION "v2.12.0b" # This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
ENV URL "https://github.com/dani-garcia/bw_web_builds/releases/download/$VAULT_VERSION/bw_web_$VAULT_VERSION.tar.gz" # - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
RUN apk add --no-cache --upgrade \ # docker pull bitwardenrs/web-vault:v2.14.0
curl \ # docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.14.0
tar #
# - To do the opposite, and get the tag from the hash, you can do:
RUN mkdir /web-vault # docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5
WORKDIR /web-vault FROM bitwardenrs/web-vault@sha256:c62e0b8698562e03fe2759374f3ecd760d9612e4eaf0af4583f231ebf05d6df5 as vault
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN curl -L $URL | tar xz
RUN ls
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because # We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling # we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build FROM rust:1.40 as build
# set sqlite as default for DB ARG for backward comaptibility # set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs # Don't download rust docs
RUN rustup set profile minimal RUN rustup set profile minimal
# Install required build libs for armhf architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y \ && apt-get install -y \
--no-install-recommends \ --no-install-recommends \
@@ -42,29 +52,35 @@ RUN apt-get update \
ENV CARGO_HOME "/root/.cargo" ENV CARGO_HOME "/root/.cargo"
ENV USER "root" ENV USER "root"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
# Prepare openssl armhf libs # Copies over *only* your manifests and build files
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \ COPY ./Cargo.* ./
/etc/apt/sources.list.d/deb-src.list \ COPY ./rust-toolchain ./rust-toolchain
&& dpkg --add-architecture armhf \ COPY ./build.rs ./build.rs
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1" ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf" ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
# To avoid copying unneeded files, use .dockerignore # To avoid copying unneeded files, use .dockerignore
COPY . . COPY . .
# Build # Make sure that we actually build the project
RUN rustup target add armv7-unknown-linux-gnueabihf RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ######################## ######################## RUNTIME IMAGE ########################
@@ -89,10 +105,11 @@ RUN apt-get update && apt-get install -y \
RUN mkdir /data
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
@@ -100,9 +117,9 @@ COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
-COPY docker/healthcheck.sh ./healthcheck.sh
+COPY docker/healthcheck.sh /healthcheck.sh
-HEALTHCHECK --interval=30s --timeout=3s CMD sh healthcheck.sh || exit 1
+HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /

docker/healthcheck.sh Normal file → Executable file

@@ -1,8 +1,53 @@
-#!/usr/bin/env sh
-if [ -z "$ROCKET_TLS"]
-then
-curl --fail http://localhost:${ROCKET_PORT:-"80"}/alive || exit 1
-else
-curl --insecure --fail https://localhost:${ROCKET_PORT:-"80"}/alive || exit 1
-fi
+#!/bin/sh
+# Use the value of the corresponding env var (if present),
+# or a default value otherwise.
+: ${DATA_FOLDER:="data"}
+: ${ROCKET_PORT:="80"}
+CONFIG_FILE="${DATA_FOLDER}"/config.json
# Given a config key, return the corresponding config value from the
# config file. If the key doesn't exist, return an empty string.
get_config_val() {
local key="$1"
# Extract a line of the form:
# "domain": "https://bw.example.com/path",
grep "\"${key}\":" "${CONFIG_FILE}" |
# To extract just the value (https://bw.example.com/path), delete:
# (1) everything up to and including the first ':',
# (2) whitespace and '"' from the front,
# (3) ',' and '"' from the back.
sed -e 's/[^:]\+://' -e 's/^[ "]\+//' -e 's/[,"]\+$//'
}
# Extract the base path from a domain URL. For example:
# - `` -> ``
# - `https://bw.example.com` -> ``
# - `https://bw.example.com/` -> ``
# - `https://bw.example.com/path` -> `/path`
# - `https://bw.example.com/multi/path` -> `/multi/path`
get_base_path() {
echo "$1" |
# Delete:
# (1) everything up to and including '://',
# (2) everything up to '/',
# (3) trailing '/' from the back.
sed -e 's|.*://||' -e 's|[^/]\+||' -e 's|/*$||'
}
# Read domain URL from config.json, if present.
if [ -r "${CONFIG_FILE}" ]; then
domain="$(get_config_val 'domain')"
if [ -n "${domain}" ]; then
# config.json 'domain' overrides the DOMAIN env var.
DOMAIN="${domain}"
fi
fi
base_path="$(get_base_path "${DOMAIN}")"
if [ -n "${ROCKET_TLS}" ]; then
s='s'
fi
curl --insecure --fail --silent --show-error \
"http${s}://localhost:${ROCKET_PORT}${base_path}/alive" || exit 1

docker/render_template Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env python3
import os, argparse, json
import jinja2
args_parser = argparse.ArgumentParser()
args_parser.add_argument('template_file', help='Jinja2 template file to render.')
args_parser.add_argument('render_vars', help='JSON-encoded data to pass to the templating engine.')
cli_args = args_parser.parse_args()
render_vars = json.loads(cli_args.render_vars)
environment = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.getcwd()),
trim_blocks=True,
)
print(environment.get_template(cli_args.template_file).render(render_vars))


@@ -0,0 +1 @@
DROP TABLE org_policies;


@@ -0,0 +1,9 @@
CREATE TABLE org_policies (
uuid CHAR(36) NOT NULL PRIMARY KEY,
org_uuid CHAR(36) NOT NULL REFERENCES organizations (uuid),
atype INTEGER NOT NULL,
enabled BOOLEAN NOT NULL,
data TEXT NOT NULL,
UNIQUE (org_uuid, atype)
);


@@ -0,0 +1,3 @@
ALTER TABLE ciphers
ADD COLUMN
deleted_at DATETIME;


@@ -0,0 +1 @@
DROP TABLE org_policies;


@@ -0,0 +1,9 @@
CREATE TABLE org_policies (
uuid CHAR(36) NOT NULL PRIMARY KEY,
org_uuid CHAR(36) NOT NULL REFERENCES organizations (uuid),
atype INTEGER NOT NULL,
enabled BOOLEAN NOT NULL,
data TEXT NOT NULL,
UNIQUE (org_uuid, atype)
);


@@ -0,0 +1,3 @@
ALTER TABLE ciphers
ADD COLUMN
deleted_at TIMESTAMP;


@@ -0,0 +1 @@
DROP TABLE org_policies;


@@ -0,0 +1,9 @@
CREATE TABLE org_policies (
uuid TEXT NOT NULL PRIMARY KEY,
org_uuid TEXT NOT NULL REFERENCES organizations (uuid),
atype INTEGER NOT NULL,
enabled BOOLEAN NOT NULL,
data TEXT NOT NULL,
UNIQUE (org_uuid, atype)
);


@@ -0,0 +1,3 @@
ALTER TABLE ciphers
ADD COLUMN
deleted_at DATETIME;


@@ -1 +1 @@
-nightly-2019-12-19
+nightly-2020-05-31


@@ -1,4 +1,6 @@
use once_cell::sync::Lazy;
use serde_json::Value;
use serde::de::DeserializeOwned;
use std::process::Command;
use rocket::http::{Cookie, Cookies, SameSite};
@@ -13,16 +15,17 @@ use crate::config::ConfigBuilder;
use crate::db::{backup_database, models::*, DbConn};
use crate::error::Error;
use crate::mail;
use crate::util::get_display_size;
use crate::CONFIG;
pub fn routes() -> Vec<Route> {
-if CONFIG.admin_token().is_none() && !CONFIG.disable_admin_token() {
+if !CONFIG.disable_admin_token() && !CONFIG.is_admin_token_set() {
return routes![admin_disabled];
}
routes![
admin_login,
-get_users,
+get_users_json,
post_admin_login,
admin_page,
invite_user,
@@ -34,12 +37,15 @@ pub fn routes() -> Vec<Route> {
post_config,
delete_config,
backup_db,
test_smtp,
users_overview,
organizations_overview,
diagnostics,
]
}
-lazy_static! {
-static ref CAN_BACKUP: bool = cfg!(feature = "sqlite") && Command::new("sqlite3").arg("-version").status().is_ok();
-}
+static CAN_BACKUP: Lazy<bool> =
+Lazy::new(|| cfg!(feature = "sqlite") && Command::new("sqlite3").arg("-version").status().is_ok());
#[get("/")] #[get("/")]
fn admin_disabled() -> &'static str { fn admin_disabled() -> &'static str {
@@ -50,13 +56,25 @@ const COOKIE_NAME: &str = "BWRS_ADMIN";
const ADMIN_PATH: &str = "/admin";
const BASE_TEMPLATE: &str = "admin/base";
-const VERSION: Option<&str> = option_env!("GIT_VERSION");
+const VERSION: Option<&str> = option_env!("BWRS_VERSION");
fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
}
/// Used for `Location` response headers, which must specify an absolute URI
/// (see https://tools.ietf.org/html/rfc2616#section-14.30).
fn admin_url() -> String {
// Don't use CONFIG.domain() directly, since the user may want to keep a
// trailing slash there, particularly when running under a subpath.
format!("{}{}{}", CONFIG.domain_origin(), CONFIG.domain_path(), ADMIN_PATH)
}
#[get("/", rank = 2)] #[get("/", rank = 2)]
fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> { fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> {
// If there is an error, show it // If there is an error, show it
let msg = flash.map(|msg| format!("{}: {}", msg.name(), msg.msg())); let msg = flash.map(|msg| format!("{}: {}", msg.name(), msg.msg()));
let json = json!({"page_content": "admin/login", "version": VERSION, "error": msg}); let json = json!({"page_content": "admin/login", "version": VERSION, "error": msg, "urlpath": CONFIG.domain_path()});
// Return the page // Return the page
let text = CONFIG.render_template(BASE_TEMPLATE, &json)?; let text = CONFIG.render_template(BASE_TEMPLATE, &json)?;
@@ -76,7 +94,7 @@ fn post_admin_login(data: Form<LoginForm>, mut cookies: Cookies, ip: ClientIp) -
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
Err(Flash::error(
-Redirect::to(ADMIN_PATH),
+Redirect::to(admin_url()),
"Invalid admin token, please try again.",
))
} else {
@@ -85,14 +103,14 @@ fn post_admin_login(data: Form<LoginForm>, mut cookies: Cookies, ip: ClientIp) -
let jwt = encode_jwt(&claims);
let cookie = Cookie::build(COOKIE_NAME, jwt)
-.path(ADMIN_PATH)
+.path(admin_path())
-.max_age(chrono::Duration::minutes(20))
+.max_age(time::Duration::minutes(20))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
cookies.add(cookie);
-Ok(Redirect::to(ADMIN_PATH))
+Ok(Redirect::to(admin_url()))
}
}
@@ -107,21 +125,69 @@ fn _validate_token(token: &str) -> bool {
struct AdminTemplateData {
page_content: String,
version: Option<&'static str>,
-users: Vec<Value>,
+users: Option<Vec<Value>>,
organizations: Option<Vec<Value>>,
diagnostics: Option<Value>,
config: Value,
can_backup: bool,
logged_in: bool,
urlpath: String,
}
impl AdminTemplateData {
-fn new(users: Vec<Value>) -> Self {
+fn new() -> Self {
Self {
-page_content: String::from("admin/page"),
+page_content: String::from("admin/settings"),
version: VERSION,
-users,
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
users: None,
organizations: None,
diagnostics: None,
}
}
fn users(users: Vec<Value>) -> Self {
Self {
page_content: String::from("admin/users"),
version: VERSION,
users: Some(users),
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
organizations: None,
diagnostics: None,
}
}
fn organizations(organizations: Vec<Value>) -> Self {
Self {
page_content: String::from("admin/organizations"),
version: VERSION,
organizations: Some(organizations),
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
users: None,
diagnostics: None,
}
}
fn diagnostics(diagnostics: Value) -> Self {
Self {
page_content: String::from("admin/diagnostics"),
version: VERSION,
organizations: None,
config: CONFIG.prepare_json(),
can_backup: *CAN_BACKUP,
logged_in: true,
urlpath: CONFIG.domain_path(),
users: None,
diagnostics: Some(diagnostics),
}
}
@@ -131,11 +197,8 @@ impl AdminTemplateData {
}
#[get("/", rank = 1)]
-fn admin_page(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
+fn admin_page(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
-let users = User::get_all(&conn);
-let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect();
-let text = AdminTemplateData::new(users_json).render()?;
+let text = AdminTemplateData::new().render()?;
Ok(Html(text))
}
@@ -153,36 +216,58 @@ fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> Empt
err!("User already exists") err!("User already exists")
} }
if !CONFIG.invitations_allowed() {
err!("Invitations are not allowed")
}
let mut user = User::new(email);
user.save(&conn)?;
if CONFIG.mail_enabled() {
-let org_name = "bitwarden_rs";
-mail::send_invite(&user.email, &user.uuid, None, None, &org_name, None)
+mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)
} else {
let invitation = Invitation::new(data.email);
invitation.save(&conn)
}
}
#[post("/test/smtp", data = "<data>")]
fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
let data: InviteData = data.into_inner();
if CONFIG.mail_enabled() {
mail::send_test(&data.email)
} else {
err!("Mail is not enabled")
}
}
#[get("/logout")] #[get("/logout")]
fn logout(mut cookies: Cookies) -> Result<Redirect, ()> { fn logout(mut cookies: Cookies) -> Result<Redirect, ()> {
cookies.remove(Cookie::named(COOKIE_NAME)); cookies.remove(Cookie::named(COOKIE_NAME));
Ok(Redirect::to(ADMIN_PATH)) Ok(Redirect::to(admin_url()))
} }
#[get("/users")] #[get("/users")]
fn get_users(_token: AdminToken, conn: DbConn) -> JsonResult { fn get_users_json(_token: AdminToken, conn: DbConn) -> JsonResult {
let users = User::get_all(&conn); let users = User::get_all(&conn);
let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect(); let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect();
Ok(Json(Value::Array(users_json))) Ok(Json(Value::Array(users_json)))
} }
#[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn);
let users_json: Vec<Value> = users.iter()
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
usr
}).collect();
let text = AdminTemplateData::users(users_json).render()?;
Ok(Html(text))
}
#[post("/users/<uuid>/delete")] #[post("/users/<uuid>/delete")]
fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn delete_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let user = match User::find_by_uuid(&uuid, &conn) { let user = match User::find_by_uuid(&uuid, &conn) {
@@ -223,6 +308,109 @@ fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn)
}
#[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations.iter().map(|o| {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org
}).collect();
let text = AdminTemplateData::organizations(organizations_json).render()?;
Ok(Html(text))
}
#[derive(Deserialize)]
struct WebVaultVersion {
version: String,
}
#[derive(Deserialize)]
struct GitRelease {
tag_name: String,
}
#[derive(Deserialize)]
struct GitCommit {
sha: String,
}
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
use reqwest::{header::USER_AGENT, blocking::Client};
use std::time::Duration;
let github_api = Client::builder().build()?;
Ok(
github_api.get(url)
.timeout(Duration::from_secs(10))
.header(USER_AGENT, "Bitwarden_RS")
.send()?
.error_for_status()?
.json::<T>()?
)
}
#[get("/diagnostics")]
fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
use std::net::ToSocketAddrs;
use chrono::prelude::*;
use crate::util::read_file_string;
let vault_version_path = format!("{}/{}", CONFIG.web_vault_folder(), "version.json");
let vault_version_str = read_file_string(&vault_version_path)?;
let web_vault_version: WebVaultVersion = serde_json::from_str(&vault_version_str)?;
let github_ips = ("github.com", 0).to_socket_addrs().map(|mut i| i.next());
let (dns_resolved, dns_ok) = match github_ips {
Ok(Some(a)) => (a.ip().to_string(), true),
_ => ("Could not resolve domain name.".to_string(), false),
};
// If the DNS Check failed, do not even attempt to check for new versions since we were not able to resolve github.com
let (latest_release, latest_commit, latest_web_build) = if dns_ok {
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bitwarden_rs/releases/latest") {
Ok(r) => r.tag_name,
_ => "-".to_string()
},
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/bitwarden_rs/commits/master") {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
},
_ => "-".to_string()
},
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string()
},
)
} else {
("-".to_string(), "-".to_string(), "-".to_string())
};
// Run the date check as the last item right before filling the json.
// This should ensure that the time difference between the browser and the server is as minimal as possible.
let dt = Utc::now();
let server_time = dt.format("%Y-%m-%d %H:%M:%S").to_string();
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
"server_time": server_time,
"web_vault_version": web_vault_version.version,
"latest_release": latest_release,
"latest_commit": latest_commit,
"latest_web_build": latest_web_build,
});
let text = AdminTemplateData::diagnostics(diagnostics_json).render()?;
Ok(Html(text))
}
#[post("/config", data = "<data>")] #[post("/config", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult { fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner(); let data: ConfigBuilder = data.into_inner();


@@ -68,7 +68,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
let mut user = match User::find_by_mail(&data.Email, &conn) {
Some(user) => {
if !user.password_hash.is_empty() {
-if CONFIG.signups_allowed() {
+if CONFIG.is_signup_allowed(&data.Email) {
err!("User already exists")
} else {
err!("Registration not allowed or user already exists")
@@ -89,14 +89,17 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
user
-} else if CONFIG.signups_allowed() {
+} else if CONFIG.is_signup_allowed(&data.Email) {
err!("Account with this email already exists")
} else {
err!("Registration not allowed or user already exists")
}
}
None => {
-if CONFIG.signups_allowed() || Invitation::take(&data.Email, &conn) || CONFIG.can_signup_user(&data.Email) {
+// Order is important here; the invitation check must come first
+// because the bitwarden_rs admin can invite anyone, regardless
+// of other signup restrictions.
+if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) {
User::new(data.Email.clone())
} else {
err!("Registration not allowed or user already exists")
@@ -207,7 +210,12 @@ fn post_keys(data: JsonUpcase<KeysData>, headers: Headers, conn: DbConn) -> Json
user.public_key = Some(data.PublicKey);
user.save(&conn)?;
-Ok(Json(user.to_json(&conn)))
Ok(Json(json!({
"PrivateKey": user.private_key,
"PublicKey": user.public_key,
"Object":"keys"
})))
}
#[derive(Deserialize)]
@@ -371,8 +379,8 @@ fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, conn: Db
err!("Email already in use"); err!("Email already in use");
} }
if !CONFIG.signups_allowed() && !CONFIG.can_signup_user(&data.NewEmail) { if !CONFIG.is_email_domain_allowed(&data.NewEmail) {
err!("Email cannot be changed to this address"); err!("Email domain not allowed");
} }
let token = crypto::generate_token(6)?; let token = crypto::generate_token(6)?;


@@ -49,10 +49,16 @@ pub fn routes() -> Vec<Route> {
put_cipher,
delete_cipher_post,
delete_cipher_post_admin,
delete_cipher_put,
delete_cipher_put_admin,
delete_cipher,
delete_cipher_admin,
delete_cipher_selected,
delete_cipher_selected_post,
delete_cipher_selected_put,
restore_cipher_put,
restore_cipher_put_admin,
restore_cipher_selected,
delete_all,
move_cipher_selected,
move_cipher_selected_put,
@@ -79,6 +85,9 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
let collections = Collection::find_by_user_uuid(&headers.user.uuid, &conn);
let collections_json: Vec<Value> = collections.iter().map(Collection::to_json).collect();
let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
let ciphers = Cipher::find_by_user(&headers.user.uuid, &conn);
let ciphers_json: Vec<Value> = ciphers
.iter()
@@ -95,6 +104,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
"Profile": user_json, "Profile": user_json,
"Folders": folders_json, "Folders": folders_json,
"Collections": collections_json, "Collections": collections_json,
"Policies": policies_json,
"Ciphers": ciphers_json, "Ciphers": ciphers_json,
"Domains": domains_json, "Domains": domains_json,
"Object": "sync" "Object": "sync"
@@ -264,7 +274,10 @@ pub fn update_cipher_from_data(
};
if saved_att.cipher_uuid != cipher.uuid {
-err!("Attachment is not owned by the cipher")
+// Warn and break here since cloning ciphers provides attachment data but will not be cloned.
+// If we error out here it will break the whole cloning and causes empty ciphers to appear.
+warn!("Attachment is not owned by the cipher");
+break;
}
saved_att.akey = Some(attachment.Key);
@@ -599,10 +612,14 @@ fn share_cipher_by_uuid(
None => err!("Cipher doesn't exist"), None => err!("Cipher doesn't exist"),
}; };
let mut shared_to_collection = false;
match data.Cipher.OrganizationId.clone() {
-None => err!("Organization id not provided"),
+// If we don't get an organization ID, we don't do anything
+// No error because this is used when using the Clone functionality
+None => {},
Some(organization_uuid) => {
-let mut shared_to_collection = false;
for uuid in &data.CollectionIds {
match Collection::find_by_uuid_and_org(uuid, &organization_uuid, &conn) {
None => err!("Invalid collection ID provided"),
@@ -616,19 +633,20 @@ fn share_cipher_by_uuid(
}
}
}
-update_cipher_from_data(
-&mut cipher,
-data.Cipher,
-&headers,
-shared_to_collection,
-&conn,
-&nt,
-UpdateType::CipherUpdate,
-)?;
-Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}
-}
+};
update_cipher_from_data(
&mut cipher,
data.Cipher,
&headers,
shared_to_collection,
&conn,
&nt,
UpdateType::CipherUpdate,
)?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}
#[post("/ciphers/<uuid>/attachment", format = "multipart/form-data", data = "<data>")] #[post("/ciphers/<uuid>/attachment", format = "multipart/form-data", data = "<data>")]
@@ -642,20 +660,49 @@ fn post_attachment(
) -> JsonResult {
let cipher = match Cipher::find_by_uuid(&uuid, &conn) {
Some(cipher) => cipher,
-None => err!("Cipher doesn't exist"),
+None => err_discard!("Cipher doesn't exist", data),
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
-err!("Cipher is not write accessible")
+err_discard!("Cipher is not write accessible", data)
}
let mut params = content_type.params();
let boundary_pair = params.next().expect("No boundary provided");
let boundary = boundary_pair.1;
let size_limit = if let Some(ref user_uuid) = cipher.user_uuid {
match CONFIG.user_attachment_limit() {
Some(0) => err_discard!("Attachments are disabled", data),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, &conn);
if left <= 0 {
err_discard!("Attachment size limit reached! Delete some files to open space", data)
}
Some(left as u64)
}
None => None,
}
} else if let Some(ref org_uuid) = cipher.organization_uuid {
match CONFIG.org_attachment_limit() {
Some(0) => err_discard!("Attachments are disabled", data),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, &conn);
if left <= 0 {
err_discard!("Attachment size limit reached! Delete some files to open space", data)
}
Some(left as u64)
}
None => None,
}
} else {
err_discard!("Cipher is neither owned by a user nor an organization", data);
};
let base_path = Path::new(&CONFIG.attachments_folder()).join(&cipher.uuid);
let mut attachment_key = None;
let mut error = None;
Multipart::with_body(data.open(), boundary)
.foreach_entry(|mut field| {
@@ -674,18 +721,21 @@ fn post_attachment(
let file_name = HEXLOWER.encode(&crypto::get_random(vec![0; 10]));
let path = base_path.join(&file_name);
-let size = match field.data.save().memory_threshold(0).size_limit(None).with_path(path) {
+let size = match field.data.save().memory_threshold(0).size_limit(size_limit).with_path(path.clone()) {
SaveResult::Full(SavedData::File(_, size)) => size as i32,
SaveResult::Full(other) => {
-error!("Attachment is not a file: {:?}", other);
+std::fs::remove_file(path).ok();
+error = Some(format!("Attachment is not a file: {:?}", other));
return;
}
SaveResult::Partial(_, reason) => {
-error!("Partial result: {:?}", reason);
+std::fs::remove_file(path).ok();
+error = Some(format!("Attachment size limit exceeded with this file: {:?}", reason));
return;
}
SaveResult::Error(e) => {
-error!("Error: {:?}", e);
+std::fs::remove_file(path).ok();
+error = Some(format!("Error: {:?}", e));
return;
}
};
@@ -699,6 +749,10 @@ fn post_attachment(
})
.expect("Error processing multipart data");
if let Some(ref e) = error {
err!(e);
}
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
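The size_limit computed earlier in this hunk is the configured per-user or per-organization cap (in KB) minus the bytes already stored; if nothing is left the upload is rejected, otherwise the remainder is handed to size_limit() on the multipart save. A tiny self-contained sketch of that arithmetic, with made-up numbers (not project code):

// Illustrative sketch only: remaining attachment space, limit in KB, usage in bytes.
fn remaining_bytes(limit_kb: i64, used_bytes: i64) -> Option<u64> {
    let left = limit_kb * 1024 - used_bytes;
    if left <= 0 {
        None // limit reached: the upload is rejected
    } else {
        Some(left as u64) // handed to size_limit() so the save stops at the cap
    }
}

fn main() {
    assert_eq!(remaining_bytes(100, 60 * 1024), Some(40 * 1024));
    assert_eq!(remaining_bytes(100, 120 * 1024), None);
}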
@@ -716,11 +770,7 @@ fn post_attachment_admin(
post_attachment(uuid, data, content_type, headers, conn, nt)
}
-#[post(
-"/ciphers/<uuid>/attachment/<attachment_id>/share",
-format = "multipart/form-data",
-data = "<data>"
-)]
+#[post("/ciphers/<uuid>/attachment/<attachment_id>/share", format = "multipart/form-data", data = "<data>")]
fn post_attachment_share(
uuid: String,
attachment_id: String,
@@ -774,48 +824,62 @@ fn delete_attachment_admin(
#[post("/ciphers/<uuid>/delete")] #[post("/ciphers/<uuid>/delete")]
fn delete_cipher_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn delete_cipher_post(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_cipher_by_uuid(&uuid, &headers, &conn, &nt) _delete_cipher_by_uuid(&uuid, &headers, &conn, false, &nt)
} }
#[post("/ciphers/<uuid>/delete-admin")] #[post("/ciphers/<uuid>/delete-admin")]
fn delete_cipher_post_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn delete_cipher_post_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_cipher_by_uuid(&uuid, &headers, &conn, &nt) _delete_cipher_by_uuid(&uuid, &headers, &conn, false, &nt)
}
#[put("/ciphers/<uuid>/delete")]
fn delete_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_cipher_by_uuid(&uuid, &headers, &conn, true, &nt)
}
#[put("/ciphers/<uuid>/delete-admin")]
fn delete_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_cipher_by_uuid(&uuid, &headers, &conn, true, &nt)
}
#[delete("/ciphers/<uuid>")]
fn delete_cipher(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
-_delete_cipher_by_uuid(&uuid, &headers, &conn, &nt)
+_delete_cipher_by_uuid(&uuid, &headers, &conn, false, &nt)
}
#[delete("/ciphers/<uuid>/admin")]
fn delete_cipher_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
-_delete_cipher_by_uuid(&uuid, &headers, &conn, &nt)
+_delete_cipher_by_uuid(&uuid, &headers, &conn, false, &nt)
}
#[delete("/ciphers", data = "<data>")]
fn delete_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
-let data: Value = data.into_inner().data;
+_delete_multiple_ciphers(data, headers, conn, false, nt)
let uuids = match data.get("Ids") {
Some(ids) => match ids.as_array() {
Some(ids) => ids.iter().filter_map(Value::as_str),
None => err!("Posted ids field is not an array"),
},
None => err!("Request missing ids field"),
};
for uuid in uuids {
if let error @ Err(_) = _delete_cipher_by_uuid(uuid, &headers, &conn, &nt) {
return error;
};
}
Ok(())
}
#[post("/ciphers/delete", data = "<data>")]
fn delete_cipher_selected_post(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
-delete_cipher_selected(data, headers, conn, nt)
+_delete_multiple_ciphers(data, headers, conn, false, nt)
}
#[put("/ciphers/delete", data = "<data>")]
fn delete_cipher_selected_put(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_multiple_ciphers(data, headers, conn, true, nt)
}
#[put("/ciphers/<uuid>/restore")]
fn restore_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
}
#[put("/ciphers/<uuid>/restore-admin")]
fn restore_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
}
#[put("/ciphers/restore", data = "<data>")]
fn restore_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_restore_multiple_ciphers(data, headers, conn, nt)
}
#[derive(Deserialize)]
@@ -929,8 +993,8 @@ fn delete_all(
}
}
-fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> EmptyResult {
+fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, soft_delete: bool, nt: &Notify) -> EmptyResult {
-let cipher = match Cipher::find_by_uuid(&uuid, &conn) {
+let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
@@ -939,11 +1003,74 @@ fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Not
err!("Cipher can't be deleted by user") err!("Cipher can't be deleted by user")
} }
cipher.delete(&conn)?; if soft_delete {
cipher.deleted_at = Some(chrono::Utc::now().naive_utc());
cipher.save(&conn)?;
} else {
cipher.delete(&conn)?;
}
nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(&conn)); nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(&conn));
Ok(()) Ok(())
} }
fn _delete_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, soft_delete: bool, nt: Notify) -> EmptyResult {
let data: Value = data.into_inner().data;
let uuids = match data.get("Ids") {
Some(ids) => match ids.as_array() {
Some(ids) => ids.iter().filter_map(Value::as_str),
None => err!("Posted ids field is not an array"),
},
None => err!("Request missing ids field"),
};
for uuid in uuids {
if let error @ Err(_) = _delete_cipher_by_uuid(uuid, &headers, &conn, soft_delete, &nt) {
return error;
};
}
Ok(())
}
fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> EmptyResult {
let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
err!("Cipher can't be restored by user")
}
cipher.deleted_at = None;
cipher.save(&conn)?;
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
Ok(())
}
fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let data: Value = data.into_inner().data;
let uuids = match data.get("Ids") {
Some(ids) => match ids.as_array() {
Some(ids) => ids.iter().filter_map(Value::as_str),
None => err!("Posted ids field is not an array"),
},
None => err!("Request missing ids field"),
};
for uuid in uuids {
if let error @ Err(_) = _restore_cipher_by_uuid(uuid, &headers, &conn, &nt) {
return error;
};
}
Ok(())
}
fn _delete_cipher_attachment_by_id(
uuid: &str,
attachment_id: &str,


@@ -50,7 +50,6 @@ fn get_folder(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct FolderData {
pub Name: String,
}


@@ -2,7 +2,7 @@ mod accounts;
mod ciphers;
mod folders;
mod organizations;
-pub(crate) mod two_factor;
+pub mod two_factor;
pub fn routes() -> Vec<Route> {
let mut mod_routes = routes![
@@ -146,10 +146,10 @@ fn hibp_breach(username: String) -> JsonResult {
username
);
-use reqwest::{header::USER_AGENT, Client};
+use reqwest::{header::USER_AGENT, blocking::Client};
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
-let hibp_client = Client::builder().use_sys_proxy().build()?;
+let hibp_client = Client::builder().build()?;
let res = hibp_client
.get(&url)
@@ -172,7 +172,7 @@ fn hibp_breach(username: String) -> JsonResult {
"BreachDate": "2019-08-18T00:00:00Z", "BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z", "AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username), "Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"LogoPath": "/bwrs_static/hibp.png", "LogoPath": "bwrs_static/hibp.png",
"PwnCount": 0, "PwnCount": 0,
"DataClasses": [ "DataClasses": [
"Error - No API key set!" "Error - No API key set!"


@@ -2,6 +2,7 @@ use rocket::request::Form;
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use num_traits::FromPrimitive;
use crate::api::{
EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, Notify, NumberOrString, PasswordData, UpdateType,
@@ -45,6 +46,10 @@ pub fn routes() -> Vec<Route> {
delete_user,
post_delete_user,
post_org_import,
list_policies,
list_policies_token,
get_policy,
put_policy,
]
}
@@ -369,7 +374,7 @@ fn get_collection_users(org_id: String, coll_id: String, _headers: AdminHeaders,
.map(|col_user| {
UserOrganization::find_by_user_and_org(&col_user.user_uuid, &org_id, &conn)
.unwrap()
-.to_json_collection_user_details(col_user.read_only)
+.to_json_read_only(col_user.read_only)
})
.collect();
@@ -480,7 +485,11 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
let user = match User::find_by_mail(&email, &conn) {
None => {
if !CONFIG.invitations_allowed() {
-err!(format!("User email does not exist: {}", email))
+err!(format!("User does not exist: {}", email))
}
if !CONFIG.is_email_domain_allowed(&email) {
err!("Email domain not eligible for invitations")
}
if !CONFIG.mail_enabled() {
@@ -830,22 +839,13 @@ struct RelationsData {
fn post_org_import(
query: Form<OrgIdData>,
data: JsonUpcase<ImportData>,
-headers: Headers,
+headers: AdminHeaders,
conn: DbConn,
nt: Notify,
) -> EmptyResult {
let data: ImportData = data.into_inner().data;
let org_id = query.into_inner().organization_id;
-let org_user = match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
-Some(user) => user,
-None => err!("User is not part of the organization"),
-};
-if org_user.atype < UserOrgType::Admin {
-err!("Only admins or owners can import into an organization")
-}
// Read and create the collections
let collections: Vec<_> = data
.Collections
@@ -866,6 +866,8 @@ fn post_org_import(
relations.push((relation.Key, relation.Value));
}
let headers: Headers = headers.into();
// Read and create the ciphers
let ciphers: Vec<_> = data
.Ciphers
@@ -901,3 +903,83 @@ fn post_org_import(
let mut user = headers.user;
user.update_revision(&conn)
}
#[get("/organizations/<org_id>/policies")]
fn list_policies(org_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
let policies = OrgPolicy::find_by_org(&org_id, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/organizations/<org_id>/policies/token?<token>")]
fn list_policies_token(org_id: String, token: String, conn: DbConn) -> JsonResult {
let invite = crate::auth::decode_invite(&token)?;
let invite_org_id = match invite.org_id {
Some(invite_org_id) => invite_org_id,
None => err!("Invalid token"),
};
if invite_org_id != org_id {
err!("Token doesn't match request organization");
}
// TODO: We receive the invite token as ?token=<>, validate it contains the org id
let policies = OrgPolicy::find_by_org(&org_id, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
"Object": "list",
"ContinuationToken": null
})))
}
#[get("/organizations/<org_id>/policies/<pol_type>")]
fn get_policy(org_id: String, pol_type: i32, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
Some(pt) => pt,
None => err!("Invalid policy type"),
};
let policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
Some(p) => p,
None => OrgPolicy::new(org_id, pol_type_enum, "{}".to_string()),
};
Ok(Json(policy.to_json()))
}
#[derive(Deserialize)]
struct PolicyData {
enabled: bool,
#[serde(rename = "type")]
_type: i32,
data: Value,
}
#[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")]
fn put_policy(org_id: String, pol_type: i32, data: Json<PolicyData>, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
let data: PolicyData = data.into_inner();
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
Some(pt) => pt,
None => err!("Invalid policy type"),
};
let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
Some(p) => p,
None => OrgPolicy::new(org_id, pol_type_enum, "{}".to_string()),
};
policy.enabled = data.enabled;
policy.data = serde_json::to_string(&data.data)?;
policy.save(&conn)?;
Ok(Json(policy.to_json()))
}


@@ -4,7 +4,7 @@ use rocket_contrib::json::Json;
use crate::api::core::two_factor::_generate_recover_code;
use crate::api::{EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData};
-use crate::auth::Headers;
+use crate::auth::{ClientIp, Headers};
use crate::crypto;
use crate::db::{
models::{TwoFactor, TwoFactorType},
@@ -20,6 +20,7 @@ pub fn routes() -> Vec<Route> {
activate_authenticator_put,
]
}
#[post("/two-factor/get-authenticator", data = "<data>")]
fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
@@ -53,7 +54,12 @@ struct EnableAuthenticatorData {
}
#[post("/two-factor/authenticator", data = "<data>")]
-fn activate_authenticator(data: JsonUpcase<EnableAuthenticatorData>, headers: Headers, conn: DbConn) -> JsonResult {
+fn activate_authenticator(
+data: JsonUpcase<EnableAuthenticatorData>,
+headers: Headers,
+ip: ClientIp,
+conn: DbConn,
+) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let key = data.Key;
@@ -76,7 +82,7 @@ fn activate_authenticator(data: JsonUpcase<EnableAuthenticatorData>, headers: He
}
// Validate the token provided with the key, and save new twofactor
-validate_totp_code(&user.uuid, token, &key.to_uppercase(), &conn)?;
+validate_totp_code(&user.uuid, token, &key.to_uppercase(), &ip, &conn)?;
_generate_recover_code(&mut user, &conn);
@@ -88,20 +94,31 @@ fn activate_authenticator(data: JsonUpcase<EnableAuthenticatorData>, headers: He
}
#[put("/two-factor/authenticator", data = "<data>")]
-fn activate_authenticator_put(data: JsonUpcase<EnableAuthenticatorData>, headers: Headers, conn: DbConn) -> JsonResult {
-activate_authenticator(data, headers, conn)
+fn activate_authenticator_put(
+data: JsonUpcase<EnableAuthenticatorData>,
+headers: Headers,
+ip: ClientIp,
+conn: DbConn,
+) -> JsonResult {
+activate_authenticator(data, headers, ip, conn)
}
-pub fn validate_totp_code_str(user_uuid: &str, totp_code: &str, secret: &str, conn: &DbConn) -> EmptyResult {
+pub fn validate_totp_code_str(
+user_uuid: &str,
+totp_code: &str,
+secret: &str,
+ip: &ClientIp,
+conn: &DbConn,
+) -> EmptyResult {
let totp_code: u64 = match totp_code.parse() {
Ok(code) => code,
_ => err!("TOTP code is not a number"),
};
-validate_totp_code(user_uuid, totp_code, secret, &conn)
+validate_totp_code(user_uuid, totp_code, secret, ip, &conn)
}
-pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, conn: &DbConn) -> EmptyResult {
+pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
use oath::{totp_raw_custom_time, HashType};
let decoded_secret = match BASE32.decode(secret.as_bytes()) {
@@ -143,11 +160,22 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, conn: &
twofactor.save(&conn)?;
return Ok(());
} else if generated == totp_code && time_step <= twofactor.last_used as i64 {
warn!("This or a TOTP code within {} steps back and forward has already been used!", steps); warn!(
err!(format!("Invalid TOTP code! Server time: {}", current_time.format("%F %T UTC"))); "This or a TOTP code within {} steps back and forward has already been used!",
steps
);
err!(format!(
"Invalid TOTP code! Server time: {} IP: {}",
current_time.format("%F %T UTC"),
ip.ip
));
} }
} }
// Else no valide code received, deny access // Else no valide code received, deny access
err!(format!("Invalid TOTP code! Server time: {}", current_time.format("%F %T UTC"))); err!(format!(
"Invalid TOTP code! Server time: {} IP: {}",
current_time.format("%F %T UTC"),
ip.ip
));
} }


@@ -2,7 +2,6 @@ use chrono::Utc;
use data_encoding::BASE64;
use rocket::Route;
use rocket_contrib::json::Json;
-use serde_json;
use crate::api::core::two_factor::_generate_recover_code;
use crate::api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, PasswordData};
@@ -21,9 +20,9 @@ pub fn routes() -> Vec<Route> {
#[derive(Serialize, Deserialize)]
struct DuoData {
-host: String,
+host: String, // Duo API hostname
-ik: String,
+ik: String, // integration key
-sk: String,
+sk: String, // secret key
}
impl DuoData {
@@ -187,9 +186,10 @@ fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbC
fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
const AGENT: &str = "bitwarden_rs:Duo/1.0 (Rust)";
-use reqwest::{header::*, Client, Method};
+use reqwest::{header::*, Method, blocking::Client};
use std::str::FromStr;
// https://duo.com/docs/authapi#api-details
let url = format!("https://{}{}", &data.host, path);
let date = Utc::now().to_rfc2822();
let username = &data.ik;
@@ -268,6 +268,10 @@ fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64
}
pub fn validate_duo_login(email: &str, response: &str, conn: &DbConn) -> EmptyResult {
// email is as entered by the user, so it needs to be normalized before
// comparison with auth_user below.
let email = &email.to_lowercase();
let split: Vec<&str> = response.split(':').collect();
if split.len() != 2 {
err!("Invalid response length"); err!("Invalid response length");


@@ -1,6 +1,5 @@
use rocket::Route;
use rocket_contrib::json::Json;
-use serde_json;
use crate::api::core::two_factor::_generate_recover_code;
use crate::api::{EmptyResult, JsonResult, JsonUpcase, PasswordData};
@@ -271,10 +270,10 @@ impl EmailTokenData {
/// Takes an email address and obscures it by replacing it with asterisks except two characters.
pub fn obscure_email(email: &str) -> String {
-let split: Vec<&str> = email.split('@').collect();
+let split: Vec<&str> = email.rsplitn(2, '@').collect();
-let mut name = split[0].to_string();
+let mut name = split[1].to_string();
-let domain = &split[1];
+let domain = &split[0];
let name_size = name.chars().count();
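The switch from split('@') to rsplitn(2, '@') matters for addresses containing more than one '@'; an illustrative sketch (not project code) of the difference:

// Illustrative only: why rsplitn(2, '@') is used instead of split('@').
fn main() {
    let email = "user@strange@example.com";

    // split('@') produces three pieces; treating [1] as the domain would be wrong.
    let naive: Vec<&str> = email.split('@').collect();
    assert_eq!(naive, vec!["user", "strange", "example.com"]);

    // rsplitn(2, '@') splits once, from the right: [domain, local part].
    let split: Vec<&str> = email.rsplitn(2, '@').collect();
    assert_eq!(split, vec!["example.com", "user@strange"]);
}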


@@ -1,7 +1,6 @@
use data_encoding::BASE32;
use rocket::Route;
use rocket_contrib::json::Json;
-use serde_json;
use serde_json::Value;
use crate::api::{JsonResult, JsonUpcase, NumberOrString, PasswordData};
@@ -12,11 +11,11 @@ use crate::db::{
DbConn,
};
-pub(crate) mod authenticator;
+pub mod authenticator;
-pub(crate) mod duo;
+pub mod duo;
-pub(crate) mod email;
+pub mod email;
-pub(crate) mod u2f;
+pub mod u2f;
-pub(crate) mod yubikey;
+pub mod yubikey;
pub fn routes() -> Vec<Route> {
let mut routes = routes![
@@ -39,7 +38,7 @@ pub fn routes() -> Vec<Route> {
#[get("/two-factor")] #[get("/two-factor")]
fn get_twofactor(headers: Headers, conn: DbConn) -> JsonResult { fn get_twofactor(headers: Headers, conn: DbConn) -> JsonResult {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn); let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn);
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_list).collect(); let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Ok(Json(json!({ Ok(Json(json!({
"Data": twofactors_json, "Data": twofactors_json,


@@ -1,6 +1,6 @@
use once_cell::sync::Lazy;
use rocket::Route;
use rocket_contrib::json::Json;
-use serde_json;
use serde_json::Value;
use u2f::messages::{RegisterResponse, SignResponse, U2fSignRequest};
use u2f::protocol::{Challenge, U2f};
@@ -18,10 +18,8 @@ use crate::CONFIG;
const U2F_VERSION: &str = "U2F_V2";
-lazy_static! {
-static ref APP_ID: String = format!("{}/app-id.json", &CONFIG.domain());
-static ref U2F: U2f = U2f::new(APP_ID.clone());
-}
+static APP_ID: Lazy<String> = Lazy::new(|| format!("{}/app-id.json", &CONFIG.domain()));
+static U2F: Lazy<U2f> = Lazy::new(|| U2f::new(APP_ID.clone()));
pub fn routes() -> Vec<Route> {
routes![
@@ -92,6 +90,7 @@ struct RegistrationDef {
key_handle: Vec<u8>,
pub_key: Vec<u8>,
attestation_cert: Option<Vec<u8>>,
device_name: Option<String>,
}
#[derive(Serialize, Deserialize)]


@@ -1,6 +1,5 @@
use rocket::Route;
use rocket_contrib::json::Json;
-use serde_json;
use serde_json::Value;
use yubico::config::Config;
use yubico::verify;


@@ -1,3 +1,4 @@
use once_cell::sync::Lazy;
use std::fs::{create_dir_all, remove_file, symlink_metadata, File};
use std::io::prelude::*;
use std::net::ToSocketAddrs;
@@ -7,7 +8,7 @@ use rocket::http::ContentType;
use rocket::response::Content;
use rocket::Route;
-use reqwest::{header::HeaderMap, Client, Response, Url};
+use reqwest::{Url, header::HeaderMap, blocking::Client, blocking::Response};
use rocket::http::Cookie;
@@ -16,6 +17,7 @@ use soup::prelude::*;
use crate::error::Error;
use crate::CONFIG;
use crate::util::Cached;
pub fn routes() -> Vec<Route> {
routes![icon]
@@ -25,16 +27,14 @@ const FALLBACK_ICON: &[u8; 344] = include_bytes!("../static/fallback-icon.png");
const ALLOWED_CHARS: &str = "_-.";
-lazy_static! {
+static CLIENT: Lazy<Client> = Lazy::new(|| {
// Reuse the client between requests
-static ref CLIENT: Client = Client::builder()
+Client::builder()
-.use_sys_proxy()
-.gzip(true)
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(_header_map())
.build()
-.unwrap();
+.unwrap()
-}
+});
fn is_valid_domain(domain: &str) -> bool {
// Don't allow empty or too big domains or path traversal
@@ -53,15 +53,15 @@ fn is_valid_domain(domain: &str) -> bool {
}
#[get("/<domain>/icon.png")]
-fn icon(domain: String) -> Content<Vec<u8>> {
+fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
let icon_type = ContentType::new("image", "x-icon");
if !is_valid_domain(&domain) {
warn!("Invalid domain: {:#?}", domain);
-return Content(icon_type, FALLBACK_ICON.to_vec());
+return Cached::long(Content(icon_type, FALLBACK_ICON.to_vec()));
}
-Content(icon_type, get_icon(&domain))
+Cached::long(Content(icon_type, get_icon(&domain)))
}
fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
@@ -182,7 +182,7 @@ struct Icon {
}
impl Icon {
-fn new(priority: u8, href: String) -> Self {
+const fn new(priority: u8, href: String) -> Self {
Self { href, priority }
}
}
@@ -213,7 +213,7 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
let mut cookie_str = String::new();
let resp = get_page(&ssldomain).or_else(|_| get_page(&httpdomain));
-if let Ok(mut content) = resp {
+if let Ok(content) = resp {
// Extract the URL from the respose in case redirects occured (like @ gitlab.com)
let url = content.url().clone();
@@ -235,7 +235,7 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
// 512KB should be more than enough for the HTML, though as we only really need // 512KB should be more than enough for the HTML, though as we only really need
// the HTML header, it could potentially be reduced even further // the HTML header, it could potentially be reduced even further
let limited_reader = crate::util::LimitedReader::new(&mut content, 512 * 1024); let limited_reader = content.take(512 * 1024);
let soup = Soup::from_reader(limited_reader)?; let soup = Soup::from_reader(limited_reader)?;
// Search for and filter // Search for and filter

View File

@@ -38,7 +38,7 @@ fn login(data: Form<ConnectData>, conn: DbConn, ip: ClientIp) -> JsonResult {
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
-_password_login(data, conn, ip)
+_password_login(data, conn, &ip)
}
t => err!("Invalid type", t),
}
@@ -71,7 +71,7 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
})))
}
-fn _password_login(data: ConnectData, conn: DbConn, ip: ClientIp) -> JsonResult {
+fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult {
// Validate scope
let scope = data.scope.as_ref().unwrap();
if scope != "api offline_access" {
@@ -127,7 +127,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: ClientIp) -> JsonResult
let (mut device, new_device) = get_device(&data, &conn, &user);
-let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, &conn)?;
+let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, &ip, &conn)?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &device.updated_at, &device.name) {
@@ -197,6 +197,7 @@ fn twofactor_auth(
user_uuid: &str,
data: &ConnectData,
device: &mut Device,
+ip: &ClientIp,
conn: &DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(user_uuid, conn);
@@ -216,8 +217,7 @@ fn twofactor_auth(
let selected_twofactor = twofactors
.into_iter()
-.filter(|tf| tf.atype == selected_id && tf.enabled)
-.nth(0);
+.find(|tf| tf.atype == selected_id && tf.enabled);
use crate::api::core::two_factor as _tf;
use crate::crypto::ct_eq;
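`Iterator::find` replaces the `.filter(..).nth(0)` chain; a standalone illustration of the equivalence:

```rust
fn main() {
    let numbers = [3, 7, 8, 10];

    // Old pattern: filter, then take the first element of the filtered iterator.
    let first_even_old = numbers.iter().filter(|n| *n % 2 == 0).nth(0);

    // New pattern: `find` returns the first matching element directly.
    let first_even_new = numbers.iter().find(|n| *n % 2 == 0);

    assert_eq!(first_even_old, first_even_new);
    println!("{:?}", first_even_new); // Some(8)
}
```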
@@ -226,7 +226,7 @@ fn twofactor_auth(
let mut remember = data.two_factor_remember.unwrap_or(0);
match TwoFactorType::from_i32(selected_id) {
-Some(TwoFactorType::Authenticator) => _tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, conn)?,
+Some(TwoFactorType::Authenticator) => _tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?,
Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::Duo) => _tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?,
@@ -347,6 +347,7 @@ fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &DbConn) -> Api
Ok(result)
}
+// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default)]
#[allow(non_snake_case)]
struct ConnectData {
@@ -364,6 +365,7 @@ struct ConnectData {
device_identifier: Option<String>,
device_name: Option<String>,
device_type: Option<String>,
+device_push_token: Option<String>, // Unused; mobile device push not yet supported.
// Needed for two-factor auth
two_factor_provider: Option<i32>,
@@ -391,6 +393,7 @@ impl<'f> FromForm<'f> for ConnectData {
"deviceidentifier" => form.device_identifier = Some(value),
"devicename" => form.device_name = Some(value),
"devicetype" => form.device_type = Some(value),
+"devicepushtoken" => form.device_push_token = Some(value),
"twofactorprovider" => form.two_factor_provider = value.parse().ok(),
"twofactortoken" => form.two_factor_token = Some(value),
"twofactorremember" => form.two_factor_remember = value.parse().ok(),

View File

@@ -1,5 +1,5 @@
mod admin;
-pub(crate) mod core;
+pub mod core;
mod icons;
mod identity;
mod notifications;

View File

@@ -152,15 +152,19 @@ impl WSHandler {
impl Handler for WSHandler {
fn on_open(&mut self, hs: Handshake) -> ws::Result<()> {
// Path == "/notifications/hub?id=<id>==&access_token=<access_token>"
+//
+// We don't use `id`, and as of around 2020-03-25, the official clients
+// no longer seem to pass `id` (only `access_token`).
let path = hs.request.resource();
let (_id, access_token) = match path.split('?').nth(1) {
Some(params) => {
-let mut params_iter = params.split('&').take(2);
+let params_iter = params.split('&').take(2);
let mut id = None;
let mut access_token = None;
-while let Some(val) = params_iter.next() {
+for val in params_iter {
if val.starts_with(ID_KEY) {
id = Some(&val[ID_KEY.len()..]);
} else if val.starts_with(ACCESS_TOKEN_KEY) {
@@ -170,10 +174,11 @@ impl Handler for WSHandler {
match (id, access_token) {
(Some(a), Some(b)) => (a, b),
-_ => return self.err("Missing id or access token"),
+(None, Some(b)) => ("", b), // Ignore missing `id`.
+_ => return self.err("Missing access token"),
}
}
-None => return self.err("Missing query path"),
+None => return self.err("Missing query parameters"),
};
// Validate the user
@@ -256,7 +261,9 @@ impl Factory for WSFactory {
// Remove handler
if let Some(user_uuid) = &handler.user_uuid {
if let Some(mut user_conn) = self.users.map.get_mut(user_uuid) {
-user_conn.remove_item(&handler.out);
+if let Some(pos) = user_conn.iter().position(|x| x == &handler.out) {
+user_conn.remove(pos);
+}
}
}
}
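`Vec::remove_item` was an unstable, nightly-only API, so the handler now finds the element's index with `position` and removes it by index. The same pattern in isolation:

```rust
fn main() {
    let mut connections = vec!["conn-a", "conn-b", "conn-c"];
    let closing = "conn-b";

    // Locate the first matching element, then remove it by index.
    if let Some(pos) = connections.iter().position(|x| *x == closing) {
        connections.remove(pos);
    }

    assert_eq!(connections, vec!["conn-a", "conn-c"]);
}
```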

View File

@@ -37,7 +37,17 @@ fn app_id() -> Cached<Content<Json<Value>>> {
{
"version": { "major": 1, "minor": 0 },
"ids": [
-&CONFIG.domain(),
+// Per <https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-appid-and-facets-v2.0-id-20180227.html#determining-the-facetid-of-a-calling-application>:
+//
+// "In the Web case, the FacetID MUST be the Web Origin [RFC6454]
+// of the web page triggering the FIDO operation, written as
+// a URI with an empty path. Default ports are omitted and any
+// path component is ignored."
+//
+// This leaves it unclear as to whether the path must be empty,
+// or whether it can be non-empty and will be ignored. To be on
+// the safe side, use a proper web origin (with empty path).
+&CONFIG.domain_origin(),
"ios:bundle-id:com.8bit.bitwarden",
"android:apk-key-hash:dUGFzUzf3lmHSLBDBIv+WaFyZMI" ]
}]
@@ -68,6 +78,7 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
match filename.as_ref() {
"mail-github.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/mail-github.png"))),
"logo-gray.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/logo-gray.png"))),
+"shield-white.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/shield-white.png"))),
"error-x.svg" => Ok(Content(ContentType::SVG, include_bytes!("../static/images/error-x.svg"))),
"hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
@@ -75,6 +86,6 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
"bootstrap-native-v4.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native-v4.js"))),
"md5.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/md5.js"))),
"identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
-_ => err!("Image not found"),
+_ => err!(format!("Static file not found: {}", filename)),
}
}

View File

@@ -3,8 +3,10 @@
//
use crate::util::read_file;
use chrono::{Duration, Utc};
+use once_cell::sync::Lazy;
+use num_traits::FromPrimitive;
-use jsonwebtoken::{self, Algorithm, Header};
+use jsonwebtoken::{self, Algorithm, Header, EncodingKey, DecodingKey};
use serde::de::DeserializeOwned;
use serde::ser::Serialize;
@@ -13,26 +15,24 @@ use crate::CONFIG;
const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
-lazy_static! {
-pub static ref DEFAULT_VALIDITY: Duration = Duration::hours(2);
-static ref JWT_HEADER: Header = Header::new(JWT_ALGORITHM);
-pub static ref JWT_LOGIN_ISSUER: String = format!("{}|login", CONFIG.domain());
-pub static ref JWT_INVITE_ISSUER: String = format!("{}|invite", CONFIG.domain());
-pub static ref JWT_DELETE_ISSUER: String = format!("{}|delete", CONFIG.domain());
-pub static ref JWT_VERIFYEMAIL_ISSUER: String = format!("{}|verifyemail", CONFIG.domain());
-pub static ref JWT_ADMIN_ISSUER: String = format!("{}|admin", CONFIG.domain());
-static ref PRIVATE_RSA_KEY: Vec<u8> = match read_file(&CONFIG.private_rsa_key()) {
-Ok(key) => key,
-Err(e) => panic!("Error loading private RSA Key.\n Error: {}", e),
-};
-static ref PUBLIC_RSA_KEY: Vec<u8> = match read_file(&CONFIG.public_rsa_key()) {
-Ok(key) => key,
-Err(e) => panic!("Error loading public RSA Key.\n Error: {}", e),
-};
-}
+pub static DEFAULT_VALIDITY: Lazy<Duration> = Lazy::new(|| Duration::hours(2));
+static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
+pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
+static JWT_INVITE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|invite", CONFIG.domain_origin()));
+static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFIG.domain_origin()));
+static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
+static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
+static PRIVATE_RSA_KEY: Lazy<Vec<u8>> = Lazy::new(|| match read_file(&CONFIG.private_rsa_key()) {
+Ok(key) => key,
+Err(e) => panic!("Error loading private RSA Key.\n Error: {}", e),
+});
+static PUBLIC_RSA_KEY: Lazy<Vec<u8>> = Lazy::new(|| match read_file(&CONFIG.public_rsa_key()) {
+Ok(key) => key,
+Err(e) => panic!("Error loading public RSA Key.\n Error: {}", e),
+});
pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
-match jsonwebtoken::encode(&JWT_HEADER, claims, &PRIVATE_RSA_KEY) {
+match jsonwebtoken::encode(&JWT_HEADER, claims, &EncodingKey::from_rsa_der(&PRIVATE_RSA_KEY)) {
Ok(token) => token,
Err(e) => panic!("Error encoding jwt {}", e),
}
@@ -51,7 +51,7 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
let token = token.replace(char::is_whitespace, "");
-jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation)
+jsonwebtoken::decode(&token, &DecodingKey::from_rsa_der(&PUBLIC_RSA_KEY), &validation)
.map(|d| d.claims)
.map_res("Error decoding JWT")
}
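jsonwebtoken 7 wraps key material in `EncodingKey`/`DecodingKey` instead of accepting raw byte slices. A self-contained sketch of the same RS256 round trip, assuming DER-encoded RSA keys on disk (the paths, claims struct, and issuer below are placeholders):

```rust
use jsonwebtoken::{decode, encode, Algorithm, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct Claims {
    sub: String, // subject (e.g. the user UUID)
    exp: i64,    // expiry, as a unix timestamp
    iss: String, // issuer
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder paths; the server loads these via CONFIG.private_rsa_key()/public_rsa_key().
    let private_der = std::fs::read("data/private_rsa_key.der")?;
    let public_der = std::fs::read("data/public_rsa_key.der")?;

    let claims = Claims {
        sub: "user-uuid".into(),
        exp: 4_102_444_800, // far enough in the future to pass validation
        iss: "https://bw.example.com|login".into(),
    };

    // Encoding takes an EncodingKey built from the DER bytes...
    let token = encode(&Header::new(Algorithm::RS256), &claims, &EncodingKey::from_rsa_der(&private_der))?;

    // ...and decoding takes a DecodingKey plus a Validation describing what to accept.
    let mut validation = Validation::new(Algorithm::RS256);
    validation.iss = Some("https://bw.example.com|login".into());
    let data = decode::<Claims>(&token, &DecodingKey::from_rsa_der(&public_der), &validation)?;
    println!("decoded claims: {:?}", data.claims);
    Ok(())
}
```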
@@ -307,6 +307,25 @@ pub struct OrgHeaders {
pub org_user_type: UserOrgType,
}
+// org_id is usually the second param ("/organizations/<org_id>")
+// But there are cases where it is located in a query value.
+// First check the param, if this is not a valid uuid, we will try the query value.
+fn get_org_id(request: &Request) -> Option<String> {
+if let Some(Ok(org_id)) = request.get_param::<String>(1) {
+if uuid::Uuid::parse_str(&org_id).is_ok() {
+return Some(org_id);
+}
+}
+if let Some(Ok(org_id)) = request.get_query_value::<String>("organizationId") {
+if uuid::Uuid::parse_str(&org_id).is_ok() {
+return Some(org_id);
+}
+}
+None
+}
impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
type Error = &'static str;
@@ -315,9 +334,8 @@ impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
-// org_id is expected to be the second param ("/organizations/<org_id>")
-match request.get_param::<String>(1) {
-Some(Ok(org_id)) => {
+match get_org_id(request) {
+Some(org_id) => {
let conn = match request.guard::<DbConn>() {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
@@ -348,7 +366,7 @@ impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
}
},
})
-}
+},
_ => err_handler!("Error getting the organization id"),
}
}
@@ -386,6 +404,16 @@ impl<'a, 'r> FromRequest<'a, 'r> for AdminHeaders {
}
}
+impl Into<Headers> for AdminHeaders {
+fn into(self) -> Headers {
+Headers {
+host: self.host,
+device: self.device,
+user: self.user
+}
+}
+}
pub struct OwnerHeaders {
pub host: String,
pub device: Device,

View File

@@ -1,19 +1,23 @@
+use once_cell::sync::Lazy;
use std::process::exit;
use std::sync::RwLock;
-use crate::error::Error;
-use crate::util::get_env;
-lazy_static! {
-pub static ref CONFIG: Config = Config::load().unwrap_or_else(|e| {
-println!("Error loading config:\n\t{:?}\n", e);
-exit(12)
-});
-pub static ref CONFIG_FILE: String = {
-let data_folder = get_env("DATA_FOLDER").unwrap_or_else(|| String::from("data"));
-get_env("CONFIG_FILE").unwrap_or_else(|| format!("{}/config.json", data_folder))
-};
-}
+use reqwest::Url;
+use crate::error::Error;
+use crate::util::{get_env, get_env_bool};
+static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
+let data_folder = get_env("DATA_FOLDER").unwrap_or_else(|| String::from("data"));
+get_env("CONFIG_FILE").unwrap_or_else(|| format!("{}/config.json", data_folder))
+});
+pub static CONFIG: Lazy<Config> = Lazy::new(|| {
+Config::load().unwrap_or_else(|e| {
+println!("Error loading config:\n\t{:?}\n", e);
+exit(12)
+})
+});
pub type Pass = String;
@@ -23,13 +27,13 @@ macro_rules! make_config {
$group:ident $(: $group_enabled:ident)? {
$(
$(#[doc = $doc:literal])+
-$name:ident : $ty:ty, $editable:literal, $none_action:ident $(, $default:expr)?;
+$name:ident : $ty:ident, $editable:literal, $none_action:ident $(, $default:expr)?;
)+},
)+) => {
pub struct Config { inner: RwLock<Inner> }
struct Inner {
-templates: Handlebars,
+templates: Handlebars<'static>,
config: ConfigItems,
_env: ConfigBuilder,
@@ -50,7 +54,7 @@ macro_rules! make_config {
let mut builder = ConfigBuilder::default();
$($(
-builder.$name = get_env(&stringify!($name).to_uppercase());
+builder.$name = make_config! { @getenv &stringify!($name).to_uppercase(), $ty };
)+)+
builder
@@ -108,6 +112,8 @@ macro_rules! make_config {
)+)+
config.domain_set = _domain_set;
+config.signups_domains_whitelist = config.signups_domains_whitelist.trim().to_lowercase();
config
}
}
@@ -129,7 +135,6 @@ macro_rules! make_config {
(inner._env.build(), inner.config.clone())
};
fn _get_form_type(rust_type: &str) -> &'static str {
match rust_type {
"Pass" => "password",
@@ -189,6 +194,10 @@ macro_rules! make_config {
let f: &dyn Fn(&ConfigItems) -> _ = &$default_fn;
f($config)
}};
+( @getenv $name:expr, bool ) => { get_env_bool($name) };
+( @getenv $name:expr, $ty:ident ) => { get_env($name) };
}
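`get_env_bool` lives in `crate::util` and is not shown in this compare; a plausible sketch of such a helper (hypothetical implementation, accepting a few common truthy/falsy spellings) is:

```rust
use std::env;

// Hypothetical implementation; the real crate::util::get_env_bool is not part of this diff.
fn get_env_bool(key: &str) -> Option<bool> {
    const TRUE_VALUES: &[&str] = &["true", "t", "yes", "y", "1"];
    const FALSE_VALUES: &[&str] = &["false", "f", "no", "n", "0"];

    let value = env::var(key).ok()?.to_lowercase();
    if TRUE_VALUES.contains(&value.as_str()) {
        Some(true)
    } else if FALSE_VALUES.contains(&value.as_str()) {
        Some(false)
    } else {
        None
    }
}

fn main() {
    env::set_var("WEB_VAULT_ENABLED", "false");
    assert_eq!(get_env_bool("WEB_VAULT_ENABLED"), Some(false));
    assert_eq!(get_env_bool("NOT_SET_AT_ALL"), None);
}
```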
//STRUCTURE:
@@ -236,17 +245,26 @@ make_config! {
domain: String, true, def, "http://localhost".to_string();
/// Domain Set |> Indicates if the domain is set by the admin. Otherwise the default will be used.
domain_set: bool, false, def, false;
+/// Domain origin |> Domain URL origin (in https://example.com:8443/path, https://example.com:8443 is the origin)
+domain_origin: String, false, auto, |c| extract_url_origin(&c.domain);
+/// Domain path |> Domain URL path (in https://example.com:8443/path, /path is the path)
+domain_path: String, false, auto, |c| extract_url_path(&c.domain);
/// Enable web vault
web_vault_enabled: bool, false, def, true;
/// HIBP Api Key |> HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
hibp_api_key: Pass, true, option;
+/// Per-user attachment limit (KB) |> Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more
+user_attachment_limit: i64, true, option;
+/// Per-organization attachment limit (KB) |> Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
+org_attachment_limit: i64, true, option;
/// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from
/// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
/// otherwise it will delete them and they won't be downloaded again.
disable_icon_download: bool, true, def, false;
-/// Allow new signups |> Controls if new users can register. Note that while this is disabled, users could still be invited
+/// Allow new signups |> Controls whether new users can register. Users can be invited by the bitwarden_rs admin even if this is disabled
signups_allowed: bool, true, def, true;
/// Require email verification on signups. This will prevent logins from succeeding until the address has been verified
signups_verify: bool, true, def, false;
@@ -254,9 +272,9 @@ make_config! {
signups_verify_resend_time: u64, true, def, 3_600;
/// If signups require email verification, limit how many emails are automatically sent when login is attempted (0 means no limit)
signups_verify_resend_limit: u32, true, def, 6;
-/// Allow signups only from this list of comma-separated domains
+/// Email domain whitelist |> Allow signups only from this list of comma-separated domains, even when signups are otherwise disabled
signups_domains_whitelist: String, true, def, "".to_string();
-/// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are disabled
+/// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
invitations_allowed: bool, true, def, true;
/// Password iterations |> Number of server-side passwords hashing iterations.
/// The changes only apply when a user changes their password. Not recommended to lower the value
@@ -267,6 +285,9 @@ make_config! {
/// Admin page token |> The token used to authenticate in this very same page. Changing it here won't deauthorize the current session
admin_token: Pass, true, option;
+/// Invitation organization name |> Name shown in the invitation emails that don't come from a specific organization
+invitation_org_name: String, true, def, "Bitwarden_RS".to_string();
},
/// Advanced settings
@@ -295,7 +316,7 @@ make_config! {
/// Disable authenticator time drifted codes to be valid |> Enabling this only allows the current TOTP code to be valid
/// TOTP codes of the previous and next 30 seconds will be invalid.
authenticator_disable_time_drift: bool, true, def, false;
/// Require new device emails |> When a user logs in an email is required to be sent.
/// If sending the email fails the login attempt will fail.
@@ -319,6 +340,9 @@ make_config! {
/// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in-front
disable_admin_token: bool, true, def, false;
+/// Allowed iframe ancestors (Know the risks!) |> Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
+allowed_iframe_ancestors: String, true, def, String::new();
},
/// Yubikey settings
@@ -400,9 +424,20 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
err!("`DATABASE_URL` should start with postgresql: when using the PostgreSQL server")
}
+let dom = cfg.domain.to_lowercase();
+if !dom.starts_with("http://") && !dom.starts_with("https://") {
+err!("DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'");
+}
+let whitelist = &cfg.signups_domains_whitelist;
+if !whitelist.is_empty() && whitelist.split(',').any(|d| d.trim().is_empty()) {
+err!("`SIGNUPS_DOMAINS_WHITELIST` contains empty tokens");
+}
if let Some(ref token) = cfg.admin_token {
-if token.trim().is_empty() {
-err!("`ADMIN_TOKEN` is enabled but has an empty value. To enable the admin page without token, use `DISABLE_ADMIN_TOKEN`")
+if token.trim().is_empty() && !cfg.disable_admin_token {
+println!("[WARNING] `ADMIN_TOKEN` is enabled but has an empty value, so the admin page will be disabled.");
+println!("[WARNING] To enable the admin page without a token, use `DISABLE_ADMIN_TOKEN`.");
}
}
@@ -442,6 +477,29 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
Ok(())
}
+/// Extracts an RFC 6454 web origin from a URL.
+fn extract_url_origin(url: &str) -> String {
+match Url::parse(url) {
+Ok(u) => u.origin().ascii_serialization(),
+Err(e) => {
+println!("Error validating domain: {}", e);
+String::new()
+}
+}
+}
+/// Extracts the path from a URL.
+/// All trailing '/' chars are trimmed, even if the path is a lone '/'.
+fn extract_url_path(url: &str) -> String {
+match Url::parse(url) {
+Ok(u) => u.path().trim_end_matches('/').to_string(),
+Err(_) => {
+// We already print it in the method above, no need to do it again
+String::new()
+}
+}
+}
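A quick usage sketch of the two helpers above, restated self-contained (assuming the `url`/`reqwest` origin serialization, which omits default ports):

```rust
use reqwest::Url;

fn extract_url_origin(url: &str) -> String {
    match Url::parse(url) {
        Ok(u) => u.origin().ascii_serialization(),
        Err(_) => String::new(),
    }
}

fn extract_url_path(url: &str) -> String {
    match Url::parse(url) {
        Ok(u) => u.path().trim_end_matches('/').to_string(),
        Err(_) => String::new(),
    }
}

fn main() {
    let domain = "https://example.com:8443/path/";
    assert_eq!(extract_url_origin(domain), "https://example.com:8443");
    assert_eq!(extract_url_path(domain), "/path");

    // Default ports are dropped from the origin.
    assert_eq!(extract_url_origin("https://bw.example.com:443/"), "https://bw.example.com");
}
```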
impl Config {
pub fn load() -> Result<Self, Error> {
// Loading from env and file
@@ -500,13 +558,30 @@ impl Config {
self.update_config(builder)
}
-pub fn can_signup_user(&self, email: &str) -> bool {
+/// Tests whether an email's domain is allowed. A domain is allowed if it
+/// is in signups_domains_whitelist, or if no whitelist is set (so there
+/// are no domain restrictions in effect).
+pub fn is_email_domain_allowed(&self, email: &str) -> bool {
let e: Vec<&str> = email.rsplitn(2, '@').collect();
if e.len() != 2 || e[0].is_empty() || e[1].is_empty() {
warn!("Failed to parse email address '{}'", email);
return false;
}
-self.signups_domains_whitelist().split(',').any(|d| d == e[0])
+let email_domain = e[0].to_lowercase();
+let whitelist = self.signups_domains_whitelist();
+whitelist.is_empty() || whitelist.split(',').any(|d| d.trim() == email_domain)
+}
+/// Tests whether signup is allowed for an email address, taking into
+/// account the signups_allowed and signups_domains_whitelist settings.
+pub fn is_signup_allowed(&self, email: &str) -> bool {
+if !self.signups_domains_whitelist().is_empty() {
+// The whitelist setting overrides the signups_allowed setting.
+self.is_email_domain_allowed(email)
+} else {
+self.signups_allowed()
+}
}
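How the two settings interact, as a free-standing sketch that mirrors the logic above (hypothetical helper, since the real methods hang off `Config`):

```rust
// Hypothetical free-standing version of Config::is_signup_allowed, for illustration only.
fn is_signup_allowed(signups_allowed: bool, whitelist: &str, email: &str) -> bool {
    let parts: Vec<&str> = email.rsplitn(2, '@').collect();
    if parts.len() != 2 || parts[0].is_empty() || parts[1].is_empty() {
        return false;
    }
    let domain = parts[0].to_lowercase();

    if !whitelist.is_empty() {
        // A non-empty whitelist overrides SIGNUPS_ALLOWED entirely.
        whitelist.split(',').any(|d| d.trim() == domain)
    } else {
        signups_allowed
    }
}

fn main() {
    // Whitelist set: only listed domains may register, even with signups disabled.
    assert!(is_signup_allowed(false, "example.com,example.org", "user@example.com"));
    assert!(!is_signup_allowed(false, "example.com", "user@elsewhere.test"));

    // No whitelist: SIGNUPS_ALLOWED decides.
    assert!(is_signup_allowed(true, "", "anyone@anywhere.test"));
    assert!(!is_signup_allowed(false, "", "anyone@anywhere.test"));
}
```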
pub fn delete_user_config(&self) -> Result<(), Error> {
@@ -561,6 +636,13 @@ impl Config {
}
}
+/// Tests whether the admin token is set to a non-empty value.
+pub fn is_admin_token_set(&self) -> bool {
+let token = self.admin_token();
+token.is_some() && !token.unwrap().trim().is_empty()
+}
pub fn render_template<T: serde::ser::Serialize>(
&self,
name: &str,
@@ -568,7 +650,7 @@ impl Config {
) -> Result<String, crate::error::Error> {
if CONFIG.reload_templates() {
warn!("RELOADING TEMPLATES");
-let hb = load_templates(CONFIG.templates_folder().as_ref());
+let hb = load_templates(CONFIG.templates_folder());
hb.render(name, data).map_err(Into::into)
} else {
let hb = &CONFIG.inner.read().unwrap().templates;
@@ -577,17 +659,18 @@ impl Config {
}
}
-use handlebars::{
-Context, Handlebars, Helper, HelperDef, HelperResult, Output, RenderContext, RenderError, Renderable,
-};
+use handlebars::{Context, Handlebars, Helper, HelperResult, Output, RenderContext, RenderError, Renderable};
-fn load_templates(path: &str) -> Handlebars {
+fn load_templates<P>(path: P) -> Handlebars<'static>
+where
+P: AsRef<std::path::Path>,
+{
let mut hb = Handlebars::new();
// Error on missing params
hb.set_strict_mode(true);
// Register helpers
-hb.register_helper("case", Box::new(CaseHelper));
-hb.register_helper("jsesc", Box::new(JsEscapeHelper));
+hb.register_helper("case", Box::new(case_helper));
+hb.register_helper("jsesc", Box::new(js_escape_helper));
macro_rules! reg {
($name:expr) => {{
@@ -613,10 +696,14 @@ fn load_templates(path: &str) -> Handlebars {
reg!("email/verify_email", ".html");
reg!("email/welcome", ".html");
reg!("email/welcome_must_verify", ".html");
+reg!("email/smtp_test", ".html");
reg!("admin/base");
reg!("admin/login");
-reg!("admin/page");
+reg!("admin/settings");
+reg!("admin/users");
+reg!("admin/organizations");
+reg!("admin/diagnostics");
// And then load user templates to overwrite the defaults
// Use .hbs extension for the files
@@ -626,48 +713,44 @@ fn load_templates(path: &str) -> Handlebars {
hb
}
-pub struct CaseHelper;
-impl HelperDef for CaseHelper {
-fn call<'reg: 'rc, 'rc>(
-&self,
-h: &Helper<'reg, 'rc>,
-r: &'reg Handlebars,
-ctx: &Context,
-rc: &mut RenderContext<'reg>,
-out: &mut dyn Output,
-) -> HelperResult {
-let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
-let value = param.value().clone();
-if h.params().iter().skip(1).any(|x| x.value() == &value) {
-h.template().map(|t| t.render(r, ctx, rc, out)).unwrap_or(Ok(()))
-} else {
-Ok(())
-}
-}
-}
-pub struct JsEscapeHelper;
-impl HelperDef for JsEscapeHelper {
-fn call<'reg: 'rc, 'rc>(
-&self,
-h: &Helper<'reg, 'rc>,
-_: &'reg Handlebars,
-_: &Context,
-_: &mut RenderContext<'reg>,
-out: &mut dyn Output,
-) -> HelperResult {
-let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;
-let value =
-param.value().as_str().ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;
-let escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
-let quoted_value = format!("&quot;{}&quot;", escaped_value);
-out.write(&quoted_value)?;
-Ok(())
-}
-}
+fn case_helper<'reg, 'rc>(
+h: &Helper<'reg, 'rc>,
+r: &'reg Handlebars,
+ctx: &'rc Context,
+rc: &mut RenderContext<'reg, 'rc>,
+out: &mut dyn Output,
+) -> HelperResult {
+let param = h
+.param(0)
+.ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
+let value = param.value().clone();
+if h.params().iter().skip(1).any(|x| x.value() == &value) {
+h.template().map(|t| t.render(r, ctx, rc, out)).unwrap_or(Ok(()))
+} else {
+Ok(())
+}
+}
+fn js_escape_helper<'reg, 'rc>(
+h: &Helper<'reg, 'rc>,
+_r: &'reg Handlebars,
+_ctx: &'rc Context,
+_rc: &mut RenderContext<'reg, 'rc>,
+out: &mut dyn Output,
+) -> HelperResult {
+let param = h
+.param(0)
+.ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;
+let value = param
+.value()
+.as_str()
+.ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;
+let escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
+let quoted_value = format!("&quot;{}&quot;", escaped_value);
+out.write(&quoted_value)?;
+Ok(())
+}

View File

@@ -6,7 +6,7 @@ use crate::error::Error;
use ring::{digest, hmac, pbkdf2};
use std::num::NonZeroU32;
-static DIGEST_ALG: &digest::Algorithm = &digest::SHA256;
+static DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
const OUTPUT_LEN: usize = digest::SHA256_OUTPUT_LEN;
pub fn hash_password(secret: &[u8], salt: &[u8], iterations: u32) -> Vec<u8> {
@@ -29,7 +29,7 @@ pub fn verify_password_hash(secret: &[u8], salt: &[u8], previous: &[u8], iterati
pub fn hmac_sign(key: &str, data: &str) -> String {
use data_encoding::HEXLOWER;
-let key = hmac::SigningKey::new(&digest::SHA1, key.as_bytes());
+let key = hmac::Key::new(hmac::HMAC_SHA1_FOR_LEGACY_USE_ONLY, key.as_bytes());
let signature = hmac::sign(&key, data.as_bytes());
HEXLOWER.encode(signature.as_ref())
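Newer ring versions select PBKDF2 via `pbkdf2::Algorithm` values and HMAC keys via `hmac::Key`. A small self-contained sketch of deriving and verifying a password hash with `PBKDF2_HMAC_SHA256` (salt and iteration count are placeholders):

```rust
use std::num::NonZeroU32;

use ring::{digest, pbkdf2};

static ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
const OUTPUT_LEN: usize = digest::SHA256_OUTPUT_LEN;

fn main() {
    let secret = b"correct horse battery staple";
    let salt = b"per-user-random-salt"; // placeholder; use a random per-user salt
    let iterations = NonZeroU32::new(100_000).unwrap();

    // Derive the hash into a fixed-size output buffer.
    let mut out = vec![0u8; OUTPUT_LEN];
    pbkdf2::derive(ALG, iterations, salt, secret, &mut out);

    // Verify the same secret against the previously derived hash.
    assert!(pbkdf2::verify(ALG, iterations, salt, secret, &out).is_ok());
}
```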

View File

@@ -76,7 +76,8 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
type Error = ();
fn from_request(request: &'a Request<'r>) -> request::Outcome<DbConn, ()> {
-let pool = request.guard::<State<Pool>>()?;
+// https://github.com/SergioBenitez/Rocket/commit/e3c1a4ad3ab9b840482ec6de4200d30df43e357c
+let pool = try_outcome!(request.guard::<State<Pool>>());
match pool.get() {
Ok(conn) => Outcome::Success(DbConn(conn)),
Err(_) => Outcome::Failure((Status::ServiceUnavailable, ())),

View File

@@ -5,6 +5,7 @@ use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "attachments"]
+#[changeset_options(treat_none_as_null="true")]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[primary_key(id)]
pub struct Attachment {
@@ -17,7 +18,7 @@ pub struct Attachment {
/// Local methods
impl Attachment {
-pub fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32) -> Self {
+pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32) -> Self {
Self {
id,
cipher_uuid,
@@ -49,9 +50,8 @@ impl Attachment {
}
}
-use crate::db::schema::attachments;
+use crate::db::schema::{attachments, ciphers};
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;
@@ -118,4 +118,46 @@ impl Attachment {
.load::<Self>(&**conn)
.expect("Error loading attachments")
}
+pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
+let result: Option<i64> = attachments::table
+.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
+.filter(ciphers::user_uuid.eq(user_uuid))
+.select(diesel::dsl::sum(attachments::file_size))
+.first(&**conn)
+.expect("Error loading user attachment total size");
+result.unwrap_or(0)
+}
+pub fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
+attachments::table
+.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
+.filter(ciphers::user_uuid.eq(user_uuid))
+.count()
+.first::<i64>(&**conn)
+.ok()
+.unwrap_or(0)
+}
+pub fn size_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
+let result: Option<i64> = attachments::table
+.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
+.filter(ciphers::organization_uuid.eq(org_uuid))
+.select(diesel::dsl::sum(attachments::file_size))
+.first(&**conn)
+.expect("Error loading user attachment total size");
+result.unwrap_or(0)
+}
+pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
+attachments::table
+.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
+.filter(ciphers::organization_uuid.eq(org_uuid))
+.count()
+.first(&**conn)
+.ok()
+.unwrap_or(0)
+}
}

View File

@@ -7,6 +7,7 @@ use super::{
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "ciphers"]
+#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
@@ -33,6 +34,7 @@ pub struct Cipher {
pub favorite: bool,
pub password_history: Option<String>,
+pub deleted_at: Option<NaiveDateTime>,
}
/// Local methods
@@ -57,13 +59,13 @@ impl Cipher {
data: String::new(),
password_history: None,
+deleted_at: None,
}
}
}
use crate::db::schema::*;
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;
@@ -80,7 +82,19 @@ impl Cipher {
let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let password_history_json = self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
-let mut data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
+// Get the data or a default empty value to avoid issues with the mobile apps
+let mut data_json: Value = serde_json::from_str(&self.data).unwrap_or_else(|_| json!({
+"Fields":null,
+"Name": self.name,
+"Notes":null,
+"Password":null,
+"PasswordHistory":null,
+"PasswordRevisionDate":null,
+"Response":null,
+"Totp":null,
+"Uris":null,
+"Username":null
+}));
// TODO: ******* Backwards compat start **********
// To remove backwards compatibility, just remove this entire section
@@ -95,6 +109,7 @@ impl Cipher {
"Id": self.uuid,
"Type": self.atype,
"RevisionDate": format_date(&self.updated_at),
+"DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"FolderId": self.get_folder_uuid(&user_uuid, &conn),
"Favorite": self.favorite,
"OrganizationId": self.organization_uuid,
@@ -340,12 +355,30 @@ impl Cipher {
.load::<Self>(&**conn).expect("Error loading ciphers")
}
+pub fn count_owned_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
+ciphers::table
+.filter(ciphers::user_uuid.eq(user_uuid))
+.count()
+.first::<i64>(&**conn)
+.ok()
+.unwrap_or(0)
+}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
.load::<Self>(&**conn).expect("Error loading ciphers")
}
+pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
+ciphers::table
+.filter(ciphers::organization_uuid.eq(org_uuid))
+.count()
+.first::<i64>(&**conn)
+.ok()
+.unwrap_or(0)
+}
pub fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))

View File

@@ -35,7 +35,6 @@ impl Collection {
use crate::db::schema::*;
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;

View File

@@ -5,6 +5,7 @@ use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "devices"]
+#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Device {
@@ -76,7 +77,6 @@ impl Device {
let orguser: Vec<_> = orgs.iter().filter(|o| o.atype == 2).map(|o| o.org_uuid.clone()).collect();
let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJWTClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
let claims = LoginJWTClaims {
@@ -107,7 +107,6 @@ impl Device {
use crate::db::schema::devices;
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;

View File

@@ -63,7 +63,6 @@ impl FolderCipher {
use crate::db::schema::{folders, folders_ciphers};
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;

View File

@@ -7,6 +7,7 @@ mod user;
mod collection;
mod organization;
mod two_factor;
+mod org_policy;
pub use self::attachment::Attachment;
pub use self::cipher::Cipher;
@@ -17,3 +18,4 @@ pub use self::organization::Organization;
pub use self::organization::{UserOrgStatus, UserOrgType, UserOrganization};
pub use self::two_factor::{TwoFactor, TwoFactorType};
pub use self::user::{Invitation, User};
+pub use self::org_policy::{OrgPolicy, OrgPolicyType};

src/db/models/org_policy.rs (new file, 141 lines)
View File

@@ -0,0 +1,141 @@
use diesel::prelude::*;
use serde_json::Value;
use crate::api::EmptyResult;
use crate::db::schema::org_policies;
use crate::db::DbConn;
use crate::error::MapResult;
use super::Organization;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "org_policies"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct OrgPolicy {
pub uuid: String,
pub org_uuid: String,
pub atype: i32,
pub enabled: bool,
pub data: String,
}
#[allow(dead_code)]
#[derive(num_derive::FromPrimitive)]
pub enum OrgPolicyType {
TwoFactorAuthentication = 0,
MasterPassword = 1,
PasswordGenerator = 2,
}
/// Local methods
impl OrgPolicy {
pub fn new(org_uuid: String, atype: OrgPolicyType, data: String) -> Self {
Self {
uuid: crate::util::get_uuid(),
org_uuid,
atype: atype as i32,
enabled: false,
data,
}
}
pub fn to_json(&self) -> Value {
let data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
json!({
"Id": self.uuid,
"OrganizationId": self.org_uuid,
"Type": self.atype,
"Data": data_json,
"Enabled": self.enabled,
"Object": "policy",
})
}
}
/// Database methods
impl OrgPolicy {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
// We need to make sure we're not going to violate the unique constraint on org_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(
org_policies::table
.filter(org_policies::org_uuid.eq(&self.org_uuid))
.filter(org_policies::atype.eq(&self.atype)),
)
.execute(&**conn)
.map_res("Error deleting org_policy for insert")?;
diesel::insert_into(org_policies::table)
.values(self)
.on_conflict(org_policies::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving org_policy")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::replace_into(org_policies::table)
.values(&*self)
.execute(&**conn)
.map_res("Error saving org_policy")
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(org_policies::table.filter(org_policies::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error deleting org_policy")
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.load::<Self>(&**conn)
.expect("Error loading org_policy")
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
use crate::db::schema::users_organizations;
org_policies::table
.left_join(
users_organizations::table.on(
users_organizations::org_uuid.eq(org_policies::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.select(org_policies::all_columns)
.load::<Self>(&**conn)
.expect("Error loading org_policy")
}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.filter(org_policies::atype.eq(atype))
.first::<Self>(&**conn)
.ok()
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(&**conn)
.map_res("Error deleting org_policy")
}
/*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(&**conn)
.map_res("Error deleting twofactors")
}*/
}

View File

@@ -1,7 +1,8 @@
use serde_json::Value;
use std::cmp::Ordering;
+use num_traits::FromPrimitive;
-use super::{CollectionUser, User};
+use super::{CollectionUser, User, OrgPolicy};
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "organizations"]
@@ -33,6 +34,7 @@ pub enum UserOrgStatus {
}
#[derive(Copy, Clone, PartialEq, Eq)]
+#[derive(num_derive::FromPrimitive)]
pub enum UserOrgType {
Owner = 0,
Admin = 1,
@@ -135,16 +137,6 @@ impl UserOrgType {
_ => None,
}
}
-pub fn from_i32(i: i32) -> Option<Self> {
-match i {
-0 => Some(UserOrgType::Owner),
-1 => Some(UserOrgType::Admin),
-2 => Some(UserOrgType::User),
-3 => Some(UserOrgType::Manager),
-_ => None,
-}
-}
}
/// Local methods
@@ -170,11 +162,12 @@ impl Organization {
"UseEvents": false,
"UseGroups": false,
"UseTotp": true,
+"UsePolicies": true,
"BusinessName": null,
"BusinessAddress1": null,
"BusinessAddress2": null,
"BusinessAddress3": null,
"BusinessCountry": null,
"BusinessTaxNumber": null,
@@ -205,7 +198,6 @@ impl UserOrganization {
use crate::db::schema::{ciphers_collections, organizations, users_collections, users_organizations};
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;
@@ -250,6 +242,7 @@ impl Organization {
Cipher::delete_all_by_organization(&self.uuid, &conn)?;
Collection::delete_all_by_organization(&self.uuid, &conn)?;
UserOrganization::delete_all_by_organization(&self.uuid, &conn)?;
+OrgPolicy::delete_all_by_organization(&self.uuid, &conn)?;
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
.execute(&**conn)
@@ -262,12 +255,16 @@ impl Organization {
.first::<Self>(&**conn)
.ok()
}
+pub fn get_all(conn: &DbConn) -> Vec<Self> {
+organizations::table.load::<Self>(&**conn).expect("Error loading organizations")
+}
}
impl UserOrganization {
pub fn to_json(&self, conn: &DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).unwrap();
json!({
"Id": self.org_uuid,
"Name": org.name,
@@ -280,6 +277,9 @@ impl UserOrganization {
"UseEvents": false,
"UseGroups": false,
"UseTotp": true,
+"UsePolicies": true,
+"UseApi": false,
+"SelfHost": true,
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@@ -310,7 +310,7 @@ impl UserOrganization {
})
}
-pub fn to_json_collection_user_details(&self, read_only: bool) -> Value {
+pub fn to_json_read_only(&self, read_only: bool) -> Value {
json!({
"Id": self.uuid,
"ReadOnly": read_only
@@ -437,6 +437,15 @@ impl UserOrganization {
.expect("Error loading user organizations")
}
+pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
+users_organizations::table
+.filter(users_organizations::org_uuid.eq(org_uuid))
+.count()
+.first::<i64>(&**conn)
+.ok()
+.unwrap_or(0)
+}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))

View File

@@ -1,4 +1,3 @@
-use diesel;
use diesel::prelude::*;
use serde_json::Value;
@@ -23,7 +22,7 @@ pub struct TwoFactor {
}
#[allow(dead_code)]
-#[derive(FromPrimitive)]
+#[derive(num_derive::FromPrimitive)]
pub enum TwoFactorType {
Authenticator = 0,
Email = 1,
@@ -60,7 +59,7 @@ impl TwoFactor {
})
}
-pub fn to_json_list(&self) -> Value {
+pub fn to_json_provider(&self) -> Value {
json!({
"Enabled": self.enabled,
"Type": self.atype,
@@ -73,6 +72,13 @@ impl TwoFactor {
impl TwoFactor {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
+// We need to make sure we're not going to violate the unique constraint on user_uuid and atype.
+// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
+// not support multiple constraints on ON CONFLICT clauses.
+diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(&self.user_uuid)).filter(twofactor::atype.eq(&self.atype)))
+.execute(&**conn)
+.map_res("Error deleting twofactor for insert")?;
diesel::insert_into(twofactor::table)
.values(self)
.on_conflict(twofactor::uuid)

View File

@@ -6,6 +6,7 @@ use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users"]
+#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
pub struct User {
pub uuid: String,
@@ -120,7 +121,6 @@ impl User {
use super::{Cipher, Device, Folder, TwoFactor, UserOrgType, UserOrganization};
use crate::db::schema::{invitations, users};
use crate::db::DbConn;
-use diesel;
use diesel::prelude::*;
use crate::api::EmptyResult;
@@ -274,7 +274,7 @@ pub struct Invitation {
}
impl Invitation {
-pub fn new(email: String) -> Self {
+pub const fn new(email: String) -> Self {
Self { email }
}
@@ -319,10 +319,9 @@ impl Invitation {
}
pub fn take(mail: &str, conn: &DbConn) -> bool {
-CONFIG.invitations_allowed()
-&& match Self::find_by_mail(mail, &conn) {
-Some(invitation) => invitation.delete(&conn).is_ok(),
-None => false,
-}
+match Self::find_by_mail(mail, &conn) {
+Some(invitation) => invitation.delete(&conn).is_ok(),
+None => false,
+}
}
}

View File

@@ -22,6 +22,7 @@ table! {
data -> Text,
favorite -> Bool,
password_history -> Nullable<Text>,
+deleted_at -> Nullable<Datetime>,
}
}
@@ -77,6 +78,16 @@ table! {
}
}
+table! {
+org_policies (uuid) {
+uuid -> Varchar,
+org_uuid -> Varchar,
+atype -> Integer,
+enabled -> Bool,
+data -> Text,
+}
+}
table! {
organizations (uuid) {
uuid -> Varchar,
@@ -155,6 +166,7 @@ joinable!(devices -> users (user_uuid));
joinable!(folders -> users (user_uuid));
joinable!(folders_ciphers -> ciphers (cipher_uuid));
joinable!(folders_ciphers -> folders (folder_uuid));
+joinable!(org_policies -> organizations (org_uuid));
joinable!(twofactor -> users (user_uuid));
joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
@@ -170,6 +182,7 @@ allow_tables_to_appear_in_same_query!(
folders,
folders_ciphers,
invitations,
+org_policies,
organizations,
twofactor,
users,

View File

@@ -22,6 +22,7 @@ table! {
         data -> Text,
         favorite -> Bool,
         password_history -> Nullable<Text>,
+        deleted_at -> Nullable<Timestamp>,
     }
 }
@@ -77,6 +78,16 @@ table! {
     }
 }

+table! {
+    org_policies (uuid) {
+        uuid -> Text,
+        org_uuid -> Text,
+        atype -> Integer,
+        enabled -> Bool,
+        data -> Text,
+    }
+}
+
 table! {
     organizations (uuid) {
         uuid -> Text,
@@ -155,6 +166,7 @@ joinable!(devices -> users (user_uuid));
 joinable!(folders -> users (user_uuid));
 joinable!(folders_ciphers -> ciphers (cipher_uuid));
 joinable!(folders_ciphers -> folders (folder_uuid));
+joinable!(org_policies -> organizations (org_uuid));
 joinable!(twofactor -> users (user_uuid));
 joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
@@ -170,6 +182,7 @@ allow_tables_to_appear_in_same_query!(
     folders,
     folders_ciphers,
     invitations,
+    org_policies,
     organizations,
     twofactor,
     users,

View File

@@ -22,6 +22,7 @@ table! {
         data -> Text,
         favorite -> Bool,
         password_history -> Nullable<Text>,
+        deleted_at -> Nullable<Timestamp>,
     }
 }
@@ -77,6 +78,16 @@ table! {
     }
 }

+table! {
+    org_policies (uuid) {
+        uuid -> Text,
+        org_uuid -> Text,
+        atype -> Integer,
+        enabled -> Bool,
+        data -> Text,
+    }
+}
+
 table! {
     organizations (uuid) {
         uuid -> Text,
@@ -155,6 +166,7 @@ joinable!(devices -> users (user_uuid));
 joinable!(folders -> users (user_uuid));
 joinable!(folders_ciphers -> ciphers (cipher_uuid));
 joinable!(folders_ciphers -> folders (folder_uuid));
+joinable!(org_policies -> organizations (org_uuid));
 joinable!(twofactor -> users (user_uuid));
 joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
@@ -170,6 +182,7 @@ allow_tables_to_appear_in_same_query!(
     folders,
     folders_ciphers,
     invitations,
+    org_policies,
     organizations,
     twofactor,
     users,

View File

@@ -7,7 +7,6 @@ macro_rules! make_error {
     ( $( $name:ident ( $ty:ty ): $src_fn:expr, $usr_msg_fun:expr ),+ $(,)? ) => {
         const BAD_REQUEST: u16 = 400;

-        #[derive(Display)]
         pub enum ErrorKind { $($name( $ty )),+ }
         pub struct Error { message: String, error: ErrorKind, error_code: u16 }
@@ -47,7 +46,12 @@ use std::time::SystemTimeError as TimeErr;
 use u2f::u2ferror::U2fError as U2fErr;
 use yubico::yubicoerror::YubicoError as YubiErr;

+use lettre::address::AddressError as AddrErr;
+use lettre::error::Error as LettreErr;
+use lettre::message::mime::FromStrError as FromStrErr;
+use lettre::transport::smtp::error::Error as SmtpErr;
+
-#[derive(Display, Serialize)]
+#[derive(Serialize)]
 pub struct Empty {}

 // Error struct
@@ -73,6 +77,11 @@ make_error! {
     ReqError(ReqErr): _has_source, _api_error,
     RegexError(RegexErr): _has_source, _api_error,
     YubiError(YubiErr): _has_source, _api_error,
+
+    LetreError(LettreErr): _has_source, _api_error,
+    AddressError(AddrErr): _has_source, _api_error,
+    SmtpError(SmtpErr): _has_source, _api_error,
+    FromStrError(FromStrErr): _has_source, _api_error,
 }

 // This is implemented by hand because NoneError doesn't implement neither Display nor Error
@@ -116,7 +125,7 @@ impl Error {
         self
     }

-    pub fn with_code(mut self, code: u16) -> Self {
+    pub const fn with_code(mut self, code: u16) -> Self {
         self.error_code = code;
         self
     }
@@ -144,7 +153,7 @@ impl<S> MapResult<S> for Option<S> {
     }
 }

-fn _has_source<T>(e: T) -> Option<T> {
+const fn _has_source<T>(e: T) -> Option<T> {
     Some(e)
 }
 fn _no_source<T, S>(_: T) -> Option<S> {
@@ -209,6 +218,18 @@ macro_rules! err {
     }};
 }

+#[macro_export]
+macro_rules! err_discard {
+    ($msg:expr, $data:expr) => {{
+        std::io::copy(&mut $data.open(), &mut std::io::sink()).ok();
+        return Err(crate::error::Error::new($msg, $msg));
+    }};
+    ($usr_msg:expr, $log_value:expr, $data:expr) => {{
+        std::io::copy(&mut $data.open(), &mut std::io::sink()).ok();
+        return Err(crate::error::Error::new($usr_msg, $log_value));
+    }};
+}
+
 #[macro_export]
 macro_rules! err_json {
     ($expr:expr, $log_value:expr) => {{
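The new `err_discard!` macro drains the remaining request body into `std::io::sink()` before bailing out, so a rejected upload doesn't leave unread bytes on the connection. A hypothetical Rocket 0.4 route shape showing how it might be used (the route path, handler name, and the check are made up for illustration, not part of the patch):

```rust
use rocket::Data;

use crate::error::Error;

// Hypothetical route: when a request is rejected early, the body is still consumed via
// err_discard! so the connection is left in a clean state before the error is returned.
#[post("/attachment-check", data = "<data>")]
fn attachment_check(data: Data) -> Result<(), Error> {
    let upload_allowed = false; // stand-in for a real size/permission check

    if !upload_allowed {
        // Drains `data`, then returns "Attachment not allowed" as both user and log message.
        err_discard!("Attachment not allowed", data);
    }

    // ... normal processing of `data` would happen here ...
    Ok(())
}
```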

View File

@@ -1,11 +1,11 @@
-use lettre::smtp::authentication::Credentials;
-use lettre::smtp::authentication::Mechanism as SmtpAuthMechanism;
-use lettre::smtp::ConnectionReuseParameters;
-use lettre::{ClientSecurity, ClientTlsParameters, SmtpClient, SmtpTransport, Transport};
-use lettre_email::{EmailBuilder, MimeMultipartType, PartBuilder};
+use std::str::FromStr;
+
+use lettre::message::{header, Mailbox, Message, MultiPart, SinglePart};
+use lettre::transport::smtp::authentication::{Credentials, Mechanism as SmtpAuthMechanism};
+use lettre::{Address, SmtpTransport, Tls, TlsParameters, Transport};
 use native_tls::{Protocol, TlsConnector};
 use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
-use quoted_printable::encode_to_str;

 use crate::api::EmptyResult;
 use crate::auth::{encode_jwt, generate_delete_claims, generate_invite_claims, generate_verify_email_claims};
@@ -22,42 +22,40 @@ fn mailer() -> SmtpTransport {
             .build()
             .unwrap();

-        let params = ClientTlsParameters::new(host.clone(), tls);
+        let params = TlsParameters::new(host.clone(), tls);

         if CONFIG.smtp_explicit_tls() {
-            ClientSecurity::Wrapper(params)
+            Tls::Wrapper(params)
         } else {
-            ClientSecurity::Required(params)
+            Tls::Required(params)
         }
     } else {
-        ClientSecurity::None
+        Tls::None
     };

     use std::time::Duration;
-    let smtp_client = SmtpClient::new((host.as_str(), CONFIG.smtp_port()), client_security).unwrap();
+    let smtp_client = SmtpTransport::builder(host).port(CONFIG.smtp_port()).tls(client_security);

-    let smtp_client = match (&CONFIG.smtp_username(), &CONFIG.smtp_password()) {
-        (Some(user), Some(pass)) => smtp_client.credentials(Credentials::new(user.clone(), pass.clone())),
+    let smtp_client = match (CONFIG.smtp_username(), CONFIG.smtp_password()) {
+        (Some(user), Some(pass)) => smtp_client.credentials(Credentials::new(user, pass)),
         _ => smtp_client,
     };

-    let smtp_client = match &CONFIG.smtp_auth_mechanism() {
-        Some(auth_mechanism_json) => {
-            let auth_mechanism = serde_json::from_str::<SmtpAuthMechanism>(&auth_mechanism_json);
-            match auth_mechanism {
-                Ok(auth_mechanism) => smtp_client.authentication_mechanism(auth_mechanism),
+    let smtp_client = match CONFIG.smtp_auth_mechanism() {
+        Some(mechanism) => {
+            let correct_mechanism = format!("\"{}\"", crate::util::upcase_first(mechanism.trim_matches('"')));
+
+            // TODO: Allow more than one mechanism
+            match serde_json::from_str::<SmtpAuthMechanism>(&correct_mechanism) {
+                Ok(auth_mechanism) => smtp_client.authentication(vec![auth_mechanism]),
                 _ => panic!("Failure to parse mechanism. Is it proper Json? Eg. `\"Plain\"` not `Plain`"),
             }
         }
         _ => smtp_client,
     };

-    smtp_client
-        .smtp_utf8(true)
-        .timeout(Some(Duration::from_secs(CONFIG.smtp_timeout())))
-        .connection_reuse(ConnectionReuseParameters::NoReuse)
-        .transport()
+    smtp_client.timeout(Some(Duration::from_secs(CONFIG.smtp_timeout()))).build()
 }
 fn get_text(template_name: &'static str, data: serde_json::Value) -> Result<(String, String, String), Error> {
@@ -92,7 +90,7 @@ pub fn send_password_hint(address: &str, hint: Option<String>) -> EmptyResult {
     let (subject, body_html, body_text) = get_text(template_name, json!({ "hint": hint, "url": CONFIG.domain() }))?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
@@ -109,7 +107,7 @@ pub fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
@@ -126,7 +124,7 @@ pub fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_welcome(address: &str) -> EmptyResult {
@@ -137,7 +135,7 @@ pub fn send_welcome(address: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
@@ -153,7 +151,7 @@ pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_invite(
@@ -185,7 +183,7 @@ pub fn send_invite(
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
@@ -198,7 +196,7 @@ pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
@@ -210,7 +208,7 @@ pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
@@ -229,7 +227,7 @@ pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_token(address: &str, token: &str) -> EmptyResult {
@@ -241,7 +239,7 @@ pub fn send_token(address: &str, token: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

 pub fn send_change_email(address: &str, token: &str) -> EmptyResult {
@@ -253,43 +251,62 @@ pub fn send_change_email(address: &str, token: &str) -> EmptyResult {
         }),
     )?;

-    send_email(&address, &subject, &body_html, &body_text)
+    send_email(address, &subject, &body_html, &body_text)
 }

+pub fn send_test(address: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/smtp_test",
+        json!({
+            "url": CONFIG.domain(),
+        }),
+    )?;
+
+    send_email(address, &subject, &body_html, &body_text)
+}
+
 fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) -> EmptyResult {
-    let html = PartBuilder::new()
-        .body(encode_to_str(body_html))
-        .header(("Content-Type", "text/html; charset=utf-8"))
-        .header(("Content-Transfer-Encoding", "quoted-printable"))
-        .build();
-
-    let text = PartBuilder::new()
-        .body(encode_to_str(body_text))
-        .header(("Content-Type", "text/plain; charset=utf-8"))
-        .header(("Content-Transfer-Encoding", "quoted-printable"))
-        .build();
-
-    let alternative = PartBuilder::new()
-        .message_type(MimeMultipartType::Alternative)
-        .child(text)
-        .child(html);
-
-    let email = EmailBuilder::new()
-        .to(address)
-        .from((CONFIG.smtp_from().as_str(), CONFIG.smtp_from_name().as_str()))
-        .subject(subject)
-        .child(alternative.build())
-        .build()
-        .map_err(|e| Error::new("Error building email", e.to_string()))?;
-
-    let mut transport = mailer();
-
-    let result = transport
-        .send(email.into())
-        .map_err(|e| Error::new("Error sending email", e.to_string()))
-        .and(Ok(()));
-
-    // Explicitly close the connection, in case of error
-    transport.close();
-
-    result
+    let address_split: Vec<&str> = address.rsplitn(2, '@').collect();
+    if address_split.len() != 2 {
+        err!("Invalid email address (no @)");
+    }
+
+    let domain_puny = match idna::domain_to_ascii_strict(address_split[0]) {
+        Ok(d) => d,
+        Err(_) => err!("Can't convert email domain to ASCII representation"),
+    };
+
+    let address = format!("{}@{}", address_split[1], domain_puny);
+
+    let data = MultiPart::mixed()
+        .multipart(
+            MultiPart::alternative()
+                .singlepart(
+                    SinglePart::quoted_printable()
+                        .header(header::ContentType("text/plain; charset=utf-8".parse()?))
+                        .body(body_text),
+                )
+                .multipart(
+                    MultiPart::related().singlepart(
+                        SinglePart::quoted_printable()
+                            .header(header::ContentType("text/html; charset=utf-8".parse()?))
+                            .body(body_html),
+                    )
+                    // .singlepart(SinglePart::base64() -- Inline files would go here
+                ),
+        )
+        // .singlepart(SinglePart::base64() -- Attachments would go here
+        ;
+
+    let email = Message::builder()
+        .to(Mailbox::new(None, Address::from_str(&address)?))
+        .from(Mailbox::new(
+            Some(CONFIG.smtp_from_name()),
+            Address::from_str(&CONFIG.smtp_from())?,
+        ))
+        .subject(subject)
+        .multipart(data)?;
+
+    let _ = mailer().send(&email)?;
+    Ok(())
 }
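Because the lettre rewrite drops the old `smtp_utf8(true)` handling, the new `send_email` punycode-encodes the domain half of the recipient address itself. A minimal standalone sketch of that normalization step, assuming the same `idna` crate; the helper name and error strings are illustrative, not part of the patch:

```rust
// Hypothetical helper mirroring the address handling added above: split on the last '@'
// and convert only the domain part to its ASCII (punycode) form with the idna crate.
fn punycode_address(address: &str) -> Result<String, String> {
    let mut parts = address.rsplitn(2, '@');
    let domain = parts.next().unwrap_or_default();
    let local = match parts.next() {
        Some(l) if !l.is_empty() => l,
        _ => return Err("Invalid email address (no @)".into()),
    };

    let domain_puny = idna::domain_to_ascii_strict(domain)
        .map_err(|_| "Can't convert email domain to ASCII representation".to_string())?;

    Ok(format!("{}@{}", local, domain_puny))
}
```

For example, `punycode_address("user@bücher.example")` would yield `user@xn--bcher-kva.example`, which an address parser can then handle without tripping over non-ASCII characters.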

View File

@@ -1,7 +1,7 @@
-#![feature(proc_macro_hygiene, vec_remove_item, try_trait, ip)]
+#![forbid(unsafe_code)]
+#![feature(proc_macro_hygiene, try_trait, ip)]
 #![recursion_limit = "256"]

+#[cfg(feature = "openssl")]
 extern crate openssl;
 #[macro_use]
 extern crate rocket;
@@ -15,18 +15,13 @@ extern crate log;
 extern crate diesel;
 #[macro_use]
 extern crate diesel_migrations;
-#[macro_use]
-extern crate lazy_static;
-#[macro_use]
-extern crate derive_more;
-#[macro_use]
-extern crate num_derive;

 use std::{
     fs::create_dir_all,
     path::Path,
     process::{exit, Command},
     str::FromStr,
+    panic, thread, fmt // For panic logging
 };

 #[macro_use]
@@ -42,7 +37,28 @@ mod util;
 pub use config::CONFIG;
 pub use error::{Error, MapResult};

+use structopt::StructOpt;
+
+// Used for catching panics and log them to file instead of stderr
+use backtrace::Backtrace;
+struct Shim(Backtrace);
+
+impl fmt::Debug for Shim {
+    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+        write!(fmt, "\n{:?}", self.0)
+    }
+}
+
+#[derive(Debug, StructOpt)]
+#[structopt(name = "bitwarden_rs", about = "A Bitwarden API server written in Rust")]
+struct Opt {
+    /// Prints the app version
+    #[structopt(short, long)]
+    version: bool,
+}
+
 fn main() {
+    parse_args();
     launch_info();

     use log::LevelFilter as LF;
@@ -64,18 +80,33 @@ fn main() {
     launch_rocket(extra_debug);
 }

+fn parse_args() {
+    let opt = Opt::from_args();
+    if opt.version {
+        if let Some(version) = option_env!("BWRS_VERSION") {
+            println!("bitwarden_rs {}", version);
+        } else {
+            println!("bitwarden_rs (Version info from Git not present)");
+        }
+        exit(0);
+    }
+}
+
 fn launch_info() {
     println!("/--------------------------------------------------------------------\\");
     println!("|                       Starting Bitwarden_RS                        |");

-    if let Some(version) = option_env!("GIT_VERSION") {
+    if let Some(version) = option_env!("BWRS_VERSION") {
         println!("|{:^68}|", format!("Version {}", version));
     }

     println!("|--------------------------------------------------------------------|");
     println!("| This is an *unofficial* Bitwarden implementation, DO NOT use the   |");
     println!("| official channels to report bugs/features, regardless of client.   |");
-    println!("| Report URL: https://github.com/dani-garcia/bitwarden_rs/issues/new |");
+    println!("| Send usage/configuration questions or feature requests to:         |");
+    println!("|   https://bitwardenrs.discourse.group/                              |");
+    println!("| Report suspected bugs/issues in the software itself at:            |");
+    println!("|   https://github.com/dani-garcia/bitwarden_rs/issues/new           |");
     println!("\\--------------------------------------------------------------------/\n");
 }
@@ -121,6 +152,44 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
     logger.apply()?;

+    // Catch panics and log them instead of default output to StdErr
+    panic::set_hook(Box::new(|info| {
+        let backtrace = Backtrace::new();
+
+        let thread = thread::current();
+        let thread = thread.name().unwrap_or("unnamed");
+
+        let msg = match info.payload().downcast_ref::<&'static str>() {
+            Some(s) => *s,
+            None => match info.payload().downcast_ref::<String>() {
+                Some(s) => &**s,
+                None => "Box<Any>",
+            },
+        };
+
+        match info.location() {
+            Some(location) => {
+                error!(
+                    target: "panic", "thread '{}' panicked at '{}': {}:{}{:?}",
+                    thread,
+                    msg,
+                    location.file(),
+                    location.line(),
+                    Shim(backtrace)
+                );
+            }
+            None => {
+                error!(
+                    target: "panic",
+                    "thread '{}' panicked at '{}'{:?}",
+                    thread,
+                    msg,
+                    Shim(backtrace)
+                )
+            }
+        }
+    }));
+
     Ok(())
 }
@@ -177,7 +246,9 @@ fn check_rsa_keys() {
         info!("JWT keys don't exist, checking if OpenSSL is available...");

         Command::new("openssl").arg("version").status().unwrap_or_else(|_| {
-            info!("Can't create keys because OpenSSL is not available, make sure it's installed and available on the PATH");
+            info!(
+                "Can't create keys because OpenSSL is not available, make sure it's installed and available on the PATH"
+            );
             exit(1);
         });
@@ -250,23 +321,35 @@ mod migrations {
         let connection = crate::db::get_connection().expect("Can't connect to DB");

         use std::io::stdout;

+        // Disable Foreign Key Checks during migration
+        use diesel::RunQueryDsl;
+
+        #[cfg(feature = "postgres")]
+        diesel::sql_query("SET CONSTRAINTS ALL DEFERRED").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
+
+        #[cfg(feature = "mysql")]
+        diesel::sql_query("SET FOREIGN_KEY_CHECKS = 0").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
+
+        #[cfg(feature = "sqlite")]
+        diesel::sql_query("PRAGMA defer_foreign_keys = ON").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
+
         embedded_migrations::run_with_output(&connection, &mut stdout()).expect("Can't run migrations");
     }
 }

 fn launch_rocket(extra_debug: bool) {
-    // Create Rocket object, this stores current log level and sets it's own
+    // Create Rocket object, this stores current log level and sets its own
     let rocket = rocket::ignite();

-    // If addding more base paths here, consider also adding them to
+    let basepath = &CONFIG.domain_path();
+
+    // If adding more paths here, consider also adding them to
     // crate::utils::LOGGED_ROUTES to make sure they appear in the log
     let rocket = rocket
-        .mount("/", api::web_routes())
-        .mount("/api", api::core_routes())
-        .mount("/admin", api::admin_routes())
-        .mount("/identity", api::identity_routes())
-        .mount("/icons", api::icons_routes())
-        .mount("/notifications", api::notifications_routes())
+        .mount(&[basepath, "/"].concat(), api::web_routes())
+        .mount(&[basepath, "/api"].concat(), api::core_routes())
+        .mount(&[basepath, "/admin"].concat(), api::admin_routes())
+        .mount(&[basepath, "/identity"].concat(), api::identity_routes())
+        .mount(&[basepath, "/icons"].concat(), api::icons_routes())
+        .mount(&[basepath, "/notifications"].concat(), api::notifications_routes())
         .manage(db::init_pool())
         .manage(api::start_notification_server())
         .attach(util::AppHeaders())
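The `basepath` change makes every Rocket mount honor a sub-path taken from the configured domain. A tiny sketch of the string handling, assuming `CONFIG.domain_path()` returns the path component of `DOMAIN` (e.g. `/vault` for `https://host.example/vault`, empty for a bare domain); the values below are illustrative:

```rust
fn main() {
    // Assumed value: what CONFIG.domain_path() would return for
    // DOMAIN=https://host.example/vault (illustrative only).
    let basepath = "/vault";

    // The same [basepath, suffix].concat() pattern used for the mounts above.
    assert_eq!([basepath, "/api"].concat(), "/vault/api");
    assert_eq!([basepath, "/admin"].concat(), "/vault/admin");

    // With no sub-path configured the prefix is empty and the old root paths remain.
    let basepath = "";
    assert_eq!([basepath, "/"].concat(), "/");
}
```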

View File

@@ -108,7 +108,9 @@
             "microsoftonline.com",
             "office365.com",
             "microsoftstore.com",
-            "xbox.com"
+            "xbox.com",
+            "azure.com",
+            "windowsazure.com"
         ],
         "Excluded": false
     },
@@ -126,8 +128,7 @@
         "Type": 12,
         "Domains": [
             "overture.com",
-            "yahoo.com",
-            "flickr.com"
+            "yahoo.com"
         ],
         "Excluded": false
     },
@@ -192,7 +193,6 @@
             "amazon.it",
             "amazon.com.au",
             "amazon.co.nz",
-            "amazon.co.jp",
             "amazon.in"
         ],
         "Excluded": false
@@ -775,5 +775,71 @@
             "customercontrolpanel.de"
         ],
         "Excluded": false
},
{
"Type": 76,
"Domains": [
"docusign.com",
"docusign.net"
],
"Excluded": false
},
{
"Type": 77,
"Domains": [
"envato.com",
"themeforest.net",
"codecanyon.net",
"videohive.net",
"audiojungle.net",
"graphicriver.net",
"photodune.net",
"3docean.net"
],
"Excluded": false
},
{
"Type": 78,
"Domains": [
"x10hosting.com",
"x10premium.com"
],
"Excluded": false
},
{
"Type": 79,
"Domains": [
"dnsomatic.com",
"opendns.com",
"umbrella.com"
],
"Excluded": false
},
{
"Type": 80,
"Domains": [
"cagreatamerica.com",
"canadaswonderland.com",
"carowinds.com",
"cedarfair.com",
"cedarpoint.com",
"dorneypark.com",
"kingsdominion.com",
"knotts.com",
"miadventure.com",
"schlitterbahn.com",
"valleyfair.com",
"visitkingsisland.com",
"worldsoffun.com"
],
"Excluded": false
},
{
"Type": 81,
"Domains": [
"ubnt.com",
"ui.com"
],
"Excluded": false
     }
 ]

(Two binary image files changed: one replaced, 7.4 KiB before / 5.8 KiB after; one added, 1.9 KiB. Diffs for two further files suppressed because they are too large.)

View File

@@ -1,67 +1,127 @@
<!DOCTYPE html> <!DOCTYPE html>
<html lang="en"> <html lang="en">
<head> <head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8"> <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="robots" content="noindex,nofollow" />
<title>Bitwarden_rs Admin Panel</title> <title>Bitwarden_rs Admin Panel</title>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" />
<link rel="stylesheet" href="/bwrs_static/bootstrap.css" />
<script src="/bwrs_static/bootstrap-native-v4.js"></script>
<script src="/bwrs_static/md5.js"></script>
<script src="/bwrs_static/identicon.js"></script>
<style> <style>
body { body {
padding-top: 70px; padding-top: 75px;
} }
@media (max-width:768px) {
body {
padding-top: 190px;
}
.container {
max-width: 100%;
}
}
img { img {
width: 48px; width: 48px;
height: 48px; height: 48px;
} }
.navbar img {
height: 24px;
width: auto;
}
</style> </style>
<script src="{{urlpath}}/bwrs_static/md5.js"></script>
<script src="{{urlpath}}/bwrs_static/identicon.js"></script>
<script>
function reload() { window.location.reload(); }
function msg(text, reload_page = true) {
text && alert(text);
reload_page && reload();
}
function identicon(email) {
const data = new Identicon(md5(email), { size: 48, format: 'svg' });
return "data:image/svg+xml;base64," + data.toString();
}
function toggleVis(input_id) {
const elem = document.getElementById(input_id);
const type = elem.getAttribute("type");
if (type === "text") {
elem.setAttribute("type", "password");
} else {
elem.setAttribute("type", "text");
}
return false;
}
function _post(url, successMsg, errMsg, body, reload_page = true) {
fetch(url, {
method: 'POST',
body: body,
mode: "same-origin",
credentials: "same-origin",
headers: { "Content-Type": "application/json" }
}).then( resp => {
if (resp.ok) { msg(successMsg, reload_page); return Promise.reject({error: false}); }
respStatus = resp.status;
respStatusText = resp.statusText;
return resp.text();
}).then( respText => {
try {
const respJson = JSON.parse(respText);
return respJson ? respJson.ErrorModel.Message : "Unknown error";
} catch (e) {
return Promise.reject({body:respStatus + ' - ' + respStatusText, error: true});
}
}).then( apiMsg => {
msg(errMsg + "\n" + apiMsg, reload_page);
}).catch( e => {
if (e.error === false) { return true; }
else { msg(errMsg + "\n" + e.body, reload_page); }
});
}
</script>
</head> </head>
<body class="bg-light"> <body class="bg-light">
<nav class="navbar navbar-expand-sm navbar-dark bg-dark fixed-top shadow"> <nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top">
<a class="navbar-brand" href="#">Bitwarden_rs</a> <div class="container">
<div class="navbar-collapse"> <a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Bitwarden_rs Admin</a>
<ul class="navbar-nav"> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse"
<li class="nav-item active"> aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation">
<a class="nav-link" href="/admin">Admin Panel</a> <span class="navbar-toggler-icon"></span>
</li> </button>
<li class="nav-item"> <div class="collapse navbar-collapse" id="navbarCollapse">
<a class="nav-link" href="/">Vault</a> <ul class="navbar-nav mr-auto">
</li> {{#if logged_in}}
</ul> <li class="nav-item">
<a class="nav-link" href="{{urlpath}}/admin">Settings</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{urlpath}}/admin/users/overview">Users</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{urlpath}}/admin/organizations/overview">Organizations</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{urlpath}}/admin/diagnostics">Diagnostics</a>
</li>
{{/if}}
<li class="nav-item">
<a class="nav-link" href="{{urlpath}}/">Vault</a>
</li>
</ul>
{{#if logged_in}}
<a class="btn btn-sm btn-secondary" href="{{urlpath}}/admin/logout">Log Out</a>
{{/if}}
</div>
</div> </div>
<ul class="navbar-nav">
{{#if version}}
<li class="nav-item">
<span class="navbar-text mr-2">Version: {{version}}</span>
</li>
{{/if}}
{{#if logged_in}}
<li class="nav-item">
<a class="nav-link" href="/admin/logout">Log Out</a>
</li>
{{/if}}
</ul>
</nav> </nav>
{{> (page_content) }} {{> (page_content) }}
</body>
<!-- This script needs to be at the bottom, else it will fail! -->
<script>
// get current URL path and assign 'active' class to the correct nav-item
(function () {
var pathname = window.location.pathname;
if (pathname === "") return;
var navItem = document.querySelectorAll('.navbar-nav .nav-item a[href="'+pathname+'"]');
if (navItem.length === 1) {
navItem[0].parentElement.className = navItem[0].parentElement.className + ' active';
}
})();
</script>
<!-- This script needs to be at the bottom, else it will fail! -->
<script src="{{urlpath}}/bwrs_static/bootstrap-native-v4.js"></script>
</body>
</html> </html>

View File

@@ -0,0 +1,150 @@
<main class="container">
<div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-2">Diagnostics</h6>
<h3>Version</h3>
<div class="row">
<div class="col-md">
<dl class="row">
<dt class="col-sm-5">Server Installed
<span class="badge badge-success d-none" id="server-success" title="Latest version is installed.">Ok</span>
<span class="badge badge-warning d-none" id="server-warning" title="There seems to be an update available.">Update</span>
<span class="badge badge-info d-none" id="server-branch" title="This is a branched version.">Branched</span>
</dt>
<dd class="col-sm-7">
<span id="server-installed">{{version}}</span>
</dd>
<dt class="col-sm-5">Server Latest
<span class="badge badge-danger d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span>
</dd>
<dt class="col-sm-5">Web Installed
<span class="badge badge-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge badge-warning d-none" id="web-warning" title="There seems to be an update available.">Update</span>
</dt>
<dd class="col-sm-7">
<span id="web-installed">{{diagnostics.web_vault_version}}</span>
</dd>
<dt class="col-sm-5">Web Latest
<span class="badge badge-danger d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="web-latest">{{diagnostics.latest_web_build}}</span>
</dd>
</dl>
</div>
</div>
<h3>Checks</h3>
<div class="row">
<div class="col-md">
<dl class="row">
<dt class="col-sm-5">DNS (github.com)
<span class="badge badge-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge badge-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
</dt>
<dd class="col-sm-7">
<span id="dns-resolved">{{diagnostics.dns_resolved}}</span>
</dd>
<dt class="col-sm-5">Date & Time (UTC)
<span class="badge badge-success d-none" id="time-success" title="Time offsets seem to be correct.">Ok</span>
<span class="badge badge-danger d-none" id="time-warning" title="Time offsets are too mouch at drift.">Error</span>
</dt>
<dd class="col-sm-7">
<span id="time-server" class="d-block"><b>Server:</b> <span id="time-server-string">{{diagnostics.server_time}}</span></span>
<span id="time-browser" class="d-block"><b>Browser:</b> <span id="time-browser-string"></span></span>
</dd>
</dl>
</div>
</div>
</div>
</main>
<script>
(() => {
const d = new Date();
const year = d.getUTCFullYear();
const month = String(d.getUTCMonth()+1).padStart(2, '0');
const day = String(d.getUTCDate()).padStart(2, '0');
const hour = String(d.getUTCHours()).padStart(2, '0');
const minute = String(d.getUTCMinutes()).padStart(2, '0');
const seconds = String(d.getUTCSeconds()).padStart(2, '0');
const browserUTC = year + '-' + month + '-' + day + ' ' + hour + ':' + minute + ':' + seconds;
document.getElementById("time-browser-string").innerText = browserUTC;
const serverUTC = document.getElementById("time-server-string").innerText;
const timeDrift = (Date.parse(serverUTC) - Date.parse(browserUTC)) / 1000;
if (timeDrift > 30 || timeDrift < -30) {
document.getElementById('time-warning').classList.remove('d-none');
} else {
document.getElementById('time-success').classList.remove('d-none');
}
// Check if the output is a valid IP
const isValidIp = value => (/^(?:(?:^|\.)(?:2(?:5[0-5]|[0-4]\d)|1?\d?\d)){4}$/.test(value) ? true : false);
if (isValidIp(document.getElementById('dns-resolved').innerText)) {
document.getElementById('dns-success').classList.remove('d-none');
} else {
document.getElementById('dns-warning').classList.remove('d-none');
}
let serverInstalled = document.getElementById('server-installed').innerText;
let serverLatest = document.getElementById('server-latest').innerText;
let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', '');
if (serverInstalled.indexOf('-') !== -1 && serverLatest !== '-' && serverLatestCommit !== '-') {
document.getElementById('server-latest-commit').classList.remove('d-none');
}
const webInstalled = document.getElementById('web-installed').innerText;
const webLatest = document.getElementById('web-latest').innerText;
checkVersions('server', serverInstalled, serverLatest, serverLatestCommit);
checkVersions('web', webInstalled, webLatest);
function checkVersions(platform, installed, latest, commit=null) {
if (installed === '-' || latest === '-') {
document.getElementById(platform + '-failed').classList.remove('d-none');
return;
}
// Only check basic versions, no commit revisions
if (commit === null || installed.indexOf('-') === -1) {
if (installed !== latest) {
document.getElementById(platform + '-warning').classList.remove('d-none');
} else {
document.getElementById(platform + '-success').classList.remove('d-none');
}
} else {
// Check if this is a branched version.
const branchRegex = /(?:\s)\((.*?)\)/;
const branchMatch = installed.match(branchRegex);
if (branchMatch !== null) {
document.getElementById(platform + '-branch').classList.remove('d-none');
}
// This will remove branch info and check if there is a commit hash
const installedRegex = /(\d+\.\d+\.\d+)-(\w+)/;
const instMatch = installed.match(installedRegex);
// It could be that a new tagged version has the same commit hash.
// In this case the version is the same but only the number is different
if (instMatch !== null) {
if (instMatch[2] === commit) {
// The commit hashes are the same, so latest version is installed
document.getElementById(platform + '-success').classList.remove('d-none');
return;
}
}
if (installed === latest) {
document.getElementById(platform + '-success').classList.remove('d-none');
} else {
document.getElementById(platform + '-warning').classList.remove('d-none');
}
}
}
})();
</script>

View File

@@ -0,0 +1,51 @@
<main class="container">
<div id="organizations-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-0">Organizations</h6>
<div class="table-responsive-xl small">
<table class="table table-sm table-striped table-hover">
<thead>
<tr>
<th style="width: 24px;" colspan="2">Organization</th>
<th>Users</th>
<th>Items</th>
<th>Attachments</th>
</tr>
</thead>
<tbody>
{{#each organizations}}
<tr>
<td><img class="rounded identicon" data-src="{{Id}}"></td>
<td>
<strong>{{Name}}</strong>
<span class="mr-2">({{BillingEmail}})</span>
<span class="d-block">
<span class="badge badge-success">{{Id}}</span>
</span>
</td>
<td>
<span class="d-block">{{user_count}}</span>
</td>
<td>
<span class="d-block">{{cipher_count}}</span>
</td>
<td>
<span class="d-block"><strong>Amount:</strong> {{attachment_count}}</span>
{{#if attachment_count}}
<span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
{{/if}}
</td>
</tr>
{{/each}}
</tbody>
</table>
</div>
</div>
</main>
<script>
document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src);
});
</script>

View File

@@ -1,68 +1,4 @@
<main class="container"> <main class="container">
<div id="users-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-0">Registered Users</h6>
<div id="users-list">
{{#each users}}
<div class="media pt-3">
<img class="mr-2 rounded identicon" data-src="{{Email}}">
<div class="media-body pb-3 mb-0 small border-bottom">
<div class="row justify-content-between">
<div class="col">
<strong>{{Name}}</strong>
{{#if TwoFactorEnabled}}
<span class="badge badge-success ml-2">2FA</span>
{{/if}}
{{#case _Status 1}}
<span class="badge badge-warning ml-2">Invited</span>
{{/case}}
<span class="d-block">{{Email}}</span>
</div>
<div class="col">
<strong> Organizations:</strong>
<span class="d-block">
{{#each Organizations}}
<span class="badge badge-primary" data-orgtype="{{Type}}">{{Name}}</span>
{{/each}}
</span>
</div>
<div style="flex: 0 0 300px; font-size: 90%; text-align: right; padding-right: 15px">
{{#if TwoFactorEnabled}}
<a class="mr-2" href="#" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</a>
{{/if}}
<a class="mr-2" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
<a class="mr-2" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
</div>
</div>
</div>
</div>
{{/each}}
</div>
<div class="mt-3">
<button type="button" class="btn btn-sm btn-link" onclick="updateRevisions();"
title="Force all clients to fetch new data next time they connect. Useful after restoring a backup to remove any stale data.">
Force clients to resync
</button>
<button type="button" class="btn btn-sm btn-primary float-right" onclick="reload();">Reload users</button>
</div>
</div>
<div id="invite-form-block" class="align-items-center p-3 mb-3 text-white-50 bg-secondary rounded shadow">
<div>
<h6 class="mb-0 text-white">Invite User</h6>
<small>Email:</small>
<form class="form-inline" id="invite-form" onsubmit="inviteUser(); return false;">
<input type="email" class="form-control w-50 mr-2" id="email-invite" placeholder="Enter email">
<button type="submit" class="btn btn-primary">Invite</button>
</form>
</div>
</div>
<div id="config-block" class="align-items-center p-3 mb-3 bg-secondary rounded shadow"> <div id="config-block" class="align-items-center p-3 mb-3 bg-secondary rounded shadow">
<div> <div>
<h6 class="text-white mb-3">Configuration</h6> <h6 class="text-white mb-3">Configuration</h6>
@@ -71,6 +7,7 @@
them to avoid confusion. This does not apply to the read-only section, which can only be set through the them to avoid confusion. This does not apply to the read-only section, which can only be set through the
environment. environment.
</div> </div>
<form class="form accordion" id="config-form" onsubmit="saveConfig(); return false;"> <form class="form accordion" id="config-form" onsubmit="saveConfig(); return false;">
{{#each config}} {{#each config}}
{{#if groupdoc}} {{#if groupdoc}}
@@ -110,6 +47,17 @@
</div> </div>
{{/if}} {{/if}}
{{/each}} {{/each}}
{{#case group "smtp"}}
<div class="form-group row pt-3 border-top" title="Send a test email to given email address">
<label for="smtp-test-email" class="col-sm-3 col-form-label">Test SMTP</label>
<div class="col-sm-8 input-group">
<input class="form-control" id="smtp-test-email" type="email" placeholder="Enter test email">
<div class="input-group-append">
<button type="button" class="btn btn-outline-primary" onclick="smtpTest(); return false;">Send test email</button>
</div>
</div>
</div>
{{/case}}
</div> </div>
</div> </div>
{{/if}} {{/if}}
@@ -190,74 +138,12 @@
</style> </style>
<script> <script>
function reload() { window.location.reload(); } function smtpTest() {
function msg(text) { text && alert(text); reload(); } test_email = document.getElementById("smtp-test-email");
function identicon(email) { data = JSON.stringify({ "email": test_email.value });
const data = new Identicon(md5(email), { size: 48, format: 'svg' }); _post("{{urlpath}}/admin/test/smtp/",
return "data:image/svg+xml;base64," + data.toString(); "SMTP Test email sent correctly",
} "Error sending SMTP test email", data, false);
function toggleVis(input_id) {
const elem = document.getElementById(input_id);
const type = elem.getAttribute("type");
if (type === "text") {
elem.setAttribute("type", "password");
} else {
elem.setAttribute("type", "text");
}
return false;
}
function _post(url, successMsg, errMsg, body) {
fetch(url, {
method: 'POST',
body: body,
mode: "same-origin",
credentials: "same-origin",
headers: { "Content-Type": "application/json" }
}).then(e => {
if (e.ok) { return msg(successMsg); }
e.json().then(json => {
const msg = json ? json.ErrorModel.Message : "Unknown error";
msg(errMsg + ": " + msg);
});
}).catch(e => { msg(errMsg + ": Unknown error") });
}
function deleteUser(id, mail) {
var input_mail = prompt("To delete user '" + mail + "', please type the email below")
if (input_mail != null) {
if (input_mail == mail) {
_post("/admin/users/" + id + "/delete",
"User deleted correctly",
"Error deleting user");
} else {
alert("Wrong email, please try again")
}
}
return false;
}
function remove2fa(id) {
_post("/admin/users/" + id + "/remove-2fa",
"2FA removed correctly",
"Error removing 2FA");
return false;
}
function deauthUser(id) {
_post("/admin/users/" + id + "/deauth",
"Sessions deauthorized correctly",
"Error deauthorizing sessions");
return false;
}
function updateRevisions() {
_post("/admin/users/update_revision",
"Success, clients will sync next time they connect",
"Error forcing clients to sync");
return false;
}
function inviteUser() {
inv = document.getElementById("email-invite");
data = JSON.stringify({ "email": inv.value });
inv.value = "";
_post("/admin/invite/", "User invited correctly",
"Error inviting user", data);
return false; return false;
} }
function getFormData() { function getFormData() {
@@ -278,7 +164,7 @@
} }
function saveConfig() { function saveConfig() {
data = JSON.stringify(getFormData()); data = JSON.stringify(getFormData());
_post("/admin/config/", "Config saved correctly", _post("{{urlpath}}/admin/config/", "Config saved correctly",
"Error saving config", data); "Error saving config", data);
return false; return false;
} }
@@ -286,7 +172,7 @@
var input = prompt("This will remove all user configurations, and restore the defaults and the " + var input = prompt("This will remove all user configurations, and restore the defaults and the " +
"values set by the environment. This operation could be dangerous. Type 'DELETE' to proceed:"); "values set by the environment. This operation could be dangerous. Type 'DELETE' to proceed:");
if (input === "DELETE") { if (input === "DELETE") {
_post("/admin/config/delete", _post("{{urlpath}}/admin/config/delete",
"Config deleted correctly", "Config deleted correctly",
"Error deleting config"); "Error deleting config");
} else { } else {
@@ -296,9 +182,9 @@
return false; return false;
} }
function backupDatabase() { function backupDatabase() {
_post("/admin/config/backup_db", _post("{{urlpath}}/admin/config/backup_db",
"Backup created successfully", "Backup created successfully",
"Error creating backup"); "Error creating backup", null, false);
return false; return false;
} }
function masterCheck(check_id, inputs_query) { function masterCheck(check_id, inputs_query) {
@@ -314,26 +200,9 @@
onChange(); // Trigger the event initially onChange(); // Trigger the event initially
checkbox.addEventListener("change", onChange); checkbox.addEventListener("change", onChange);
} }
let OrgTypes = { // These are formatted because otherwise the
"0": { "name": "Owner", "color": "orange" },
"1": { "name": "Admin", "color": "blueviolet" },
"2": { "name": "User", "color": "blue" },
"3": { "name": "Manager", "color": "green" },
};
document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src);
});
document.querySelectorAll("[data-orgtype]").forEach(function (e, i) {
let orgtype = OrgTypes[e.dataset.orgtype];
e.style.backgroundColor = orgtype.color;
e.title = orgtype.name;
});
// These are formatted because otherwise the
// VSCode formatter breaks But they still work // VSCode formatter breaks But they still work
// {{#each config}} {{#if grouptoggle}} // {{#each config}} {{#if grouptoggle}}
masterCheck("input_{{grouptoggle}}", "#g_{{group}} input"); masterCheck("input_{{grouptoggle}}", "#g_{{group}} input");
// {{/if}} {{/each}} // {{/if}} {{/each}}
</script> </script>

View File

@@ -0,0 +1,143 @@
<main class="container">
<div id="users-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-0">Registered Users</h6>
<div class="table-responsive-xl small">
<table class="table table-sm table-striped table-hover">
<thead>
<tr>
<th style="width: 24px;">User</th>
<th></th>
<th style="width:60px; min-width: 60px;">Items</th>
<th>Attachments</th>
<th style="min-width: 140px;">Organizations</th>
<th style="width: 140px; min-width: 140px;">Actions</th>
</tr>
</thead>
<tbody>
{{#each users}}
<tr>
<td><img class="mr-2 rounded identicon" data-src="{{Email}}"></td>
<td>
<strong>{{Name}}</strong>
<span class="d-block">{{Email}}</span>
<span class="d-block">
{{#if TwoFactorEnabled}}
<span class="badge badge-success mr-2" title="2FA is enabled">2FA</span>
{{/if}}
{{#case _Status 1}}
<span class="badge badge-warning mr-2" title="User is invited">Invited</span>
{{/case}}
{{#if EmailVerified}}
<span class="badge badge-success mr-2" title="Email has been verified">Verified</span>
{{/if}}
</span>
</td>
<td>
<span class="d-block">{{cipher_count}}</span>
</td>
<td>
<span class="d-block"><strong>Amount:</strong> {{attachment_count}}</span>
{{#if attachment_count}}
<span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
{{/if}}
</td>
<td>
{{#each Organizations}}
<span class="badge badge-primary" data-orgtype="{{Type}}">{{Name}}</span>
{{/each}}
</td>
<td style="font-size: 90%; text-align: right; padding-right: 15px">
{{#if TwoFactorEnabled}}
<a class="d-block" href="#" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</a>
{{/if}}
<a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
<a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
</td>
</tr>
{{/each}}
</tbody>
</table>
</div>
<div class="mt-3">
<button type="button" class="btn btn-sm btn-danger" onclick="updateRevisions();"
title="Force all clients to fetch new data next time they connect. Useful after restoring a backup to remove any stale data.">
Force clients to resync
</button>
<button type="button" class="btn btn-sm btn-primary float-right" onclick="reload();">Reload users</button>
</div>
</div>
<div id="invite-form-block" class="align-items-center p-3 mb-3 text-white-50 bg-secondary rounded shadow">
<div>
<h6 class="mb-0 text-white">Invite User</h6>
<small>Email:</small>
<form class="form-inline" id="invite-form" onsubmit="inviteUser(); return false;">
<input type="email" class="form-control w-50 mr-2" id="email-invite" placeholder="Enter email">
<button type="submit" class="btn btn-primary">Invite</button>
</form>
</div>
</div>
</main>
<script>
function deleteUser(id, mail) {
var input_mail = prompt("To delete user '" + mail + "', please type the email below")
if (input_mail != null) {
if (input_mail == mail) {
_post("{{urlpath}}/admin/users/" + id + "/delete",
"User deleted correctly",
"Error deleting user");
} else {
alert("Wrong email, please try again")
}
}
return false;
}
function remove2fa(id) {
_post("{{urlpath}}/admin/users/" + id + "/remove-2fa",
"2FA removed correctly",
"Error removing 2FA");
return false;
}
function deauthUser(id) {
_post("{{urlpath}}/admin/users/" + id + "/deauth",
"Sessions deauthorized correctly",
"Error deauthorizing sessions");
return false;
}
function updateRevisions() {
_post("{{urlpath}}/admin/users/update_revision",
"Success, clients will sync next time they connect",
"Error forcing clients to sync");
return false;
}
function inviteUser() {
inv = document.getElementById("email-invite");
data = JSON.stringify({ "email": inv.value });
inv.value = "";
_post("{{urlpath}}/admin/invite/", "User invited correctly",
"Error inviting user", data);
return false;
}
let OrgTypes = {
"0": { "name": "Owner", "color": "orange" },
"1": { "name": "Admin", "color": "blueviolet" },
"2": { "name": "User", "color": "blue" },
"3": { "name": "Manager", "color": "green" },
};
document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src);
});
document.querySelectorAll("[data-orgtype]").forEach(function (e, i) {
let orgtype = OrgTypes[e.dataset.orgtype];
e.style.backgroundColor = orgtype.color;
e.title = orgtype.name;
});
</script>

View File

@@ -87,7 +87,7 @@ Your Email Change
</tr> </tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top"> <td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top"> <td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white"> <table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">

View File

@@ -87,7 +87,7 @@ Delete Your Account
</tr> </tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top"> <td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top"> <td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white"> <table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">


@@ -1,8 +1,8 @@
-Invitation accepted
+Invitation to {{{org_name}}} accepted
<!---------------->
<html>
<p>
Your invitation for <b>{{email}}</b> to join <b>{{org_name}}</b> was accepted.
-Please <a href="{{url}}">log in</a> to the bitwarden_rs server and confirm them from the organization management page.
+Please <a href="{{url}}/">log in</a> to the bitwarden_rs server and confirm them from the organization management page.
</p>
</html>
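
These mail templates use Handlebars syntax, and the subject-line change from {{org_name}} to {{{org_name}}} switches from the escaping to the non-escaping form: {{expr}} HTML-escapes the value, while {{{expr}}} inserts it verbatim, which is what a plain-text subject needs. A minimal sketch with a hypothetical organization name, not taken from this diff:

org_name = Smith & Sons
Subject rendered with {{org_name}}:   Invitation to Smith &amp; Sons accepted
Subject rendered with {{{org_name}}}: Invitation to Smith & Sons accepted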


@@ -1,4 +1,4 @@
-Invitation accepted
+Invitation to {{{org_name}}} accepted
<!---------------->
<html xmlns="http://www.w3.org/1999/xhtml" xmlns="http://www.w3.org/1999/xhtml" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<head>
@@ -87,7 +87,7 @@ Invitation accepted
</tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">
@@ -101,7 +101,7 @@ Invitation accepted
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
-Please <a href="{{url}}">log in</a> to the bitwarden_rs server and confirm them from the organization management page.
+Please <a href="{{url}}/">log in</a> to the bitwarden_rs server and confirm them from the organization management page.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">


@@ -1,8 +1,8 @@
-Invitation to {{org_name}} confirmed
+Invitation to {{{org_name}}} confirmed
<!---------------->
<html>
<p>
Your invitation to join <b>{{org_name}}</b> was confirmed.
-It will now appear under the Organizations the next time you <a href="{{url}}">log in</a> to the web vault.
+It will now appear under the Organizations the next time you <a href="{{url}}/">log in</a> to the web vault.
</p>
</html>


@@ -1,4 +1,4 @@
-Invitation to {{org_name}} confirmed
+Invitation to {{{org_name}}} confirmed
<!---------------->
<html xmlns="http://www.w3.org/1999/xhtml" xmlns="http://www.w3.org/1999/xhtml" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<head>
@@ -87,7 +87,7 @@ Invitation to {{org_name}} confirmed
</tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">
@@ -102,7 +102,7 @@ Invitation to {{org_name}} confirmed
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;"> <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top"> <td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
Any collections and logins being shared with you by this organization will now appear in your Bitwarden vault. <br> Any collections and logins being shared with you by this organization will now appear in your Bitwarden vault. <br>
<a href="{{url}}">Log in</a> <a href="{{url}}/">Log in</a>
</td> </td>
</tr> </tr>
</table> </table>


@@ -1,4 +1,4 @@
-New Device Logged In From {{device}}
+New Device Logged In From {{{device}}}
<!---------------->
<html>
<p>
@@ -9,6 +9,6 @@ New Device Logged In From {{device}}
Device Type: {{device}}
You can deauthorize all devices that have access to your account from the
-<a href="{{url}}">web vault</a> under Settings > My Account > Deauthorize Sessions.
+<a href="{{url}}/">web vault</a> under Settings > My Account > Deauthorize Sessions.
</p>
</html>


@@ -1,4 +1,4 @@
-New Device Logged In From {{device}}
+New Device Logged In From {{{device}}}
<!---------------->
<html xmlns="http://www.w3.org/1999/xhtml" xmlns="http://www.w3.org/1999/xhtml" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<head>
@@ -87,7 +87,7 @@ New Device Logged In From {{device}}
</tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">
@@ -116,7 +116,7 @@ New Device Logged In From {{device}}
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
-You can deauthorize all devices that have access to your account from the <a href="{{url}}">web vault</a> under Settings > My Account > Deauthorize Sessions.
+You can deauthorize all devices that have access to your account from the <a href="{{url}}/">web vault</a> under Settings > My Account > Deauthorize Sessions.
</td>
</tr>
</table>


@@ -87,7 +87,7 @@ Sorry, you have no password hint...
</tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">


@@ -3,7 +3,7 @@ Your master password hint
You (or someone) recently requested your master password hint.
Your hint is: "{{hint}}"
-Log in: <a href="{{url}}">Web Vault</a>
+Log in: <a href="{{url}}/">Web Vault</a>
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted. If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted.
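
The link changes in these templates append an explicit trailing slash to {{url}}. Assuming {{url}} expands to the server's configured base URL without a trailing slash (the value below is hypothetical), the rendered anchors change like this:

url = https://vault.example.com
before: <a href="https://vault.example.com">Web Vault</a>
after:  <a href="https://vault.example.com/">Web Vault</a>

Both point at the web-vault root; the slash simply makes the rendered URL explicit and consistent with links such as {{url}}/#/recover-delete above.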


@@ -87,7 +87,7 @@ Your master password hint
</tr>
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="container" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;" valign="top">
-<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: 600px;">
+<table cellpadding="0" cellspacing="0" class="container-table" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both !important; color: #333; display: block !important; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto; max-width: 600px !important; width: max-content;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="content" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; display: block; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 0; line-height: 0; margin: 0 auto; max-width: 600px; padding-bottom: 20px;" valign="top">
<table class="main" width="100%" cellpadding="0" cellspacing="0" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; margin: 0; -webkit-text-size-adjust: none; border: 1px solid #e9e9e9; border-radius: 3px;" bgcolor="white">
@@ -102,7 +102,7 @@ Your master password hint
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;"> <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top"> <td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Your hint is: "{{hint}}"<br style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;" /> Your hint is: "{{hint}}"<br style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;" />
Log in: <a href="{{url}}">Web Vault</a> Log in: <a href="{{url}}/">Web Vault</a>
</td> </td>
</tr> </tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;"> <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">


@@ -1,4 +1,4 @@
-Join {{org_name}}
+Join {{{org_name}}}
<!---------------->
<html>
<p>

Some files were not shown because too many files have changed in this diff.