Compare commits

...

89 Commits

Author SHA1 Message Date
Daniel García
9323c57f49 Remove debug print 2021-02-07 00:22:39 +01:00
Daniel García
85e3c73525 Basic experimental ldap import support with the official directory connector 2021-02-06 20:15:42 +01:00
Daniel García
a74bc2e58f Update web vault to 2.18.1b 2021-02-06 16:49:49 +01:00
Daniel García
0680638933 Update dependencies 2021-02-06 16:49:28 +01:00
Daniel García
46d31ee5f7 Merge pull request #1356 from BlackDex/fix-config-bug
Fixed small bug in validation
2021-02-03 23:50:49 +01:00
BlackDex
e794b397d3 Fixed small bug in validation 2021-02-03 23:47:48 +01:00
Daniel García
d41350050b Merge pull request #1353 from BlackDex/admin-interface
Extra features for admin interface.
2021-02-03 22:50:15 +01:00
Mathijs van Veluw
4cd5b06b7f Merge branch 'master' into admin-interface 2021-02-03 22:41:59 +01:00
Daniel García
cd768439d2 Merge pull request #1329 from BlackDex/misc-updates
JSON Response updates and small fixes
2021-02-03 22:37:59 +01:00
Mathijs van Veluw
9e5fd2d576 Merge branch 'master' into admin-interface 2021-02-03 22:22:33 +01:00
Mathijs van Veluw
ecb46f591c Merge branch 'master' into misc-updates 2021-02-03 22:22:06 +01:00
Daniel García
d62d53aa8e Merge pull request #1341 from BlackDex/dep-update
Updated dependencies and small mail fixes
2021-02-03 22:19:18 +01:00
Daniel García
2c515ab13c Merge pull request #1355 from jjlin/global-domains
Sync global_domains.json with upstream
2021-02-03 22:17:57 +01:00
Jeremy Lin
83d556ff0c Sync global_domains.json to bitwarden/server@cf84453 (Disney, Sony) 2021-02-03 12:22:03 -08:00
Jeremy Lin
678d313836 global_domains.py: allow syncing to a specific Git ref 2021-02-03 12:20:44 -08:00
BlackDex
705d840ea3 Extra features for admin interface.
- Able to modify the user type per organization
- Able to remove a whole organization
- Added podman detection
- Only show web-vault update when not running a containerized
  bitwarden_rs

Solves #936
2021-02-03 18:43:54 +01:00
BlackDex
7dff8c01dd JSON Response updates and small fixes
Updated several json response models.
Also fixed a few small bugs.

ciphers.rs:
  - post_ciphers_create:
    * Prevent cipher creation in an organization without a collection.
  - update_cipher_from_data:
    * ~~Fixed removal of user_uuid, which prevented a user-owned shared cipher from being editable when set to read-only.~~
    * Clean up the json_data by removing the `Response` key/values from several objects.
  - delete_all:
    * Do not delete all Collections during the Purge of an Organization (same as upstream).

cipher.rs:
  - Cipher::to_json:
    * Updated json response to match upstream.
    * Return an empty json object if there is no type_data, instead of values that should not be set for the type_data.

organizations.rs:
  * Added two new endpoints to prevent JavaScript errors regarding tax.

organization.rs:
  - Organization::to_json:
    * Updated response model to match upstream
  - UserOrganization::to_json:
    * Updated response model to match upstream

collection.rs:
  - Collection::{to_json, to_json_details}:
    * Updated the json response model, and added a detailed version used during the sync
  - hide_passwords_for_user:
    * Added this function to return whether passwords should be hidden for the user at the specific collection (used by `to_json_details`); a sketch follows below.

Update 1: Some small changes after comments from @jjlin.
Update 2: Fixed vault purge by user to make sure the cipher is not part of an organization.

Resolves #971
Closes #990, Closes #991
2021-01-31 21:46:37 +01:00
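A minimal self-contained sketch of the hide_passwords_for_user idea (the types here are illustrative only; the real implementation queries the database through Diesel):

```rust
// Illustrative types, not the real schema.
struct CollectionUser {
    user_uuid: String,
    hide_passwords: bool,
}

struct Collection {
    users: Vec<CollectionUser>,
}

impl Collection {
    // Returns whether passwords should be hidden for this user in this
    // collection, as consumed by the detailed sync response (to_json_details).
    fn hide_passwords_for_user(&self, user_uuid: &str) -> bool {
        self.users
            .iter()
            .find(|cu| cu.user_uuid == user_uuid)
            .map(|cu| cu.hide_passwords)
            .unwrap_or(true) // assumption: no explicit access record means hide
    }
}
```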
BlackDex
5860679624 Updated dependencies and small mail fixes
- Updated rust nightly
- Updated dependencies
- Removed unicode support for regex (fewer dependencies)
- Fixed dependency and nightly changes/deprecations
- Some mail changes to trigger fewer spam points
2021-01-31 20:07:42 +01:00
Daniel García
4628e4519d Update web vault to 2.18.1 2021-01-27 16:08:11 +01:00
Mathijs van Veluw
b884fd20a1 Merge pull request #1333 from jjlin/fix-manager-access
Fix collection access issues for owner/admin users
2021-01-27 08:07:20 +01:00
Jeremy Lin
67c657003d Fix collection access issues for owner/admin users
The implementation of the `Manager` user type (#1242) introduced a regression
whereby owner/admin users are incorrectly denied access to certain collection
APIs if their access control for collections isn't set to "access all".

Owner/admin users should always have full access to collection APIs, per
https://bitwarden.com/help/article/user-types-access-control/#access-control:

> Assigning Admins and Owners to Collections via Access Control will only
> impact which Collections appear readily in the Filters section of their
> Vault. Admins and Owners will always be able to access "un-assigned"
> Collections via the Organization view.
2021-01-26 22:35:09 -08:00
Daniel García
580c1bbc7d Update web vault to 2.18.0 2021-01-25 12:27:57 +01:00
Daniel García
2b6383d243 Merge pull request #1327 from jjlin/dockerfile-cleanup
Dockerfile.j2: clean up web-vault section
2021-01-25 12:24:04 +01:00
Daniel García
f27455a26f Merge pull request #1328 from jjlin/restore-rev-date
Add cipher response to restore operations
2021-01-25 12:23:00 +01:00
Jeremy Lin
1d4f900e48 Add cipher response to restore operations
This matches changes in the upstream Bitwarden server and clients.

Upstream PR: https://github.com/bitwarden/server/pull/1072
2021-01-24 21:57:32 -08:00
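In essence (a sketch with illustrative types, not the actual endpoint code): the restore handlers now build and return the cipher's JSON instead of an empty body, so clients pick up the new revision date immediately.

```rust
use serde_json::{json, Value};

// Illustrative cipher type; the real model lives elsewhere in the codebase.
struct Cipher {
    uuid: String,
    deleted_at: Option<String>,
    revision_date: String,
}

// Un-deletes the cipher and returns its JSON representation, instead of
// the empty response the endpoint used to send.
fn restore_cipher(cipher: &mut Cipher, now: String) -> Value {
    cipher.deleted_at = None;
    cipher.revision_date = now;
    json!({
        "Id": cipher.uuid,
        "DeletedDate": cipher.deleted_at,
        "RevisionDate": cipher.revision_date,
    })
}
```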
Jeremy Lin
c5ca588a6f Dockerfile.j2: clean up web-vault section 2021-01-24 17:26:25 -08:00
Daniel García
06888251e3 Merge pull request #1326 from jjlin/personal-ownership
Add support for the Personal Ownership policy
2021-01-24 14:09:12 +01:00
Daniel García
1a6e4cf4e4 Merge pull request #1321 from mkilchhofer/feature/improve_shutdown_behavior
Improve shutdown behavior (on Kubernetes, and allow CTRL+C)
2021-01-24 14:06:15 +01:00
Jeremy Lin
9f86196a9d Add support for the Personal Ownership policy
Upstream refs:

* https://github.com/bitwarden/server/pull/1013
* https://bitwarden.com/help/article/policies/#personal-ownership
2021-01-23 20:50:06 -08:00
Marco Kilchhofer
1e31043fb3 Improve shutdown behavior (on Kubernetes) 2021-01-22 11:50:24 +01:00
Daniel García
85adcf1ae5 Merge pull request #1316 from BlackDex/admin-interface
Updated the admin interface
2021-01-19 21:58:21 +01:00
Daniel García
9abb4d2873 Merge pull request #1314 from jjlin/image-labels
Add `org.opencontainers` labels to Docker images
2021-01-19 21:53:27 +01:00
BlackDex
235ff44736 Updated the admin interface
Mostly updated the admin interface, also some small other items.

- Added more diagnostic information to (hopefully) decrease issue
  reporting, or at least resolve issues quicker.
- Added an option to generate a support string which can be used to
  copy/paste on the forum or during the creation of an issue. It will
  try to hide the sensitive information automatically.
- Changed the `Created At` and `Last Active` info into sortable columns
  in the users overview.
- Some small layout changes.
- Updated javascript and css files to the latest versions available.
- Decreased the png file sizes using `oxipng`
- Updated target='_blank' links to have rel='noreferrer' to prevent
  javascript window.opener modifications.
2021-01-19 17:55:21 +01:00
Jeremy Lin
9c2d741749 Add org.opencontainers labels to Docker images 2021-01-18 01:10:41 -08:00
Daniel García
37cc0c34cf Merge pull request #1304 from jjlin/buildx
Use Docker Buildx for multi-arch builds
2021-01-12 21:51:33 +01:00
Jeremy Lin
5633b6ac94 Use Docker Buildx for multi-arch builds
The bitwarden_rs code is still cross-compiled exactly as before, but Docker
Buildx is used to rewrite the resulting Docker images with correct platform
metadata (reflecting the target platform instead of the build platform).
Buildx also now handles building and pushing the multi-arch manifest lists.
2021-01-09 02:33:36 -08:00
Daniel García
175f2aeace Merge pull request #1270 from BlackDex/update-ci
Updated Github Actions, Fixed Dockerfile
2020-12-17 18:22:46 +01:00
BlackDex
feefe69094 Updated Github Actions, Fixed Dockerfile
- Updated the Github Actions to build just one binary with all DB
  Backends.

- Created a hadolint workflow to check and verify Dockerfiles.
- Fixed current hadolint errors.
- Fixed a bug in the Dockerfile.j2 which prevented the correct libraries
  and tools from being installed on the Alpine images.

- Deleted .travis.yml since it is no longer used
2020-12-16 19:31:39 +01:00
Daniel García
46df3ee7cd Updated insecure ws dependency and general dep updates 2020-12-15 22:23:12 +01:00
Daniel García
bb945ad01b Merge pull request #1243 from BlackDex/fix-key-rotate
Fix Key Rotation during password change.
2020-12-14 20:56:31 +01:00
BlackDex
de86aa671e Fix Key Rotation during password change
When ticking the 'Also rotate my account's encryption key' box, the
key-rotated ciphers are posted after the password change.

During the password change the security stamp was reset, which made the
posted ciphers return an invalid auth error. The reset itself is needed to prevent other clients from still being able to read/write.

This commit fixes that by adding a new database column which stores a stamp exception, containing the allowed route and the security stamp as it was before the reset.
When the security stamp check fails, it looks for a stamp exception and tries to match the route and security stamp; see the sketch below.

Currently only one exception is allowed, but if needed this could be expanded to a Vec<UserStampException> with the functions changed accordingly.

fixes #1240
2020-12-14 19:58:23 +01:00
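A minimal sketch of the stamp-exception idea (field and function names here are illustrative, not the exact code):

```rust
// Illustrative names; the real struct is serialized into a DB column.
struct UserStampException {
    route: String,          // the one route allowed to bypass the check
    security_stamp: String, // the stamp that was valid before the reset
}

struct User {
    security_stamp: String,
    stamp_exception: Option<UserStampException>, // stored in the new column
}

// Accept a request when the stamp matches, or when it matches a stored
// exception for exactly this route (e.g. posting the rotated ciphers).
fn stamp_is_valid(user: &User, claimed_stamp: &str, route: &str) -> bool {
    if user.security_stamp == claimed_stamp {
        return true;
    }
    match &user.stamp_exception {
        Some(ex) => ex.route == route && ex.security_stamp == claimed_stamp,
        None => false,
    }
}
```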
Daniel García
e38771bbbd Merge pull request #1267 from jjlin/datetime-cleanup
Clean up datetime output and code
2020-12-14 18:36:39 +01:00
Daniel García
a3f9a8d7dc Merge pull request #1265 from jjlin/cipher-rev-date
Fix stale data check failure when cloning a cipher
2020-12-14 18:35:17 +01:00
Daniel García
4b6bc6ef66 Merge pull request #1266 from BlackDex/icon-user-agent
Small update on favicon downloading
2020-12-14 18:34:07 +01:00
Jeremy Lin
455a23361f Clean up datetime output and code
* For clarity, add `UTC` suffix for datetimes in the `Diagnostics` admin tab.
* Format datetimes in the local timezone in the `Users` admin tab.
* Refactor some datetime code and add doc comments.
2020-12-13 19:49:22 -08:00
BlackDex
1a8ec04733 Small update on favicon downloading
- Changed the user-agent; the old one caused at least one site to stall
  the connection (the same happens with icons.bitwarden.com)
- Added default_header creation to the lazy static CLIENT (see the sketch below)
- Added referer passing, which is checked by some sites
- Some other small changes
2020-12-10 23:13:24 +01:00
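A sketch of what the lazy static CLIENT with default headers and per-request referer could look like (the header values are placeholders; the reqwest blocking API shown matches the 0.10/0.11 series):

```rust
use once_cell::sync::Lazy;
use reqwest::blocking::Client;
use reqwest::header::{HeaderMap, HeaderValue, REFERER, USER_AGENT};

static CLIENT: Lazy<Client> = Lazy::new(|| {
    let mut headers = HeaderMap::new();
    // Placeholder UA string; the real value was chosen to avoid stalls.
    headers.insert(USER_AGENT, HeaderValue::from_static("Mozilla/5.0 (X11; Linux x86_64)"));
    Client::builder()
        .default_headers(headers)
        .build()
        .expect("failed to build icon client")
});

fn fetch_icon_page(url: &str) -> reqwest::Result<reqwest::blocking::Response> {
    // Some sites check the referer, so pass the requested URL along.
    CLIENT.get(url).header(REFERER, url).send()
}
```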
Jeremy Lin
4e60df7a08 Fix stale data check failure when cloning a cipher 2020-12-10 00:17:34 -08:00
Daniel García
219a9d9f5e Merge pull request #1262 from BlackDex/icon-fixes
Updated icon downloading.
2020-12-08 18:05:05 +01:00
BlackDex
48baf723a4 Updated icon downloading
- Added more checks to prevent panics (removed unwraps)
- Try to download from the base domain, or add www, when the provided
  domain fails
- Added some more domain validation checks to prevent errors
- Added the ICON_BLACKLIST_REGEX to a lazy static HashMap, which speeds
  up the checks (see the sketch below)
- Validate the regex at startup and on config changes
- Some cleanups
- Disabled some noisy debugging from 2 crates.
2020-12-08 17:34:18 +01:00
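The regex caching mentioned above could look roughly like this (a sketch with illustrative names; the actual code differs):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use once_cell::sync::Lazy;
use regex::Regex;

// Compile each blacklist pattern once and cache it keyed by the pattern,
// so a config change swaps in a new compiled regex without recompiling
// on every icon request.
static COMPILED: Lazy<Mutex<HashMap<String, Regex>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

fn is_blacklisted(domain: &str, pattern: &str) -> bool {
    let mut cache = COMPILED.lock().unwrap();
    let re = cache
        .entry(pattern.to_string())
        .or_insert_with(|| Regex::new(pattern).expect("regex validated at startup"));
    re.is_match(domain)
}
```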
Daniel García
6530904883 Update web vault version to 2.17.1 2020-12-08 16:43:19 +01:00
Daniel García
d15d24f4ff Merge pull request #1242 from BlackDex/allow-manager-role
Adding Manager Role support
2020-12-08 16:11:55 +01:00
Daniel García
8d992d637e Merge pull request #1257 from jjlin/cipher-rev-date
Validate cipher updates with revision date
2020-12-08 15:59:21 +01:00
Daniel García
6ebc83c3b7 Merge pull request #1247 from janost/admin-disable-user
Implement admin ability to enable/disable users
2020-12-08 15:43:56 +01:00
Daniel García
b32f4451ee Merge branch 'master' into admin-disable-user 2020-12-08 15:42:37 +01:00
Daniel García
99142c7552 Merge pull request #1252 from BlackDex/update-dependencies-20201203
Updated dependencies and Dockerfiles
2020-12-08 15:33:41 +01:00
Daniel García
db710bb931 Merge pull request #1245 from janost/user-last-login
Show last active date on admin users page
2020-12-08 15:31:25 +01:00
Jeremy Lin
a9e9a397d8 Validate cipher updates with revision date
Prevent clients from updating a cipher if the local copy is stale.
Validation is only performed when the client provides its last known
revision date; this date isn't provided when using older clients,
or when the operation doesn't involve updating an existing cipher.

Upstream PR: https://github.com/bitwarden/server/pull/994
2020-12-07 19:34:00 -08:00
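In essence (an illustrative sketch, not the exact code; the real comparison logic may differ):

```rust
use chrono::NaiveDateTime;

// Reject the update when the client's last known revision date doesn't
// match the stored one. Older clients don't send the date, so `None`
// skips the check entirely.
fn check_not_stale(
    stored_rev: NaiveDateTime,
    client_rev: Option<NaiveDateTime>,
) -> Result<(), &'static str> {
    match client_rev {
        None => Ok(()), // no date provided: validation is skipped
        Some(d) if d == stored_rev => Ok(()),
        Some(_) => Err("The client copy of this cipher is out of date, resync the client and try again"),
    }
}
```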
BlackDex
d46a6ac687 Updated dependencies and Dockerfiles
- Updated crates
- Updated rust-toolchain
- Updated Dockerfile to use the latest rust 1.48 version
- Updated AMD64 Alpine to use the same version as rust-toolchain and to
  support PostgreSQL.
- Updated Rocket to the commit right before they updated hyper.
  Up to that commit there were some crate updates and small fixes; after
  it the build fails, and we would probably need to make some changes
  (likely something already done in the async branch)
2020-12-04 13:38:42 +01:00
janost
1eb5495802 Show latest active device as last active on admin page 2020-12-03 17:07:32 +01:00
BlackDex
7cf8809d77 Adding Manager Role support
This has been requested a few times (#1136 & #246 & the forum), and there
were already two (1:1 duplicate) PRs (#1222 & #1223) which needed some
changes and unfortunately got no follow-ups or further comments.

This PR adds two auth headers (see the sketch below):
- ManagerHeaders
  Checks that the user type is Manager or higher, and that the manager is
  part of the given collection.
- ManagerHeadersLoose
  Checks that the user type is Manager or higher, but does not check
  whether the user is part of the collection; needed for a few features
  like retrieving all the users of an org.

I think this is the safest way to implement this, instead of having to
repeat the check manually in every function that needs it.

Also added some extra checks for whether a manager has access to all
collections or just a selection.

fixes #1136
2020-12-02 22:50:51 +01:00
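A rough sketch of the distinction between the two guards, written as plain functions instead of real Rocket request guards (names and the enum ordering are illustrative):

```rust
#[derive(PartialEq, PartialOrd)]
enum UserOrgType {
    User,
    Manager,
    Admin,
    Owner,
}

struct Membership {
    org_type: UserOrgType,
    collection_uuids: Vec<String>,
}

// ManagerHeadersLoose: rank check only (e.g. listing all users of an org).
fn manager_loose(m: &Membership) -> bool {
    m.org_type >= UserOrgType::Manager
}

// ManagerHeaders: rank check plus collection membership
// (Admins/Owners implicitly have access to every collection).
fn manager_strict(m: &Membership, collection_uuid: &str) -> bool {
    m.org_type >= UserOrgType::Admin
        || (m.org_type == UserOrgType::Manager
            && m.collection_uuids.iter().any(|c| c == collection_uuid))
}
```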
janost
043aa27aa3 Implement admin ability to enable/disable users 2020-11-30 23:12:56 +01:00
Daniel García
9824d94a1c Merge pull request #1244 from janost/read-config-from-files
Read config vars from files
2020-11-29 15:28:13 +01:00
janost
e8ef76b8f9 Read config vars from files 2020-11-29 02:31:49 +01:00
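A sketch of the idea, assuming the common `VAR_FILE` convention (the actual variable naming in the commit may differ, and this helper is hypothetical): if, say, SMTP_PASSWORD_FILE is set, the secret is read from that file instead of from SMTP_PASSWORD directly.

```rust
use std::env;
use std::fs;

// Hypothetical helper: prefer `<NAME>_FILE` (read the value from a file,
// e.g. a Docker/Kubernetes secret mount) over the plain env var.
fn get_env_or_file(name: &str) -> Option<String> {
    if let Ok(path) = env::var(format!("{}_FILE", name)) {
        return fs::read_to_string(path)
            .ok()
            .map(|s| s.trim_end().to_string()); // strip trailing newline
    }
    env::var(name).ok()
}
```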
Daniel García
be1ddb4203 Merge pull request #1234 from janost/fix-failed-auth-log
Log proper namespace in the err!() macro
2020-11-27 18:49:46 +01:00
janost
caddf21fca Log proper namespace in the err!() macro 2020-11-22 00:09:45 +01:00
Daniel García
5379329ef7 Merge pull request #1229 from BlackDex/email-fixes
Email fixes
2020-11-18 16:16:27 +01:00
BlackDex
6faaeaae66 Updated email processing.
- Added an option to enable smtp debugging via SMTP_DEBUG. This will
  trigger a trace of the smtp commands sent/received to/from the mail
server. Useful when troubleshooting.
- Added two options to ignore invalid certificates: ones that do not
  match at all, and ones that only fail hostname validation.
- Updated lettre to the latest alpha.4 version.
2020-11-18 12:07:08 +01:00
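A sketch of wiring the two certificate options into the mail transport, using the lettre 0.10 TLS builder (method names follow recent 0.10 releases; the alpha API used at the time may differ slightly):

```rust
use lettre::transport::smtp::client::TlsParameters;

fn tls_parameters(
    host: &str,
    accept_invalid_hostnames: bool,
    accept_invalid_certs: bool,
) -> TlsParameters {
    TlsParameters::builder(host.to_string())
        // DANGEROUS: both options weaken certificate validation and are
        // meant only as a last resort, as the .env comments warn.
        .dangerous_accept_invalid_hostnames(accept_invalid_hostnames)
        .dangerous_accept_invalid_certs(accept_invalid_certs)
        .build()
        .expect("invalid TLS parameters")
}
```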
BlackDex
3fed323385 Fixed text/plain email format
text/plain emails should not contain HTML elements like <p>, <a>, etc.
This triggers some spam filters and increases the spam score.

Also added the GitHub link to the text-only emails, since a url/link
count that differs between the multipart parts also triggers spam
filters and increases the score.
2020-11-18 12:04:16 +01:00
BlackDex
58a928547d Updated admin settings page.
- Added a check for settings that are changed but not saved when sending
  a test email.
- Added some styling to emphasize risky settings.
- Fixed alignment of elements when the label has multiple lines.
2020-11-18 12:00:25 +01:00
Daniel García
558410c5bd Merge pull request #1220 from jameshurst/master
Return 404 instead of fallback icon
2020-11-14 14:17:53 +01:00
Daniel García
0dc0decaa7 Merge pull request #1212 from BlackDex/dotenv-warnings
Added error handling during dotenv loading
2020-11-14 14:11:56 +01:00
BlackDex
d11d663c5c Added error handling during dotenv loading
Some issues people report are caused by misconfiguration or bad .env
files. To mitigate this, error handling was added for dotenv loading
(see the sketch below):

- Panic/quit on a LineParse error, which indicates a bad .env file format.
- Emit an info message when no .env file is found.
- Emit a warning message when there is a .env file but no permission to
  read it.
- Emit a warning on every other error not specifically caught.
2020-11-12 13:40:26 +01:00
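A sketch of the described handling against the dotenv 0.15 API (the log output shown here is illustrative; the real code uses its configured logger):

```rust
use std::io::ErrorKind;

fn load_dotenv() {
    match dotenv::dotenv() {
        Ok(_) => (),
        // Bad .env format: refuse to start rather than run misconfigured.
        Err(e @ dotenv::Error::LineParse(..)) => panic!("Malformed .env file: {:?}", e),
        // No .env file at all is fine; config may come from the environment.
        Err(dotenv::Error::Io(ref io)) if io.kind() == ErrorKind::NotFound => {
            println!("[INFO] No .env file found.");
        }
        // A .env file exists but cannot be read.
        Err(dotenv::Error::Io(ref io)) if io.kind() == ErrorKind::PermissionDenied => {
            println!("[WARNING] .env file exists but cannot be read.");
        }
        // Anything else not specifically caught.
        Err(e) => println!("[WARNING] Unexpected error loading .env: {:?}", e),
    }
}
```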
James Hurst
771233176f Fix for negatively cached icons 2020-11-09 22:06:11 -05:00
James Hurst
ed70b07d81 Return 404 instead of fallback icon 2020-11-09 20:47:26 -05:00
Daniel García
e25fc7083d Merge pull request #1219 from aveao/master
Ensure that a user is actually in an org when applying policies
2020-11-07 23:29:12 +01:00
Ave
fa364c3f2c Ensure that a user is actually in an org when applying policies 2020-11-08 01:14:17 +03:00
Daniel García
b5f9fe4d3b Fix #1206 2020-11-07 23:03:02 +01:00
Daniel García
013d4c28b2 Try to fix #1218 2020-11-07 23:01:56 +01:00
Daniel García
63acc8619b Update dependencies 2020-11-07 23:01:04 +01:00
Daniel García
ec920b5756 Merge pull request #1199 from jjlin/delete-admin
Add missing admin endpoints for deleting ciphers
2020-10-23 14:18:41 +02:00
Jeremy Lin
95caaf2a40 Add missing admin endpoints for deleting ciphers
This fixes the inability to bulk-delete ciphers from org vault views.
2020-10-23 03:42:22 -07:00
Mathijs van Veluw
7099f8bee8 Merge pull request #1198 from fabianvansteen/patch-1
Correction of verify_email error message
2020-10-23 11:40:39 +02:00
Fabian van Steen
b41a0d840c Correction of verify_email error message 2020-10-23 10:30:25 +02:00
Daniel García
c577ade90e Updated dependencies 2020-10-15 23:44:35 +02:00
Daniel García
257b143df1 Remove some duplicate code in Dockerfile with the help of some variables 2020-10-11 17:27:15 +02:00
Daniel García
34ee326ce9 Merge pull request #1178 from BlackDex/update-azure-pipelines
Updated the azure-pipelines.yml for multidb
2020-10-11 17:25:15 +02:00
Daniel García
090104ce1b Merge pull request #1181 from BlackDex/update-issue-template
Updated bug-report to note to update first
2020-10-11 17:24:42 +02:00
BlackDex
3305d5dc92 Updated bug-report to note to update first 2020-10-11 15:58:31 +02:00
BlackDex
5bdcfe128d Updated the azure-pipelines.yml for multidb
Updated the azure-pipelines.yml to build with multidb support:
- Updated to Ubuntu 18.04 (more closely matches the docker builds)
- Added some needed apt packages to be sure they are installed
- Now runs cargo test with all database backends in one go.
2020-10-08 18:48:05 +02:00
90 changed files with 3134 additions and 1596 deletions

.env.template

@@ -104,7 +104,8 @@
 ## Icon blacklist Regex
 ## Any domains or IPs that match this regex won't be fetched by the icon service.
 ## Useful to hide other servers in the local network. Check the WIKI for more details
-# ICON_BLACKLIST_REGEX=192\.168\.1\.[0-9].*^
+## NOTE: Always enclose this regex within single quotes!
+# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'

 ## Any IP which is not defined as a global IP will be blacklisted.
 ## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
@@ -242,9 +243,9 @@
 # SMTP_HOST=smtp.domain.tld
 # SMTP_FROM=bitwarden-rs@domain.tld
 # SMTP_FROM_NAME=Bitwarden_RS
-# SMTP_PORT=587
-# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true.
-# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work.
+# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
+# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default.
+# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here.
 # SMTP_USERNAME=username
 # SMTP_PASSWORD=password
 # SMTP_TIMEOUT=15
@@ -259,6 +260,22 @@
 ## but might need to be changed in case it trips some anti-spam filters
 # HELO_NAME=

+## SMTP debugging
+## When set to true this will output very detailed SMTP messages.
+## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
+# SMTP_DEBUG=false
+
+## Accept Invalid Hostnames
+## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
+## Only use this as a last resort if you are not able to use a valid certificate.
+# SMTP_ACCEPT_INVALID_HOSTNAMES=false
+
+## Accept Invalid Certificates
+## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
+## Only use this as a last resort if you are not able to use a valid certificate.
+## If the Certificate is valid but the hostname doesn't match, please use SMTP_ACCEPT_INVALID_HOSTNAMES instead.
+# SMTP_ACCEPT_INVALID_CERTS=false
+
 ## Require new device emails. When a user logs in an email is required to be sent.
 ## If sending the email fails the login attempt will fail!!
 # REQUIRE_DEVICE_EMAIL=false

.github/ISSUE_TEMPLATE/bug_report.md

@@ -6,27 +6,36 @@ labels: ''
 assignees: ''

 ---

+<!--
+# ###
+NOTE: Please update to the latest version of bitwarden_rs before reporting an issue!
+This saves you and us a lot of time and troubleshooting.
+See: https://github.com/dani-garcia/bitwarden_rs/issues/1180
+# ###
+-->
+
 <!--
 Please fill out the following template to make solving your problem easier and faster for us.
-This is only a guideline. If you think that parts are unneccessary for your issue, feel free to remove them.
+This is only a guideline. If you think that parts are unnecessary for your issue, feel free to remove them.

 Remember to hide/obfuscate personal and confidential information,
-such as names, global IP/DNS adresses and especially passwords, if neccessary.
+such as names, global IP/DNS addresses and especially passwords, if necessary.
 -->

 ### Subject of the issue
 <!-- Describe your issue here.-->

 ### Your environment
-<!-- The version number, obtained from the logs or the admin page -->
+<!-- The version number, obtained from the logs or the admin diagnostics page -->
+<!-- Remember to check your issue on the latest version first! -->
 * Bitwarden_rs version:
 <!-- How the server was installed: Docker image / package / built from source -->
 * Install method:
 * Clients used: <!-- if applicable -->
 * Reverse proxy and version: <!-- if applicable -->
 * Version of mysql/postgresql: <!-- if applicable -->
 * Other relevant information:

 ### Steps to reproduce
 <!-- Tell us how to reproduce this issue. What parameters did you set (differently from the defaults)

.github/workflows/build.yml (new file, 125 lines)

@@ -0,0 +1,125 @@
name: Build

on:
  push:
    # Ignore when there are only changes done to one of these paths
    paths-ignore:
      - "**.md"
      - "**.txt"
      - "azure-pipelines.yml"
      - "docker/**"
      - "hooks/**"
      - "tools/**"

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        channel:
          - nightly
          # - stable
        target-triple:
          - x86_64-unknown-linux-gnu
          # - x86_64-unknown-linux-musl
        include:
          - target-triple: x86_64-unknown-linux-gnu
            host-triple: x86_64-unknown-linux-gnu
            features: "sqlite,mysql,postgresql"
            channel: nightly
            os: ubuntu-18.04
            ext:
          # - target-triple: x86_64-unknown-linux-gnu
          #   host-triple: x86_64-unknown-linux-gnu
          #   features: "sqlite,mysql,postgresql"
          #   channel: stable
          #   os: ubuntu-18.04
          #   ext:
          # - target-triple: x86_64-unknown-linux-musl
          #   host-triple: x86_64-unknown-linux-gnu
          #   features: "sqlite,postgresql"
          #   channel: nightly
          #   os: ubuntu-18.04
          #   ext:
          # - target-triple: x86_64-unknown-linux-musl
          #   host-triple: x86_64-unknown-linux-gnu
          #   features: "sqlite,postgresql"
          #   channel: stable
          #   os: ubuntu-18.04
          #   ext:

    name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
    runs-on: ${{ matrix.os }}
    steps:
      # Checkout the repo
      - name: Checkout
        uses: actions/checkout@v2
      # End Checkout the repo

      # Install musl-tools when needed
      - name: Install musl tools
        run: sudo apt-get update && sudo apt-get install -y --no-install-recommends musl-dev musl-tools cmake
        if: matrix.target-triple == 'x86_64-unknown-linux-musl'
      # End Install musl-tools when needed

      # Install dependencies
      - name: Install dependencies Ubuntu
        run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
        if: startsWith( matrix.os, 'ubuntu' )
      # End Install dependencies

      # Enable Rust Caching
      - uses: Swatinem/rust-cache@v1
      # End Enable Rust Caching

      # Uses the rust-toolchain file to determine version
      - name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          target: ${{ matrix.target-triple }}
      # End Uses the rust-toolchain file to determine version

      # Run cargo tests (In release mode to speed up cargo build afterwards)
      - name: '`cargo test --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
      # End Run cargo tests

      # Build the binary
      - name: '`cargo build --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
        uses: actions-rs/cargo@v1
        with:
          command: build
          args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
      # End Build the binary

      # Upload artifact to Github Actions
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }}
          path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }}
      # End Upload artifact to Github Actions

      ## This is not used at the moment
      ## We could start using this when we can build static binaries
      # Upload to github actions release
      # - name: Release
      #   uses: Shopify/upload-to-release@1
      #   if: startsWith(github.ref, 'refs/tags/')
      #   with:
      #     name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }}
      #     path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }}
      #     repo-token: ${{ secrets.GITHUB_TOKEN }}
      # End Upload to github actions release

.github/workflows/hadolint.yml (new file, 34 lines)

@@ -0,0 +1,34 @@
name: Hadolint

on:
  pull_request:
    # Only run when changes are done to one of these paths
    paths:
      - "docker/**"

jobs:
  hadolint:
    name: Validate Dockerfile syntax
    runs-on: ubuntu-20.04
    steps:
      # Checkout the repo
      - name: Checkout
        uses: actions/checkout@v2
      # End Checkout the repo

      # Download hadolint
      - name: Download hadolint
        shell: bash
        run: |
          sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
          sudo chmod +x /usr/local/bin/hadolint
        env:
          HADOLINT_VERSION: 1.19.0
      # End Download hadolint

      # Test Dockerfiles
      - name: Run hadolint
        shell: bash
        run: git ls-files --exclude='docker/*/Dockerfile*' --ignored | xargs hadolint
      # End Test Dockerfiles

.github/workflows/workflow.yml (deleted)

@@ -1,148 +0,0 @@
name: Workflow

on:
  push:
    paths-ignore:
      - "**.md"
  #pull_request:
  #  paths-ignore:
  #    - "**.md"

jobs:
  build:
    name: Build
    strategy:
      fail-fast: false
      matrix:
        db-backend: [sqlite, mysql, postgresql]
        target:
          - x86_64-unknown-linux-gnu
          # - x86_64-unknown-linux-musl
          # - x86_64-apple-darwin
          # - x86_64-pc-windows-msvc
        include:
          - target: x86_64-unknown-linux-gnu
            os: ubuntu-latest
            ext:
          # - target: x86_64-unknown-linux-musl
          #   os: ubuntu-latest
          #   ext:
          # - target: x86_64-apple-darwin
          #   os: macOS-latest
          #   ext:
          # - target: x86_64-pc-windows-msvc
          #   os: windows-latest
          #   ext: .exe
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v1

      # - name: Cache choco cache
      #   uses: actions/cache@v1.0.3
      #   if: matrix.os == 'windows-latest'
      #   with:
      #     path: ~\AppData\Local\Temp\chocolatey
      #     key: ${{ runner.os }}-choco-cache-${{ matrix.db-backend }}

      - name: Cache vcpkg installed
        uses: actions/cache@v1.0.3
        if: matrix.os == 'windows-latest'
        with:
          path: $VCPKG_ROOT/installed
          key: ${{ runner.os }}-vcpkg-cache-${{ matrix.db-backend }}
        env:
          VCPKG_ROOT: 'C:\vcpkg'

      - name: Cache vcpkg downloads
        uses: actions/cache@v1.0.3
        if: matrix.os == 'windows-latest'
        with:
          path: $VCPKG_ROOT/downloads
          key: ${{ runner.os }}-vcpkg-cache-${{ matrix.db-backend }}
        env:
          VCPKG_ROOT: 'C:\vcpkg'

      # - name: Cache homebrew
      #   uses: actions/cache@v1.0.3
      #   if: matrix.os == 'macOS-latest'
      #   with:
      #     path: ~/Library/Caches/Homebrew
      #     key: ${{ runner.os }}-brew-cache

      # - name: Cache apt
      #   uses: actions/cache@v1.0.3
      #   if: matrix.os == 'ubuntu-latest'
      #   with:
      #     path: /var/cache/apt/archives
      #     key: ${{ runner.os }}-apt-cache

      # Install dependencies
      - name: Install dependencies macOS
        run: brew update; brew install openssl sqlite libpq mysql
        if: matrix.os == 'macOS-latest'

      - name: Install dependencies Ubuntu
        run: sudo apt-get update && sudo apt-get install --no-install-recommends openssl sqlite libpq-dev libmysql++-dev
        if: matrix.os == 'ubuntu-latest'

      - name: Install dependencies Windows
        run: vcpkg integrate install; vcpkg install sqlite3:x64-windows openssl:x64-windows libpq:x64-windows libmysql:x64-windows
        if: matrix.os == 'windows-latest'
        env:
          VCPKG_ROOT: 'C:\vcpkg'
      # End Install dependencies

      # Install rust nightly toolchain
      - name: Cache cargo registry
        uses: actions/cache@v1.0.3
        with:
          path: ~/.cargo/registry
          key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}

      - name: Cache cargo index
        uses: actions/cache@v1.0.3
        with:
          path: ~/.cargo/git
          key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-index-${{ hashFiles('**/Cargo.lock') }}

      - name: Cache cargo build
        uses: actions/cache@v1.0.3
        with:
          path: target
          key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}

      - name: Install latest nightly
        uses: actions-rs/toolchain@v1.0.5
        with:
          # Uses rust-toolchain to determine version
          profile: minimal
          target: ${{ matrix.target }}

      # Build
      - name: Build Win
        if: matrix.os == 'windows-latest'
        run: cargo.exe build --features ${{ matrix.db-backend }} --release --target ${{ matrix.target }}
        env:
          RUSTFLAGS: -Ctarget-feature=+crt-static
          VCPKG_ROOT: 'C:\vcpkg'

      - name: Build macOS / Ubuntu
        if: matrix.os == 'macOS-latest' || matrix.os == 'ubuntu-latest'
        run: cargo build --verbose --features ${{ matrix.db-backend }} --release --target ${{ matrix.target }}

      # Test
      - name: Run tests
        run: cargo test --features ${{ matrix.db-backend }}

      # Upload & Release
      - name: Upload artifact
        uses: actions/upload-artifact@v1.0.0
        with:
          name: bitwarden_rs-${{ matrix.db-backend }}-${{ matrix.target }}${{ matrix.ext }}
          path: target/${{ matrix.target }}/release/bitwarden_rs${{ matrix.ext }}

      - name: Release
        uses: Shopify/upload-to-release@1.0.0
        if: startsWith(github.ref, 'refs/tags/')
        with:
          name: bitwarden_rs-${{ matrix.db-backend }}-${{ matrix.target }}${{ matrix.ext }}
          path: target/${{ matrix.target }}/release/bitwarden_rs${{ matrix.ext }}
          repo-token: ${{ secrets.GITHUB_TOKEN }}

.travis.yml (deleted)

@@ -1,21 +0,0 @@
dist: xenial

env:
  global:
    - HADOLINT_VERSION=1.17.1

language: rust
rust: nightly
cache: cargo

before_install:
  - sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint
  - sudo chmod +rx /usr/local/bin/hadolint
  - rustup set profile minimal

# Nothing to install
install: true
script:
  - git ls-files --exclude='Dockerfile*' --ignored | xargs --max-lines=1 hadolint
  - cargo test --features "sqlite"
  - cargo test --features "mysql"

Cargo.lock (generated, 1195 lines changed)

File diff suppressed because it is too large

Cargo.toml

@@ -32,27 +32,26 @@ rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false }
 rocket_contrib = "0.5.0-dev"

 # HTTP client
-reqwest = { version = "0.10.8", features = ["blocking", "json"] }
+reqwest = { version = "0.11.0", features = ["blocking", "json"] }

 # multipart/form-data support
-multipart = { version = "0.17.0", features = ["server"], default-features = false }
+multipart = { version = "0.17.1", features = ["server"], default-features = false }

 # WebSockets library
-ws = "0.9.1"
+ws = { version = "0.10.0", package = "parity-ws" }

 # MessagePack library
-rmpv = "0.4.5"
+rmpv = "0.4.7"

 # Concurrent hashmap implementation
 chashmap = "2.2.2"

 # A generic serialization/deserialization framework
-serde = "1.0.115"
-serde_derive = "1.0.115"
-serde_json = "1.0.57"
+serde = { version = "1.0.123", features = ["derive"] }
+serde_json = "1.0.62"

 # Logging
-log = "0.4.11"
+log = "0.4.14"
 fern = { version = "0.6.0", features = ["syslog-4"] }

 # A safe, extensible ORM and Query builder
@@ -63,22 +62,22 @@ diesel_migrations = "1.4.0"
 libsqlite3-sys = { version = "0.18.0", features = ["bundled"], optional = true }

 # Crypto-related libraries
-rand = "0.7.3"
-ring = "0.16.15"
+rand = "0.8.3"
+ring = "0.16.20"

 # UUID generation
-uuid = { version = "0.8.1", features = ["v4"] }
+uuid = { version = "0.8.2", features = ["v4"] }

 # Date and time libraries
-chrono = "0.4.15"
+chrono = "0.4.19"
 chrono-tz = "0.5.3"
-time = "0.2.18"
+time = "0.2.25"

 # TOTP library
 oath = "0.10.2"

 # Data encoding library
-data-encoding = "2.3.0"
+data-encoding = "2.3.2"

 # JWT library
 jsonwebtoken = "7.2.0"

@@ -87,51 +86,51 @@ jsonwebtoken = "7.2.0"
 u2f = "0.2.0"

 # Yubico Library
-yubico = { version = "0.9.1", features = ["online-tokio"], default-features = false }
+yubico = { version = "0.9.2", features = ["online-tokio"], default-features = false }

 # A `dotenv` implementation for Rust
 dotenv = { version = "0.15.0", default-features = false }

 # Lazy initialization
-once_cell = "1.4.1"
+once_cell = "1.5.2"

 # Numerical libraries
-num-traits = "0.2.12"
-num-derive = "0.3.2"
+num-traits = "0.2.14"
+num-derive = "0.3.3"

 # Email libraries
-lettre = { version = "0.10.0-alpha.2", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname"], default-features = false }
+lettre = { version = "0.10.0-alpha.5", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
 newline-converter = "0.1.0"

 # Template library
-handlebars = { version = "3.4.0", features = ["dir_source"] }
+handlebars = { version = "3.5.2", features = ["dir_source"] }

 # For favicon extraction from main website
 soup = "0.5.0"
-regex = "1.3.9"
+regex = { version = "1.4.3", features = ["std", "perf"], default-features = false }
 data-url = "0.1.0"

 # Used by U2F, JWT and Postgres
-openssl = "0.10.30"
+openssl = "0.10.32"

 # URL encoding library
 percent-encoding = "2.1.0"

 # Punycode conversion
-idna = "0.2.0"
+idna = "0.2.1"

 # CLI argument parsing
-structopt = "0.3.17"
+structopt = "0.3.21"

 # Logging panics to logfile instead stderr only
-backtrace = "0.3.50"
+backtrace = "0.3.56"

 # Macro ident concatenation
-paste = "1.0.0"
+paste = "1.0.4"

 [patch.crates-io]
 # Use newest ring
-rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
-rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
+rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
+rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }

 # For favicon extraction from main website
-data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '7f1bd6ce1c2fde599a757302a843a60e714c5f72' }
+data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '540ede02d0771824c0c80ff9f57fe8eff38b1291' }

azure-pipelines.yml

@@ -1,5 +1,5 @@
 pool:
-  vmImage: 'Ubuntu-16.04'
+  vmImage: 'Ubuntu-18.04'

 steps:
 - script: |
@@ -10,16 +10,13 @@ steps:
 - script: |
     sudo apt-get update
-    sudo apt-get install -y libmysql++-dev
-  displayName: Install libmysql
+    sudo apt-get install -y --no-install-recommends build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
+  displayName: 'Install build libraries.'
 - script: |
     rustc -Vv
     cargo -V
   displayName: Query rust and cargo versions
-- script : cargo test --features "sqlite"
-  displayName: 'Test project with sqlite backend'
-- script : cargo test --features "mysql"
-  displayName: 'Test project with mysql backend'
+- script : cargo test --features "sqlite,mysql,postgresql"
+  displayName: 'Test project with sqlite, mysql and postgresql backends'

docker/Dockerfile.buildx (new file, 33 lines)

@@ -0,0 +1,33 @@
# The cross-built images have the build arch (`amd64`) embedded in the image
# manifest, rather than the target arch. For example:
#
# $ docker inspect bitwardenrs/server:latest-armv7 | jq -r '.[]|.Architecture'
# amd64
#
# Recent versions of Docker have started printing a warning when the image's
# claimed arch doesn't match the host arch. For example:
#
# WARNING: The requested image's platform (linux/amd64) does not match the
# detected host platform (linux/arm/v7) and no specific platform was requested
#
# The image still works fine, but the spurious warning creates confusion.
#
# Docker doesn't seem to provide a way to directly set the arch of an image
# at build time. To resolve the build vs. target arch discrepancy, we use
# Docker Buildx to build a new set of images with the correct target arch.
#
# Docker Buildx uses this Dockerfile to build an image for each requested
# platform. Since the Dockerfile basically consists of a single `FROM`
# instruction, we're effectively telling Buildx to build a platform-specific
# image by simply copying the existing cross-built image and setting the
# correct target arch as a side effect.
#
# References:
#
# - https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images
# - https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
# - https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
#
ARG LOCAL_REPO
ARG DOCKER_TAG
FROM ${LOCAL_REPO}:${DOCKER_TAG}-${TARGETARCH}${TARGETVARIANT}

docker/Dockerfile.j2

@@ -1,64 +1,89 @@
# This file was generated using a Jinja2 template. # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's. # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.46" %} {% set build_stage_base_image = "rust:1.48" %}
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
{% if "amd64" in target_file %} {% if "amd64" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2020-10-02" %} {% set build_stage_base_image = "clux/muslrust:nightly-2021-01-25" %}
{% set runtime_stage_base_image = "alpine:3.12" %} {% set runtime_stage_base_image = "alpine:3.12" %}
{% set package_arch_name = "" %} {% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "arm32v7" in target_file %} {% elif "armv7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %} {% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.12" %} {% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.12" %}
{% set package_arch_name = "" %} {% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% endif %} {% endif %}
{% elif "amd64" in target_file %} {% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %} {% set runtime_stage_base_image = "debian:buster-slim" %}
{% set package_arch_name = "" %} {% elif "arm64" in target_file %}
{% elif "arm64v8" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %} {% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set package_arch_name = "arm64" %} {% set package_arch_name = "arm64" %}
{% elif "arm32v6" in target_file %} {% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %} {% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set package_arch_name = "armel" %} {% set package_arch_name = "armel" %}
{% elif "arm32v7" in target_file %} {% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %} {% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set package_arch_name = "armhf" %} {% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
{% set package_cross_compiler = "arm-linux-gnueabihf" %}
{% endif %} {% endif %}
{% set package_arch_prefix = ":" + package_arch_name %} {% if package_arch_name is defined %}
{% if package_arch_name == "" %} {% set package_arch_prefix = ":" + package_arch_name %}
{% else %}
{% set package_arch_prefix = "" %} {% set package_arch_prefix = "" %}
{% endif %} {% endif %}
{% if package_arch_target is defined %}
{% set package_arch_target_param = " --target=" + package_arch_target %}
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
# Using multistage build: # Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
{% set vault_image_hash = "sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303" %} {% set vault_version = "2.18.1b" %}
{% raw %} {% set vault_image_digest = "sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb" %}
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable. # The web-vault digest specifies a particular web-vault build on Docker Hub.
# It can be viewed in multiple ways: # Using the digest instead of the tag name provides better security,
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there. # as the digest of an image is immutable, whereas a tag name can later
# - From the console, with the following commands: # be changed to point to a malicious image.
# docker pull bitwardenrs/web-vault:v2.16.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
# #
# - To do the opposite, and get the tag from the hash, you can do: # To verify the current digest for a given tag name:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 # - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
{% endraw %} # click the tag name to view the digest of the image it currently points to.
FROM bitwardenrs/web-vault@{{ vault_image_hash }} as vault # - From the command line:
# $ docker pull bitwardenrs/web-vault:v{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" bitwardenrs/web-vault:v{{ vault_version }}
# [bitwardenrs/web-vault@{{ vault_image_digest }}]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" bitwardenrs/web-vault@{{ vault_image_digest }}
# [bitwardenrs/web-vault:v{{ vault_version }}]
#
FROM bitwardenrs/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build FROM {{ build_stage_base_image }} as build
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
# Alpine only works on SQlite {% if "amd64" in target_file %}
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
{% set features = "sqlite,postgresql" %}
{% else %}
# Alpine-based ARM (musl) only supports sqlite during compile time.
ARG DB=sqlite ARG DB=sqlite
{% set features = "sqlite" %}
{% endif %}
{% else %} {% else %}
# Debian-based builds support multidb # Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql ARG DB=sqlite,mysql,postgresql
{% set features = "sqlite,mysql,postgresql" %}
{% endif %} {% endif %}
# Build time options to avoid dpkg warnings and help with reproducible builds. # Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@@ -68,7 +93,6 @@ RUN rustup set profile minimal
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s' ENV RUSTFLAGS='-C link-arg=-s'
{% elif "arm" in target_file %} {% elif "arm" in target_file %}
# Install required build libs for {{ package_arch_name }} architecture. # Install required build libs for {{ package_arch_name }} architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
@@ -85,44 +109,19 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
libmariadb-dev{{ package_arch_prefix }} \ libmariadb-dev{{ package_arch_prefix }} \
libmariadb-dev-compat{{ package_arch_prefix }} libmariadb-dev-compat{{ package_arch_prefix }}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-{{ package_cross_compiler }} \
&& mkdir -p ~/.cargo \
&& echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
&& echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif -%} {% endif -%}
{% if "arm64v8" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "arm32v6" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "arm32v7" in target_file and "alpine" not in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif %}
{% if "amd64" in target_file and "alpine" not in target_file %} {% if "amd64" in target_file and "alpine" not in target_file %}
# Install DB packages # Install DB packages
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
@@ -148,71 +147,31 @@ COPY ./build.rs ./build.rs
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client) # We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version. # We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
# What we can do is a force install, because nothing important is overlapping each other. # What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y libmariadb3:amd64 && \ RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
mkdir -pv /tmp/dpkg && \
cd /tmp/dpkg && \
apt-get download libmariadb-dev-compat:amd64 && \ apt-get download libmariadb-dev-compat:amd64 && \
dpkg --force-all -i *.deb && \ dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
rm -rf /tmp/dpkg rm -rvf ./libmariadb-dev-compat*.deb
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic. # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so. # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time. # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it. # Without this specific file the ld command will fail and compilation fails with it.
RUN ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% endif -%} {% endif -%}
{% if "arm64v8" in target_file %} {% endif %}
RUN ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so {% if package_arch_target is defined %}
RUN rustup target add {{ package_arch_target }}
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
{% elif "arm32v6" in target_file %}
RUN ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
RUN rustup target add arm-unknown-linux-gnueabi
{% elif "arm32v7" in target_file %}
RUN ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
{% endif -%}
{% else -%}
{% if "amd64" in target_file %}
RUN rustup target add x86_64-unknown-linux-musl
{% elif "arm32v7" in target_file %}
RUN rustup target add armv7-unknown-linux-musleabihf
{% endif %}
{% endif %} {% endif %}
# Builds your dependencies and removes the # Builds your dependencies and removes the
# dummy project, except the target folder # dummy project, except the target folder
# This folder contains the compiled dependencies # This folder contains the compiled dependencies
{% if "alpine" in target_file %} RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "amd64" in target_file %}
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
{% elif "arm32v7" in target_file %}
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
{% endif %}
{% elif "alpine" not in target_file %}
{% if "amd64" in target_file %}
RUN cargo build --features ${DB} --release
{% elif "arm64v8" in target_file %}
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
{% elif "arm32v6" in target_file %}
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
{% elif "arm32v7" in target_file %}
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
{% endif %}
{% endif %}
RUN find . -not -path "./target*" -delete RUN find . -not -path "./target*" -delete
# Copies the complete project # Copies the complete project
@@ -224,22 +183,10 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
+RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
-{% if "amd64" in target_file %}
-RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
-{% elif "arm32v7" in target_file %}
-RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
-RUN musl-strip target/armv7-unknown-linux-musleabihf/release/bitwarden_rs
-{% endif %}
-{% elif "alpine" not in target_file %}
-{% if "amd64" in target_file %}
-RUN cargo build --features ${DB} --release
-{% elif "arm64v8" in target_file %}
-RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
-{% elif "arm32v6" in target_file %}
-RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
-{% elif "arm32v7" in target_file %}
-RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
-{% endif %}
+{% if "armv7" in target_file %}
+RUN musl-strip target/{{ package_arch_target }}/release/bitwarden_rs
+{% endif %}
{% endif %}
@@ -264,11 +211,14 @@ RUN [ "cross-build-start" ]
RUN apk add --no-cache \
openssl \
curl \
+dumb-init \
-{% if "sqlite" in target_file %}
+{% if "sqlite" in features %}
sqlite \
-{% elif "mysql" in target_file %}
+{% endif %}
+{% if "mysql" in features %}
mariadb-connector-c \
-{% elif "postgresql" in target_file %}
+{% endif %}
+{% if "postgresql" in features %}
postgresql-libs \
{% endif %}
ca-certificates
@@ -278,14 +228,12 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
+dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
-{% if "alpine" in target_file and "arm32v7" in target_file %}
-RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit
-{% endif %}

RUN mkdir /data
{% if "amd64" not in target_file %}
@@ -301,22 +249,10 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-{% if "alpine" in target_file %}
-{% if "amd64" in target_file %}
-COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
-{% elif "arm32v7" in target_file %}
-COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/bitwarden_rs .
-{% endif %}
-{% elif "alpine" not in target_file %}
-{% if "arm64v8" in target_file %}
-COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
-{% elif "arm32v6" in target_file %}
-COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
-{% elif "arm32v7" in target_file %}
-COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
-{% else %}
-COPY --from=build app/target/release/bitwarden_rs .
-{% endif %}
-{% endif %}
+{% if package_arch_target is defined %}
+COPY --from=build /app/target/{{ package_arch_target }}/release/bitwarden_rs .
+{% else %}
+COPY --from=build /app/target/release/bitwarden_rs .
+{% endif %}
COPY docker/healthcheck.sh /healthcheck.sh
@@ -326,8 +262,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
-{% if "alpine" in target_file and "arm32v7" in target_file %}
-CMD ["catatonit", "/start.sh"]
-{% else %}
-CMD ["/start.sh"]
-{% endif %}
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
+CMD ["/start.sh"]


@@ -1,24 +1,31 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
-FROM rust:1.46 as build
+FROM rust:1.48 as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -78,6 +85,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
+dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -92,7 +100,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build app/target/release/bitwarden_rs .
+COPY --from=build /app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -101,5 +109,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,27 +1,34 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
-FROM clux/muslrust:nightly-2020-10-02 as build
+FROM clux/muslrust:nightly-2021-01-25 as build

-# Alpine only works on SQlite
-ARG DB=sqlite
+# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
+ARG DB=sqlite,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@@ -32,7 +39,6 @@ RUN rustup set profile minimal
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s' ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies # Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
@@ -75,6 +81,9 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
RUN apk add --no-cache \
openssl \
curl \
+dumb-init \
+sqlite \
+postgresql-libs \
ca-certificates

RUN mkdir /data
@@ -95,5 +104,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,24 +1,31 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
-FROM rust:1.46 as build
+FROM rust:1.48 as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -70,12 +77,10 @@ COPY ./build.rs ./build.rs
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y libmariadb3:amd64 && \
-    mkdir -pv /tmp/dpkg && \
-    cd /tmp/dpkg && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i *.deb && \
-    rm -rf /tmp/dpkg
+RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
+    apt-get download libmariadb-dev-compat:amd64 && \
+    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
+    rm -rvf ./libmariadb-dev-compat*.deb

# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
@@ -123,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
+dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -149,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,24 +1,31 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
-FROM rust:1.46 as build
+FROM rust:1.48 as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -70,12 +77,10 @@ COPY ./build.rs ./build.rs
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y libmariadb3:amd64 && \
-    mkdir -pv /tmp/dpkg && \
-    cd /tmp/dpkg && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i *.deb && \
-    rm -rf /tmp/dpkg
+RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
+    apt-get download libmariadb-dev-compat:amd64 && \
+    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
+    rm -rvf ./libmariadb-dev-compat*.deb

# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
@@ -123,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
+dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -149,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,24 +1,31 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
-FROM rust:1.46 as build
+FROM rust:1.48 as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -70,12 +77,10 @@ COPY ./build.rs ./build.rs
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y libmariadb3:amd64 && \
-    mkdir -pv /tmp/dpkg && \
-    cd /tmp/dpkg && \
-    apt-get download libmariadb-dev-compat:amd64 && \
-    dpkg --force-all -i *.deb && \
-    rm -rf /tmp/dpkg
+RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
+    apt-get download libmariadb-dev-compat:amd64 && \
+    dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
+    rm -rvf ./libmariadb-dev-compat*.deb

# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
@@ -123,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
+dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -149,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,26 +1,33 @@
# This file was generated using a Jinja2 template.
-# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
-# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
-# It can be viewed in multiple ways:
-# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
-# - From the console, with the following commands:
-# docker pull bitwardenrs/web-vault:v2.16.1
-# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.16.1
-#
-# - To do the opposite, and get the tag from the hash, you can do:
-# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303
-FROM bitwardenrs/web-vault@sha256:e40228f94cead5e50af6575fb39850a002dad146dab6836e5da5663e6d214303 as vault
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull bitwardenrs/web-vault:v2.18.1b
+#     $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
+#     [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
+#     [bitwardenrs/web-vault:v2.18.1b]
+#
+FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault

########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build

-# Alpine only works on SQlite
+# Alpine-based ARM (musl) only supports sqlite during compile time.
ARG DB=sqlite

# Build time options to avoid dpkg warnings and help with reproducible builds.
@@ -32,7 +39,6 @@ RUN rustup set profile minimal
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s' ENV RUSTFLAGS='-C link-arg=-s'
# Creates a dummy project used to grab dependencies # Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app RUN USER=root cargo new --bin /app
WORKDIR /app WORKDIR /app
@@ -78,8 +84,9 @@ RUN [ "cross-build-start" ]
RUN apk add --no-cache \
openssl \
curl \
+dumb-init \
+sqlite \
ca-certificates

-RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit

RUN mkdir /data
@@ -102,5 +109,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
-CMD ["catatonit", "/start.sh"]
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
+CMD ["/start.sh"]


@@ -10,7 +10,7 @@ Docker Hub hooks provide these predefined [environment variables](https://docs.d
* `DOCKER_TAG`: the Docker repository tag being built.
* `IMAGE_NAME`: the name and tag of the Docker repository being built. (This variable is a combination of `DOCKER_REPO:DOCKER_TAG`.)

-The current multi-arch image build relies on the original bitwarden_rs Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/database/OS combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
+The current multi-arch image build relies on the original bitwarden_rs Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/distro combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
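For contrast, a QEMU-based multi-arch build would look roughly like the sketch below (hypothetical repo and tag; assumes QEMU binfmt emulation has been registered on the host). Every non-native platform runs under emulation, which is typically far slower than cross-compiling, hence the approach described above:

    docker run --privileged --rm tonistiigi/binfmt --install all  # one-time: register QEMU handlers
    docker buildx build \
        --platform linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64 \
        --tag example/bitwarden_rs:testing \
        --push \
        .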
## References


@@ -1,19 +1,16 @@
-# The default Debian-based images support these arches for all database connections
-#
-# Other images (Alpine-based) currently
-# support only a subset of these.
+# The default Debian-based images support these arches for all database backends.
arches=(
    amd64
-    arm32v6
-    arm32v7
-    arm64v8
+    armv6
+    armv7
+    arm64
)

if [[ "${DOCKER_TAG}" == *alpine ]]; then
-    # The Alpine build currently only works for amd64.
-    os_suffix=.alpine
+    # The Alpine image build currently only works for certain arches.
+    distro_suffix=.alpine
    arches=(
        amd64
-        arm32v7
+        armv7
    )
fi
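As a quick sanity check of the logic above (hypothetical tag; the actual consumption happens in the build hook below), sourcing this file with an Alpine tag selects the `.alpine` Dockerfiles for the reduced arch list:

    export DOCKER_TAG="1.19.0-alpine"   # hypothetical tag
    source ./hooks/arches.sh
    for arch in "${arches[@]}"; do
        echo "docker/${arch}/Dockerfile${distro_suffix}"
    done
    # docker/amd64/Dockerfile.alpine
    # docker/armv7/Dockerfile.alpine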


@@ -4,11 +4,42 @@ echo ">>> Building images..."
source ./hooks/arches.sh

+if [[ -z "${SOURCE_COMMIT}" ]]; then
+    # This var is typically predefined by Docker Hub, but it won't be
+    # when testing locally.
+    SOURCE_COMMIT="$(git rev-parse HEAD)"
+fi
+
+# Construct a version string in the style of `build.rs`.
+GIT_EXACT_TAG="$(git describe --tags --abbrev=0 --exact-match 2>/dev/null)"
+if [[ -n "${GIT_EXACT_TAG}" ]]; then
+    SOURCE_VERSION="${GIT_EXACT_TAG}"
+else
+    GIT_LAST_TAG="$(git describe --tags --abbrev=0)"
+    SOURCE_VERSION="${GIT_LAST_TAG}-${SOURCE_COMMIT:0:8}"
+fi
+
+LABELS=(
+    # https://github.com/opencontainers/image-spec/blob/master/annotations.md
+    org.opencontainers.image.created="$(date --utc --iso-8601=seconds)"
+    org.opencontainers.image.documentation="https://github.com/dani-garcia/bitwarden_rs/wiki"
+    org.opencontainers.image.licenses="GPL-3.0-only"
+    org.opencontainers.image.revision="${SOURCE_COMMIT}"
+    org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}"
+    org.opencontainers.image.url="https://hub.docker.com/r/${DOCKER_REPO#*/}"
+    org.opencontainers.image.version="${SOURCE_VERSION}"
+)
+LABEL_ARGS=()
+for label in "${LABELS[@]}"; do
+    LABEL_ARGS+=(--label "${label}")
+done

set -ex

for arch in "${arches[@]}"; do
    docker build \
+        "${LABEL_ARGS[@]}" \
        -t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-        -f docker/${arch}/Dockerfile${os_suffix} \
+        -f docker/${arch}/Dockerfile${distro_suffix} \
        .
done
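To illustrate the version-string logic above (hypothetical tag and commit hash): when HEAD sits exactly on a release tag, the `--exact-match` describe succeeds and the tag alone is used; otherwise the last tag plus the short commit hash is used:

    # HEAD exactly on release tag 1.19.0 (hypothetical):
    #   SOURCE_VERSION=1.19.0
    # HEAD a few commits past the tag, at commit 2b34f9a1... (hypothetical):
    #   SOURCE_VERSION=1.19.0-2b34f9a1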

hooks/pre_build (new executable file)

@@ -0,0 +1,28 @@
#!/bin/bash
set -ex
# If requested, print some environment info for troubleshooting.
if [[ -n "${DOCKER_HUB_DEBUG}" ]]; then
id
pwd
df -h
env
docker info
docker version
fi
# Install build dependencies.
deps=(
jq
)
apt-get update
apt-get install -y "${deps[@]}"
# Docker Hub uses a shallow clone and doesn't fetch tags, which breaks some
# Git operations that we perform later, so fetch the complete history and
# tags first. Note that if the build is cached, the clone may have been
# unshallowed already; if so, unshallowing will fail, so skip it.
if [[ -f .git/shallow ]]; then
git fetch --unshallow --tags
fi
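The unshallow step matters because the build hook's `git describe` calls need the tag history; on a fresh shallow clone without tags, the fallback path would fail roughly like this (illustrative output):

    $ git describe --tags --abbrev=0
    fatal: No names found, cannot describe anything.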


@@ -1,117 +1,138 @@
#!/bin/bash

-echo ">>> Pushing images..."
-
-export DOCKER_CLI_EXPERIMENTAL=enabled
-
-declare -A annotations=(
-    [amd64]="--os linux --arch amd64"
-    [arm32v6]="--os linux --arch arm --variant v6"
-    [arm32v7]="--os linux --arch arm --variant v7"
-    [arm64v8]="--os linux --arch arm64 --variant v8"
-)
-
source ./hooks/arches.sh

+export DOCKER_CLI_EXPERIMENTAL=enabled
+
+# Join a list of args with a single char.
+# Ref: https://stackoverflow.com/a/17841619
+join() { local IFS="$1"; shift; echo "$*"; }

set -ex
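For reference, the `join` helper above concatenates its remaining arguments using its first argument as the separator, e.g. (hypothetical values):

    join "," linux/amd64 linux/arm/v6 linux/arm/v7
    # linux/amd64,linux/arm/v6,linux/arm/v7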
-declare -A images
+echo ">>> Starting local Docker registry..."
+
+# Docker Buildx's `docker-container` driver is needed for multi-platform
+# builds, but it can't access existing images on the Docker host (like the
+# cross-compiled ones we just built). Those images first need to be pushed to
+# a registry -- Docker Hub could be used, but since it's not trivial to clean
+# up those intermediate images on Docker Hub, it's easier to just run a local
+# Docker registry, which gets cleaned up automatically once the build job ends.
+#
+# https://docs.docker.com/registry/deploying/
+# https://hub.docker.com/_/registry
+#
+# Use host networking so the buildx container can access the registry via
+# localhost.
+#
+docker run -d --name registry --network host registry:2  # defaults to port 5000
+
+# Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
+# Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.
+LOCAL_REGISTRY="localhost:5000"
+REPO="${DOCKER_REPO#*/}"
+LOCAL_REPO="${LOCAL_REGISTRY}/${REPO}"
+
+echo ">>> Pushing images to local registry..."
+
for arch in ${arches[@]}; do
-    images[$arch]="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
+    docker_image="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
+    local_image="${LOCAL_REPO}:${DOCKER_TAG}-${arch}"
+    docker tag "${docker_image}" "${local_image}"
+    docker push "${local_image}"
done
-# Push the images that were just built; manifest list creation fails if the
-# images (manifests) referenced don't already exist in the Docker registry.
-for image in "${images[@]}"; do
-    docker push "${image}"
-done
-
-manifest_lists=("${DOCKER_REPO}:${DOCKER_TAG}")
-
-# If the Docker tag starts with a version number, assume the latest release is
-# being pushed. Add an extra manifest (`latest` or `alpine`, as appropriate)
+echo ">>> Setting up Docker Buildx..."
+
+# Same as earlier, use host networking so the buildx container can access the
+# registry via localhost.
+#
+# Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
+#
+docker buildx create --name builder --use --driver-opt network=host
+
+echo ">>> Running Docker Buildx..."
+
+tags=("${DOCKER_REPO}:${DOCKER_TAG}")
+
+# If the Docker tag starts with a version number, assume the latest release
+# is being pushed. Add an extra tag (`latest` or `alpine`, as appropriate)
# to make it easier for users to track the latest release.
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
    if [[ "${DOCKER_TAG}" == *alpine ]]; then
-        manifest_lists+=(${DOCKER_REPO}:alpine)
+        tags+=(${DOCKER_REPO}:alpine)
    else
-        manifest_lists+=(${DOCKER_REPO}:latest)
-        # Add an extra `latest-arm32v6` tag; Docker can't seem to properly
-        # auto-select that image on Armv6 platforms like Raspberry Pi 1 and Zero
-        # (https://github.com/moby/moby/issues/41017).
-        #
-        # Add this tag only for the SQLite image, as the MySQL and PostgreSQL
-        # builds don't currently work on non-amd64 arches.
-        #
-        # TODO: Also add an `alpine-arm32v6` tag if multi-arch support for
-        # Alpine-based bitwarden_rs images is implemented before this Docker
-        # issue is fixed.
-        if [[ ${DOCKER_REPO} == *server ]]; then
-            docker tag "${DOCKER_REPO}:${DOCKER_TAG}-arm32v6" "${DOCKER_REPO}:latest-arm32v6"
-            docker push "${DOCKER_REPO}:latest-arm32v6"
-        fi
+        tags+=(${DOCKER_REPO}:latest)
    fi
fi
-for manifest_list in "${manifest_lists[@]}"; do
-    # Create the (multi-arch) manifest list of arch-specific images.
-    docker manifest create ${manifest_list} ${images[@]}
-
-    # Make sure each image manifest is annotated with the correct arch info.
-    # Docker does not auto-detect the arch of each cross-compiled image, so
-    # everything would appear as `linux/amd64` otherwise.
-    for arch in "${arches[@]}"; do
-        docker manifest annotate ${annotations[$arch]} ${manifest_list} ${images[$arch]}
-    done
-
-    # Push the manifest list.
-    docker manifest push --purge ${manifest_list}
-done
+tag_args=()
+for tag in "${tags[@]}"; do
+    tag_args+=(--tag "${tag}")
+done
-# Avoid logging credentials and tokens.
-set +ex
-
-# Delete the arch-specific tags, if credentials for doing so are available.
-# Note that `DOCKER_PASSWORD` must be the actual user password. Passing a JWT
-# obtained using a personal access token results in a 403 error with
-# {"detail": "access to the resource is forbidden with personal access token"}
-if [[ -z "${DOCKER_USERNAME}" || -z "${DOCKER_PASSWORD}" ]]; then
-    exit 0
-fi
-
-# Given a JSON input on stdin, extract the string value associated with the
-# specified key. This avoids an extra dependency on a tool like `jq`.
-extract() {
-    local key="$1"
-    # Extract "<key>":"<val>" (assumes key/val won't contain double quotes).
-    # The colon may have whitespace on either side.
-    grep -o "\"${key}\"[[:space:]]*:[[:space:]]*\"[^\"]\+\"" |
-    # Extract just <val> by deleting the last '"', and then greedily deleting
-    # everything up to '"'.
-    sed -e 's/"$//' -e 's/.*"//'
-}
-
-echo ">>> Getting API token..."
-jwt=$(curl -sS -X POST \
-          -H "Content-Type: application/json" \
-          -d "{\"username\":\"${DOCKER_USERNAME}\",\"password\": \"${DOCKER_PASSWORD}\"}" \
-          "https://hub.docker.com/v2/users/login" |
-      extract 'token')
-
-# Strip the registry portion from `index.docker.io/user/repo`.
-repo="${DOCKER_REPO#*/}"
-
-for arch in ${arches[@]}; do
-    # Don't delete the `arm32v6` tag; Docker can't seem to properly
-    # auto-select that image on Armv6 platforms like Raspberry Pi 1 and Zero
-    # (https://github.com/moby/moby/issues/41017).
-    if [[ ${arch} == 'arm32v6' ]]; then
-        continue
-    fi
-    tag="${DOCKER_TAG}-${arch}"
-    echo ">>> Deleting '${repo}:${tag}'..."
-    curl -sS -X DELETE \
-        -H "Authorization: Bearer ${jwt}" \
-        "https://hub.docker.com/v2/repositories/${repo}/tags/${tag}/"
-done
+# Docker Buildx takes a list of target platforms (OS/arch/variant), so map
+# the arch list to a platform list (assuming the OS is always `linux`).
+declare -A arch_to_platform=(
+    [amd64]="linux/amd64"
+    [armv6]="linux/arm/v6"
+    [armv7]="linux/arm/v7"
+    [arm64]="linux/arm64"
+)
+platforms=()
+for arch in ${arches[@]}; do
+    platforms+=("${arch_to_platform[$arch]}")
+done
+platforms="$(join "," "${platforms[@]}")"
+# Run the build, pushing the resulting images and multi-arch manifest list to
+# Docker Hub. The Dockerfile is read from stdin to avoid sending any build
+# context, which isn't needed here since the actual cross-compiled images
+# have already been built.
+docker buildx build \
+       --network host \
+       --build-arg LOCAL_REPO="${LOCAL_REPO}" \
+       --build-arg DOCKER_TAG="${DOCKER_TAG}" \
+       --platform "${platforms}" \
+       "${tag_args[@]}" \
+       --push \
+       - < ./docker/Dockerfile.buildx
+
+# Add an extra arch-specific tag for `arm32v6`; Docker can't seem to properly
+# auto-select that image on ARMv6 platforms like Raspberry Pi 1 and Zero
+# (https://github.com/moby/moby/issues/41017).
+#
+# Note that we use `arm32v6` instead of `armv6` to be consistent with the
+# existing bitwarden_rs tags, which adhere to the naming conventions of the
+# Docker per-architecture repos (e.g., https://hub.docker.com/u/arm32v6).
+# Unfortunately, these per-arch repo names aren't always consistent with the
+# corresponding platform (OS/arch/variant) IDs, particularly in the case of
+# 32-bit ARM arches (e.g., `linux/arm/v6` is used, not `linux/arm32/v6`).
+#
+# TODO: It looks like this issue should be fixed starting in Docker 20.10.0,
+# so this step can be removed once fixed versions are in wider distribution.
+#
+# Tags:
+#
+#   testing => testing-arm32v6
+#   testing-alpine => <ignored>
+#   x.y.z => x.y.z-arm32v6, latest-arm32v6
+#   x.y.z-alpine => <ignored>
+#
+if [[ "${DOCKER_TAG}" != *alpine ]]; then
+    image="${DOCKER_REPO}":"${DOCKER_TAG}"
+
+    # Fetch the multi-arch manifest list and find the digest of the armv6 image.
+    filter='.manifests|.[]|select(.platform.architecture=="arm" and .platform.variant=="v6")|.digest'
+    digest="$(docker manifest inspect "${image}" | jq -r "${filter}")"
+
+    # Pull the armv6 image by digest, retag it, and repush it.
+    docker pull "${DOCKER_REPO}"@"${digest}"
+    docker tag "${DOCKER_REPO}"@"${digest}" "${image}"-arm32v6
+    docker push "${image}"-arm32v6
+
+    if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
+        docker tag "${image}"-arm32v6 "${DOCKER_REPO}:latest"-arm32v6
+        docker push "${DOCKER_REPO}:latest"-arm32v6
+    fi
+fi
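For context, the `jq` filter above picks the ARMv6 entry out of a manifest list shaped roughly like this (abbreviated output, hypothetical repo and digests):

    $ docker manifest inspect example/bitwarden_rs:testing
    {
      "manifests": [
        { "digest": "sha256:aaaa...", "platform": { "architecture": "amd64", "os": "linux" } },
        { "digest": "sha256:bbbb...", "platform": { "architecture": "arm", "os": "linux", "variant": "v6" } }
      ]
    }
    # the filter would print: sha256:bbbb...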


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT 1;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT true;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT 1;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;


@@ -1 +1 @@
-nightly-2020-07-11
+nightly-2021-01-25


@@ -1,8 +1,9 @@
use once_cell::sync::Lazy;
use serde::de::DeserializeOwned;
use serde_json::Value;
-use std::process::Command;
+use std::{env, process::Command, time::Duration};

+use reqwest::{blocking::Client, header::USER_AGENT};

use rocket::{
    http::{Cookie, Cookies, SameSite},
    request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
@@ -12,13 +13,13 @@ use rocket::{
use rocket_contrib::json::Json;

use crate::{
-    api::{ApiResult, EmptyResult, JsonResult},
+    api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
    auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
    config::ConfigBuilder,
    db::{backup_database, models::*, DbConn, DbConnType},
    error::{Error, MapResult},
    mail,
-    util::get_display_size,
+    util::{format_naive_datetime_local, get_display_size},
    CONFIG,
};
@@ -36,7 +37,10 @@ pub fn routes() -> Vec<Route> {
        logout,
        delete_user,
        deauth_user,
+        disable_user,
+        enable_user,
        remove_2fa,
+        update_user_org_type,
        update_revision_users,
        post_config,
        delete_config,
@@ -44,10 +48,22 @@ pub fn routes() -> Vec<Route> {
        test_smtp,
        users_overview,
        organizations_overview,
+        delete_organization,
        diagnostics,
+        get_diagnostics_config
    ]
}
static DB_TYPE: Lazy<&str> = Lazy::new(|| {
    DbConnType::from_url(&CONFIG.database_url())
        .map(|t| match t {
            DbConnType::sqlite => "SQLite",
            DbConnType::mysql => "MySQL",
            DbConnType::postgresql => "PostgreSQL",
        })
        .unwrap_or("Unknown")
});

static CAN_BACKUP: Lazy<bool> = Lazy::new(|| {
    DbConnType::from_url(&CONFIG.database_url())
        .map(|t| t == DbConnType::sqlite)
@@ -291,14 +307,22 @@ fn get_users_json(_token: AdminToken, conn: DbConn) -> JsonResult {
#[get("/users/overview")] #[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> { fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn); let users = User::get_all(&conn);
let dt_fmt = "%Y-%m-%d %H:%M:%S %Z";
let users_json: Vec<Value> = users.iter() let users_json: Vec<Value> = users.iter()
.map(|u| { .map(|u| {
let mut usr = u.to_json(&conn); let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn)); usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn)); usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32)); usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, dt_fmt));
usr["last_active"] = match u.last_active(&conn) {
Some(dt) => json!(format_naive_datetime_local(&dt, dt_fmt)),
None => json!("Never")
};
usr usr
}).collect(); })
.collect();
let text = AdminTemplateData::users(users_json).render()?; let text = AdminTemplateData::users(users_json).render()?;
Ok(Html(text)) Ok(Html(text))
@@ -319,6 +343,24 @@ fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
    user.save(&conn)
}
#[post("/users/<uuid>/disable")]
fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
Device::delete_all_by_user(&user.uuid, &conn)?;
user.reset_security_stamp();
user.enabled = false;
user.save(&conn)
}
#[post("/users/<uuid>/enable")]
fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
user.enabled = true;
user.save(&conn)
}
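A minimal sketch of exercising these new endpoints from a shell, assuming an authenticated admin session cookie saved in `cookies.txt`, the admin routes mounted under `/admin`, and a hypothetical host and user UUID:

    curl -sS -b cookies.txt -X POST "https://vault.example.com/admin/users/${USER_UUID}/disable"
    curl -sS -b cookies.txt -X POST "https://vault.example.com/admin/users/${USER_UUID}/enable"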
#[post("/users/<uuid>/remove-2fa")] #[post("/users/<uuid>/remove-2fa")]
fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult { fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?; let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
@@ -327,6 +369,41 @@ fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
    user.save(&conn)
}
#[derive(Deserialize, Debug)]
struct UserOrgTypeData {
    user_type: NumberOrString,
    user_uuid: String,
    org_uuid: String,
}

#[post("/users/org_type", data = "<data>")]
fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
    let data: UserOrgTypeData = data.into_inner();

    let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn) {
        Some(user) => user,
        None => err!("The specified user isn't a member of the organization"),
    };

    let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
        Some(new_type) => new_type as i32,
        None => err!("Invalid type"),
    };

    if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
        // Removing owner permission; check that at least one other owner remains.
        let num_owners = UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).len();
        if num_owners <= 1 {
            err!("Can't change the type of the last owner")
        }
    }

    user_to_edit.atype = new_type as i32;
    user_to_edit.save(&conn)
}
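Similarly, a hedged sketch of calling the new org-type endpoint (hypothetical host, cookie file, and placeholder UUIDs; `user_type` accepts a number or string, where `2` maps to a regular User in `UserOrgType`):

    curl -sS -b cookies.txt -X POST "https://vault.example.com/admin/users/org_type" \
         -H "Content-Type: application/json" \
         -d '{"user_type": "2", "user_uuid": "<user-uuid>", "org_uuid": "<org-uuid>"}'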
#[post("/users/update_revision")] #[post("/users/update_revision")]
fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult { fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn) User::update_all_revisions(&conn)
@@ -335,19 +412,27 @@ fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
#[get("/organizations/overview")] #[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> { fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn); let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations.iter().map(|o| { let organizations_json: Vec<Value> = organizations.iter()
.map(|o| {
let mut org = o.to_json(); let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn)); org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn)); org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn)); org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32)); org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org org
}).collect(); })
.collect();
let text = AdminTemplateData::organizations(organizations_json).render()?; let text = AdminTemplateData::organizations(organizations_json).render()?;
Ok(Html(text)) Ok(Html(text))
} }
#[post("/organizations/<uuid>/delete")]
fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).map_res("Organization doesn't exist")?;
org.delete(&conn)
}
#[derive(Deserialize)]
struct WebVaultVersion {
    version: String,
@@ -364,77 +449,110 @@ struct GitCommit {
}

fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
-    use reqwest::{blocking::Client, header::USER_AGENT};
-    use std::time::Duration;
    let github_api = Client::builder().build()?;

-    Ok(
-        github_api.get(url)
-            .timeout(Duration::from_secs(10))
-            .header(USER_AGENT, "Bitwarden_RS")
-            .send()?
-            .error_for_status()?
-            .json::<T>()?
-    )
+    Ok(github_api
+        .get(url)
+        .timeout(Duration::from_secs(10))
+        .header(USER_AGENT, "Bitwarden_RS")
+        .send()?
+        .error_for_status()?
+        .json::<T>()?)
}
fn has_http_access() -> bool {
    let http_access = Client::builder().build().unwrap();
    match http_access
        .head("https://github.com/dani-garcia/bitwarden_rs")
        .timeout(Duration::from_secs(10))
        .header(USER_AGENT, "Bitwarden_RS")
        .send()
    {
        Ok(r) => r.status().is_success(),
        _ => false,
    }
}
#[get("/diagnostics")] #[get("/diagnostics")]
fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> { fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
-    use std::net::ToSocketAddrs;
-    use chrono::prelude::*;
    use crate::util::read_file_string;
+    use chrono::prelude::*;
+    use std::net::ToSocketAddrs;

+    // Get current running versions
    let vault_version_path = format!("{}/{}", CONFIG.web_vault_folder(), "version.json");
    let vault_version_str = read_file_string(&vault_version_path)?;
    let web_vault_version: WebVaultVersion = serde_json::from_str(&vault_version_str)?;

-    let github_ips = ("github.com", 0).to_socket_addrs().map(|mut i| i.next());
-    let (dns_resolved, dns_ok) = match github_ips {
-        Ok(Some(a)) => (a.ip().to_string(), true),
-        _ => ("Could not resolve domain name.".to_string(), false),
-    };
+    // Execute some environment checks
+    let running_within_docker = std::path::Path::new("/.dockerenv").exists() || std::path::Path::new("/run/.containerenv").exists();
+    let has_http_access = has_http_access();
+    let uses_proxy = env::var_os("HTTP_PROXY").is_some()
+        || env::var_os("http_proxy").is_some()
+        || env::var_os("HTTPS_PROXY").is_some()
+        || env::var_os("https_proxy").is_some();

+    // Check if we are able to resolve DNS entries
+    let dns_resolved = match ("github.com", 0).to_socket_addrs().map(|mut i| i.next()) {
+        Ok(Some(a)) => a.ip().to_string(),
+        _ => "Could not resolve domain name.".to_string(),
+    };

-    // If the DNS Check failed, do not even attempt to check for new versions since we were not able to resolve github.com
-    let (latest_release, latest_commit, latest_web_build) = if dns_ok {
+    // If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
+    // TODO: Maybe we need to cache this using a LazyStatic or something. Github only allows 60 requests per hour, and we use 3 here already.
+    let (latest_release, latest_commit, latest_web_build) = if has_http_access {
        (
            match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bitwarden_rs/releases/latest") {
                Ok(r) => r.tag_name,
                _ => "-".to_string(),
            },
            match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/bitwarden_rs/commits/master") {
                Ok(mut c) => {
                    c.sha.truncate(8);
                    c.sha
                }
                _ => "-".to_string(),
            },
-            match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
-                Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
-                _ => "-".to_string()
-            },
+            // Do not fetch the web-vault version when running within Docker.
+            // The web-vault version is embedded within the container itself, and should not be updated manually
+            if running_within_docker {
+                "-".to_string()
+            } else {
+                match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
+                    Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
+                    _ => "-".to_string(),
+                }
+            },
        )
    } else {
        ("-".to_string(), "-".to_string(), "-".to_string())
    };
-    // Run the date check as the last item right before filling the json.
-    // This should ensure that the time difference between the browser and the server is as minimal as possible.
-    let dt = Utc::now();
-    let server_time = dt.format("%Y-%m-%d %H:%M:%S").to_string();

    let diagnostics_json = json!({
        "dns_resolved": dns_resolved,
-        "server_time": server_time,
        "web_vault_version": web_vault_version.version,
        "latest_release": latest_release,
        "latest_commit": latest_commit,
        "latest_web_build": latest_web_build,
+        "running_within_docker": running_within_docker,
+        "has_http_access": has_http_access,
+        "uses_proxy": uses_proxy,
+        "db_type": *DB_TYPE,
+        "admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
+        "server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
    });
let text = AdminTemplateData::diagnostics(diagnostics_json).render()?; let text = AdminTemplateData::diagnostics(diagnostics_json).render()?;
Ok(Html(text)) Ok(Html(text))
} }
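// Illustration (not part of the diff): one way to act on the rate-limit TODO
// above. A minimal sketch, assuming once_cell is available; all names below
// (VersionCache, cached_latest_release, the one-hour TTL) are hypothetical,
// chosen only to show the Lazy + RwLock caching idea for the three GitHub calls.
use std::sync::RwLock;
use std::time::{Duration, Instant};
use once_cell::sync::Lazy;

struct VersionCache {
    fetched_at: Instant,
    latest_release: String,
}

static VERSION_CACHE: Lazy<RwLock<Option<VersionCache>>> = Lazy::new(|| RwLock::new(None));

fn cached_latest_release(fetch: impl Fn() -> String) -> String {
    let ttl = Duration::from_secs(60 * 60); // refresh at most once per hour
    if let Some(c) = VERSION_CACHE.read().unwrap().as_ref() {
        if c.fetched_at.elapsed() < ttl {
            return c.latest_release.clone();
        }
    }
    let latest_release = fetch();
    *VERSION_CACHE.write().unwrap() = Some(VersionCache {
        fetched_at: Instant::now(),
        latest_release: latest_release.clone(),
    });
    latest_release
}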
#[get("/diagnostics/config")]
fn get_diagnostics_config(_token: AdminToken) -> JsonResult {
let support_json = CONFIG.get_support_json();
Ok(Json(support_json))
}
#[post("/config", data = "<data>")] #[post("/config", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult { fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner(); let data: ConfigBuilder = data.into_inner();

View File

@@ -115,7 +115,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
        user.client_kdf_type = client_kdf_type;
    }

-    user.set_password(&data.MasterPasswordHash);
+    user.set_password(&data.MasterPasswordHash, None);
    user.akey = data.Key;

    // Add extra fields if present
@@ -232,7 +232,7 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
        err!("Invalid password")
    }

-    user.set_password(&data.NewMasterPasswordHash);
+    user.set_password(&data.NewMasterPasswordHash, Some("post_rotatekey"));
    user.akey = data.Key;
    user.save(&conn)
}
@@ -259,7 +259,7 @@ fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) ->
    user.client_kdf_iter = data.KdfIterations;
    user.client_kdf_type = data.Kdf;
-    user.set_password(&data.NewMasterPasswordHash);
+    user.set_password(&data.NewMasterPasswordHash, None);
    user.akey = data.Key;
    user.save(&conn)
}
@@ -338,6 +338,7 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
    user.akey = data.Key;
    user.private_key = Some(data.PrivateKey);
    user.reset_security_stamp();
+    user.reset_stamp_exception();

    user.save(&conn)
}
@@ -445,7 +446,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
    user.email_new = None;
    user.email_new_token = None;
-    user.set_password(&data.NewMasterPasswordHash);
+    user.set_password(&data.NewMasterPasswordHash, None);
    user.akey = data.Key;

    user.save(&conn)
@@ -460,7 +461,7 @@ fn post_verify_email(headers: Headers, _conn: DbConn) -> EmptyResult {
    }

    if let Err(e) = mail::send_verify_email(&user.email, &user.uuid) {
-        error!("Error sending delete account email: {:#?}", e);
+        error!("Error sending verify_email email: {:#?}", e);
    }

    Ok(())

View File

@@ -1,6 +1,7 @@
use std::collections::{HashMap, HashSet};
use std::path::Path;

+use chrono::{NaiveDateTime, Utc};
use rocket::{http::ContentType, request::Form, Data, Route};
use rocket_contrib::json::Json;
use serde_json::Value;
@@ -17,6 +18,16 @@ use crate::{
};

pub fn routes() -> Vec<Route> {
+    // Note that many routes have an `admin` variant; this seems to be
+    // because the stored procedure that upstream Bitwarden uses to determine
+    // whether the user can edit a cipher doesn't take into account whether
+    // the user is an org owner/admin. The `admin` variant first checks
+    // whether the user is an owner/admin of the relevant org, and if so,
+    // allows the operation unconditionally.
+    //
+    // bitwarden_rs factors in the org owner/admin status as part of
+    // determining the write accessibility of a cipher, so most
+    // admin/non-admin implementations can be shared.
    routes![
        sync,
        get_ciphers,
@@ -38,7 +49,7 @@ pub fn routes() -> Vec<Route> {
        post_cipher_admin,
        post_cipher_share,
        put_cipher_share,
-        put_cipher_share_seleted,
+        put_cipher_share_selected,
        post_cipher,
        put_cipher,
        delete_cipher_post,
@@ -50,6 +61,9 @@ pub fn routes() -> Vec<Route> {
        delete_cipher_selected,
        delete_cipher_selected_post,
        delete_cipher_selected_put,
+        delete_cipher_selected_admin,
+        delete_cipher_selected_post_admin,
+        delete_cipher_selected_put_admin,
        restore_cipher_put,
        restore_cipher_put_admin,
        restore_cipher_selected,
@@ -77,7 +91,9 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
    let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();

    let collections = Collection::find_by_user_uuid(&headers.user.uuid, &conn);
-    let collections_json: Vec<Value> = collections.iter().map(Collection::to_json).collect();
+    let collections_json: Vec<Value> = collections.iter()
+        .map(|c| c.to_json_details(&headers.user.uuid, &conn))
+        .collect();

    let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
    let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
@@ -181,6 +197,14 @@ pub struct CipherData {
    #[serde(rename = "Attachments")]
    _Attachments: Option<Value>, // Unused, contains map of {id: filename}
    Attachments2: Option<HashMap<String, Attachments2Data>>,
+
+    // The revision datetime (in ISO 8601 format) of the client's local copy
+    // of the cipher. This is used to prevent a client from updating a cipher
+    // when it doesn't have the latest version, as that can result in data
+    // loss. It's not an error when no value is provided; this can happen
+    // when using older client versions, or if the operation doesn't involve
+    // updating an existing cipher.
+    LastKnownRevisionDate: Option<String>,
}

#[derive(Deserialize, Debug)]
@@ -190,22 +214,46 @@ pub struct Attachments2Data {
    Key: String,
}

+/// Called when an org admin clones an org cipher.
#[post("/ciphers/admin", data = "<data>")]
fn post_ciphers_admin(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
-    let data: ShareCipherData = data.into_inner().data;
+    post_ciphers_create(data, headers, conn, nt)
+}
+
+/// Called when creating a new org-owned cipher, or cloning a cipher (whether
+/// user- or org-owned). When cloning a cipher to a user-owned cipher,
+/// `organizationId` is null.
+#[post("/ciphers/create", data = "<data>")]
+fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
+    let mut data: ShareCipherData = data.into_inner().data;
+
+    // Check if there are one or more collections selected when this cipher is part of an organization.
+    // err if this is not the case before creating an empty cipher.
+    if data.Cipher.OrganizationId.is_some() && data.CollectionIds.is_empty() {
+        err!("You must select at least one collection.");
+    }
+
+    // This check is usually only needed in update_cipher_from_data(), but we
+    // need it here as well to avoid creating an empty cipher in the call to
+    // cipher.save() below.
+    enforce_personal_ownership_policy(&data.Cipher, &headers, &conn)?;

    let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
    cipher.user_uuid = Some(headers.user.uuid.clone());
    cipher.save(&conn)?;

+    // When cloning a cipher, the Bitwarden clients seem to set this field
+    // based on the cipher being cloned (when creating a new cipher, it's set
+    // to null as expected). However, `cipher.created_at` is initialized to
+    // the current time, so the stale data check will end up failing down the
+    // line. Since this function only creates new ciphers (whether by cloning
+    // or otherwise), we can just ignore this field entirely.
+    data.Cipher.LastKnownRevisionDate = None;
+
    share_cipher_by_uuid(&cipher.uuid, data, &headers, &conn, &nt)
}

-#[post("/ciphers/create", data = "<data>")]
-fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
-    post_ciphers_admin(data, headers, conn, nt)
-}
-
+/// Called when creating a new user-owned cipher.
#[post("/ciphers", data = "<data>")]
fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
    let data: CipherData = data.into_inner().data;
@@ -216,6 +264,38 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}

+/// Enforces the personal ownership policy on user-owned ciphers, if applicable.
+/// A non-owner/admin user belonging to an org with the personal ownership policy
+/// enabled isn't allowed to create new user-owned ciphers or modify existing ones
+/// (that were created before the policy was applicable to the user). The user is
+/// allowed to delete or share such ciphers to an org, however.
+///
+/// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
+fn enforce_personal_ownership_policy(
+    data: &CipherData,
+    headers: &Headers,
+    conn: &DbConn
+) -> EmptyResult {
+    if data.OrganizationId.is_none() {
+        let user_uuid = &headers.user.uuid;
+        for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+            if policy.enabled && policy.has_type(OrgPolicyType::PersonalOwnership) {
+                let org_uuid = &policy.org_uuid;
+                match UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
+                    Some(user) =>
+                        if user.atype < UserOrgType::Admin &&
+                           user.has_status(UserOrgStatus::Confirmed) {
+                            err!("Due to an Enterprise Policy, you are restricted \
+                                  from saving items to your personal vault.")
+                        },
+                    None => err!("Error looking up user type"),
+                }
+            }
+        }
+    }
+    Ok(())
+}
+
pub fn update_cipher_from_data(
    cipher: &mut Cipher,
    data: CipherData,
@@ -225,6 +305,19 @@ pub fn update_cipher_from_data(
    nt: &Notify,
    ut: UpdateType,
) -> EmptyResult {
+    enforce_personal_ownership_policy(&data, headers, conn)?;
+
+    // Check that the client isn't updating an existing cipher with stale data.
+    if let Some(dt) = data.LastKnownRevisionDate {
+        match NaiveDateTime::parse_from_str(&dt, "%+") { // ISO 8601 format
+            Err(err) =>
+                warn!("Error parsing LastKnownRevisionDate '{}': {}", dt, err),
+            Ok(dt) if cipher.updated_at.signed_duration_since(dt).num_seconds() > 1 =>
+                err!("The client copy of this cipher is out of date. Resync the client and try again."),
+            Ok(_) => (),
+        }
+    }
+
    if cipher.organization_uuid.is_some() && cipher.organization_uuid != data.OrganizationId {
        err!("Organization mismatch. Please resync the client before updating the cipher")
    }
@@ -238,6 +331,11 @@ pub fn update_cipher_from_data(
        || cipher.is_write_accessible_to_user(&headers.user.uuid, &conn)
    {
        cipher.organization_uuid = Some(org_id);
+        // After some discussion in PR #1329, re-added the user_uuid = None again.
+        // TODO: Audit/Check the whole save/update cipher chain.
+        // Upstream uses the user_uuid to allow a cipher added by a user to an org to still be viewable/editable by that user
+        // even when the user has hide-passwords configured as their policy.
+        // Removing the line below would fix that, but we have to check which effect this would have on the rest of the code.
        cipher.user_uuid = None;
    } else {
        err!("You don't have permission to add cipher directly to organization")
@@ -281,6 +379,23 @@ pub fn update_cipher_from_data(
        }
    }

+    // Cleanup cipher data, like removing the 'Response' key.
+    // This key is generated somewhere in the Javascript client, so there is no way for us to fix this at the source.
+    // Also, upstream only retrieves the keys they actually want to store, and thus skips the 'Response' key.
+    // We do not mind which data is in it; this keeps our model more flexible when there are upstream changes.
+    // But we at least know we do not need to store and return this specific key.
+    fn _clean_cipher_data(mut json_data: Value) -> Value {
+        if json_data.is_array() {
+            json_data.as_array_mut()
+                .unwrap()
+                .iter_mut()
+                .for_each(|ref mut f| {
+                    f.as_object_mut().unwrap().remove("Response");
+                });
+        };
+        json_data
+    }
+
    let type_data_opt = match data.Type {
        1 => data.Login,
        2 => data.SecureNote,
@@ -289,23 +404,22 @@ pub fn update_cipher_from_data(
_ => err!("Invalid type"), _ => err!("Invalid type"),
}; };
let mut type_data = match type_data_opt { let type_data = match type_data_opt {
Some(data) => data, Some(mut data) => {
// Remove the 'Response' key from the base object.
data.as_object_mut().unwrap().remove("Response");
// Remove the 'Response' key from every Uri.
if data["Uris"].is_array() {
data["Uris"] = _clean_cipher_data(data["Uris"].clone());
}
data
},
None => err!("Data missing"), None => err!("Data missing"),
}; };
// TODO: ******* Backwards compat start **********
// To remove backwards compatibility, just delete this code,
// and remove the compat code from cipher::to_json
type_data["Name"] = Value::String(data.Name.clone());
type_data["Notes"] = data.Notes.clone().map(Value::String).unwrap_or(Value::Null);
type_data["Fields"] = data.Fields.clone().unwrap_or(Value::Null);
type_data["PasswordHistory"] = data.PasswordHistory.clone().unwrap_or(Value::Null);
// TODO: ******* Backwards compat end **********
cipher.name = data.Name; cipher.name = data.Name;
cipher.notes = data.Notes; cipher.notes = data.Notes;
cipher.fields = data.Fields.map(|f| f.to_string()); cipher.fields = data.Fields.map(|f| _clean_cipher_data(f).to_string() );
cipher.data = type_data.to_string(); cipher.data = type_data.to_string();
cipher.password_history = data.PasswordHistory.map(|f| f.to_string()); cipher.password_history = data.PasswordHistory.map(|f| f.to_string());
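// Illustration (not part of the diff): the 'Response' cleanup above in a
// self-contained form, using only serde_json. The helper name is hypothetical;
// it mirrors what _clean_cipher_data does to the Uris array and Fields.
use serde_json::{json, Value};

fn clean_cipher_data(mut json_data: Value) -> Value {
    // Remove the "Response" key from every object in the array.
    if let Some(arr) = json_data.as_array_mut() {
        for f in arr.iter_mut() {
            if let Some(obj) = f.as_object_mut() {
                obj.remove("Response");
            }
        }
    }
    json_data
}

fn main() {
    let uris = json!([{ "Uri": "https://example.com", "Response": null }]);
    assert_eq!(clean_cipher_data(uris), json!([{ "Uri": "https://example.com" }]));
}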
@@ -374,6 +488,7 @@ fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbC
    Ok(())
}

+/// Called when an org admin modifies an existing org cipher.
#[put("/ciphers/<uuid>/admin", data = "<data>")]
fn put_cipher_admin(
    uuid: String,
@@ -548,7 +663,7 @@ struct ShareSelectedCipherData {
}

#[put("/ciphers/share", data = "<data>")]
-fn put_cipher_share_seleted(
+fn put_cipher_share_selected(
    data: JsonUpcase<ShareSelectedCipherData>,
    headers: Headers,
    conn: DbConn,
@@ -862,22 +977,37 @@ fn delete_cipher_selected_post(data: JsonUpcase<Value>, headers: Headers, conn:
#[put("/ciphers/delete", data = "<data>")] #[put("/ciphers/delete", data = "<data>")]
fn delete_cipher_selected_put(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn delete_cipher_selected_put(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_multiple_ciphers(data, headers, conn, true, nt) _delete_multiple_ciphers(data, headers, conn, true, nt) // soft delete
}
#[delete("/ciphers/admin", data = "<data>")]
fn delete_cipher_selected_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected(data, headers, conn, nt)
}
#[post("/ciphers/delete-admin", data = "<data>")]
fn delete_cipher_selected_post_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected_post(data, headers, conn, nt)
}
#[put("/ciphers/delete-admin", data = "<data>")]
fn delete_cipher_selected_put_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected_put(data, headers, conn, nt)
} }
#[put("/ciphers/<uuid>/restore")] #[put("/ciphers/<uuid>/restore")]
fn restore_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn restore_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt) _restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
} }
#[put("/ciphers/<uuid>/restore-admin")] #[put("/ciphers/<uuid>/restore-admin")]
fn restore_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn restore_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt) _restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
} }
#[put("/ciphers/restore", data = "<data>")] #[put("/ciphers/restore", data = "<data>")]
fn restore_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult { fn restore_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_multiple_ciphers(data, headers, conn, nt) _restore_multiple_ciphers(data, &headers, &conn, &nt)
} }
#[derive(Deserialize)] #[derive(Deserialize)]
@@ -963,7 +1093,6 @@ fn delete_all(
        Some(user_org) => {
            if user_org.atype == UserOrgType::Owner {
                Cipher::delete_all_by_organization(&org_data.org_id, &conn)?;
-                Collection::delete_all_by_organization(&org_data.org_id, &conn)?;
                nt.send_user_update(UpdateType::Vault, &user);
                Ok(())
            } else {
@@ -1002,7 +1131,7 @@ fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, soft_del
    }

    if soft_delete {
-        cipher.deleted_at = Some(chrono::Utc::now().naive_utc());
+        cipher.deleted_at = Some(Utc::now().naive_utc());
        cipher.save(&conn)?;
        nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
    } else {
@@ -1033,7 +1162,7 @@ fn _delete_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbC
    Ok(())
}

-fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> EmptyResult {
+fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
    let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
        Some(cipher) => cipher,
        None => err!("Cipher doesn't exist"),
@@ -1047,10 +1176,10 @@ fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &No
    cipher.save(&conn)?;
    nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));

-    Ok(())
+    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}

-fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
+fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
    let data: Value = data.into_inner().data;

    let uuids = match data.get("Ids") {
@@ -1061,13 +1190,19 @@ fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: Db
None => err!("Request missing ids field"), None => err!("Request missing ids field"),
}; };
let mut ciphers: Vec<Value> = Vec::new();
for uuid in uuids { for uuid in uuids {
if let error @ Err(_) = _restore_cipher_by_uuid(uuid, &headers, &conn, &nt) { match _restore_cipher_by_uuid(uuid, headers, conn, nt) {
return error; Ok(json) => ciphers.push(json.into_inner()),
}; err => return err
}
} }
Ok(()) Ok(Json(json!({
"Data": ciphers,
"Object": "list",
"ContinuationToken": null
})))
} }
fn _delete_cipher_attachment_by_id( fn _delete_cipher_attachment_by_id(

View File

@@ -172,7 +172,7 @@ fn hibp_breach(username: String) -> JsonResult {
"Domain": "haveibeenpwned.com", "Domain": "haveibeenpwned.com",
"BreachDate": "2019-08-18T00:00:00Z", "BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z", "AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username), "Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"LogoPath": "bwrs_static/hibp.png", "LogoPath": "bwrs_static/hibp.png",
"PwnCount": 0, "PwnCount": 0,
"DataClasses": [ "DataClasses": [

View File

@@ -5,7 +5,7 @@ use serde_json::Value;
use crate::{
    api::{EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, Notify, NumberOrString, PasswordData, UpdateType},
-    auth::{decode_invite, AdminHeaders, Headers, OwnerHeaders},
+    auth::{decode_invite, AdminHeaders, Headers, OwnerHeaders, ManagerHeaders, ManagerHeadersLoose},
    db::{models::*, DbConn},
    mail, CONFIG,
};
@@ -47,7 +47,10 @@ pub fn routes() -> Vec<Route> {
        list_policies_token,
        get_policy,
        put_policy,
+        get_organization_tax,
        get_plans,
+        get_plans_tax_rates,
+        import,
    ]
}
@@ -217,7 +220,7 @@ fn get_org_collections(org_id: String, _headers: AdminHeaders, conn: DbConn) ->
#[post("/organizations/<org_id>/collections", data = "<data>")] #[post("/organizations/<org_id>/collections", data = "<data>")]
fn post_organization_collections( fn post_organization_collections(
org_id: String, org_id: String,
_headers: AdminHeaders, headers: ManagerHeadersLoose,
data: JsonUpcase<NewCollectionData>, data: JsonUpcase<NewCollectionData>,
conn: DbConn, conn: DbConn,
) -> JsonResult { ) -> JsonResult {
@@ -228,9 +231,22 @@ fn post_organization_collections(
None => err!("Can't find organization details"), None => err!("Can't find organization details"),
}; };
// Get the user_organization record so that we can check if the user has access to all collections.
let user_org = match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
Some(u) => u,
None => err!("User is not part of organization"),
};
let collection = Collection::new(org.uuid, data.Name); let collection = Collection::new(org.uuid, data.Name);
collection.save(&conn)?; collection.save(&conn)?;
// If the user doesn't have access to all collections, only in case of a Manger,
// then we need to save the creating user uuid (Manager) to the users_collection table.
// Else the user will not have access to his own created collection.
if !user_org.access_all {
CollectionUser::save(&headers.user.uuid, &collection.uuid, false, false, &conn)?;
}
Ok(Json(collection.to_json())) Ok(Json(collection.to_json()))
} }
@@ -238,7 +254,7 @@ fn post_organization_collections(
fn put_organization_collection_update(
    org_id: String,
    col_id: String,
-    headers: AdminHeaders,
+    headers: ManagerHeaders,
    data: JsonUpcase<NewCollectionData>,
    conn: DbConn,
) -> JsonResult {
@@ -249,7 +265,7 @@ fn put_organization_collection_update(
fn post_organization_collection_update(
    org_id: String,
    col_id: String,
-    _headers: AdminHeaders,
+    _headers: ManagerHeaders,
    data: JsonUpcase<NewCollectionData>,
    conn: DbConn,
) -> JsonResult {
@@ -317,7 +333,7 @@ fn post_organization_collection_delete_user(
}

#[delete("/organizations/<org_id>/collections/<col_id>")]
-fn delete_organization_collection(org_id: String, col_id: String, _headers: AdminHeaders, conn: DbConn) -> EmptyResult {
+fn delete_organization_collection(org_id: String, col_id: String, _headers: ManagerHeaders, conn: DbConn) -> EmptyResult {
    match Collection::find_by_uuid(&col_id, &conn) {
        None => err!("Collection not found"),
        Some(collection) => {
@@ -341,7 +357,7 @@ struct DeleteCollectionData {
fn post_organization_collection_delete(
    org_id: String,
    col_id: String,
-    headers: AdminHeaders,
+    headers: ManagerHeaders,
    _data: JsonUpcase<DeleteCollectionData>,
    conn: DbConn,
) -> EmptyResult {
@@ -349,7 +365,7 @@ fn post_organization_collection_delete(
}

#[get("/organizations/<org_id>/collections/<coll_id>/details")]
-fn get_org_collection_detail(org_id: String, coll_id: String, headers: AdminHeaders, conn: DbConn) -> JsonResult {
+fn get_org_collection_detail(org_id: String, coll_id: String, headers: ManagerHeaders, conn: DbConn) -> JsonResult {
    match Collection::find_by_uuid_and_user(&coll_id, &headers.user.uuid, &conn) {
        None => err!("Collection not found"),
        Some(collection) => {
@@ -363,7 +379,7 @@ fn get_org_collection_detail(org_id: String, coll_id: String, headers: AdminHead
}

#[get("/organizations/<org_id>/collections/<coll_id>/users")]
-fn get_collection_users(org_id: String, coll_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
+fn get_collection_users(org_id: String, coll_id: String, _headers: ManagerHeaders, conn: DbConn) -> JsonResult {
    // Get org and collection, check that collection is from org
    let collection = match Collection::find_by_uuid_and_org(&coll_id, &org_id, &conn) {
        None => err!("Collection not found in Organization"),
@@ -388,7 +404,7 @@ fn put_collection_users(
    org_id: String,
    coll_id: String,
    data: JsonUpcaseVec<CollectionData>,
-    _headers: AdminHeaders,
+    _headers: ManagerHeaders,
    conn: DbConn,
) -> EmptyResult {
    // Get org and collection, check that collection is from org
@@ -440,7 +456,7 @@ fn get_org_details(data: Form<OrgIdData>, headers: Headers, conn: DbConn) -> Jso
}

#[get("/organizations/<org_id>/users")]
-fn get_org_users(org_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
+fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) -> JsonResult {
    let users = UserOrganization::find_by_org(&org_id, &conn);
    let users_json: Vec<Value> = users.iter().map(|c| c.to_json_user_details(&conn)).collect();
@@ -953,7 +969,7 @@ fn list_policies_token(org_id: String, token: String, conn: DbConn) -> JsonResul
fn get_policy(org_id: String, pol_type: i32, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
    let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
        Some(pt) => pt,
-        None => err!("Invalid policy type"),
+        None => err!("Invalid or unsupported policy type"),
    };

    let policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
@@ -993,6 +1009,13 @@ fn put_policy(org_id: String, pol_type: i32, data: Json<PolicyData>, _headers: A
    Ok(Json(policy.to_json()))
}

+#[allow(unused_variables)]
+#[get("/organizations/<org_id>/tax")]
+fn get_organization_tax(org_id: String, _headers: Headers, _conn: DbConn) -> EmptyResult {
+    // Prevent a 404 error, which also causes Javascript errors.
+    err!("Only allowed when not self hosted.")
+}
+
#[get("/plans")]
fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
    Ok(Json(json!({
@@ -1043,4 +1066,111 @@ fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
        ],
        "ContinuationToken": null
    })))
}
#[get("/plans/sales-tax-rates")]
fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> JsonResult {
// Prevent a 404 error, which also causes Javascript errors.
Ok(Json(json!({
"Object": "list",
"Data": [],
"ContinuationToken": null
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportGroupData {
Name: String, // "GroupName"
ExternalId: String, // "cn=GroupName,ou=Groups,dc=example,dc=com"
Users: Vec<String>, // ["uid=user,ou=People,dc=example,dc=com"]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportUserData {
Email: String, // "user@maildomain.net"
ExternalId: String, // "uid=user,ou=People,dc=example,dc=com"
Deleted: bool,
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportData {
Groups: Vec<OrgImportGroupData>,
OverwriteExisting: bool,
Users: Vec<OrgImportUserData>,
}
#[post("/organizations/<org_id>/import", data = "<data>")]
fn import(org_id: String, data: JsonUpcase<OrgImportData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data = data.into_inner().data;
    // TODO: Currently we aren't storing the externalIds anywhere, so we also don't have a way
// to differentiate between auto-imported users and manually added ones.
// This means that this endpoint can end up removing users that were added manually by an admin,
// as opposed to upstream which only removes auto-imported users.
    // User needs to be admin or owner to use the Directory Connector
match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
Some(user_org) if user_org.atype >= UserOrgType::Admin => { /* Okay, nothing to do */ }
Some(_) => err!("User has insufficient permissions to use Directory Connector"),
None => err!("User not part of organization"),
};
for user_data in &data.Users {
if user_data.Deleted {
// If user is marked for deletion and it exists, delete it
if let Some(user_org) = UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn) {
user_org.delete(&conn)?;
}
// If user is not part of the organization, but it exists
} else if UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn).is_none() {
        } else if let Some(user) = User::find_by_mail(&user_data.Email, &conn) {
let user_org_status = if CONFIG.mail_enabled() {
UserOrgStatus::Invited as i32
} else {
UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
};
let mut new_org_user = UserOrganization::new(user.uuid.clone(), org_id.clone());
new_org_user.access_all = false;
new_org_user.atype = UserOrgType::User as i32;
new_org_user.status = user_org_status;
new_org_user.save(&conn)?;
if CONFIG.mail_enabled() {
let org_name = match Organization::find_by_uuid(&org_id, &conn) {
Some(org) => org.name,
None => err!("Error looking up organization"),
};
mail::send_invite(
&user_data.Email,
&user.uuid,
Some(org_id.clone()),
Some(new_org_user.uuid),
&org_name,
Some(headers.user.email.clone()),
)?;
}
}
}
}
// If this flag is enabled, any user that isn't provided in the Users list will be removed (by default they will be kept unless they have Deleted == true)
if data.OverwriteExisting {
for user_org in UserOrganization::find_by_org_and_type(&org_id, UserOrgType::User as i32, &conn) {
            if let Some(user_email) = User::find_by_uuid(&user_org.user_uuid, &conn).map(|u| u.email) {
if !data.Users.iter().any(|u| u.Email == user_email) {
user_org.delete(&conn)?;
}
}
}
}
Ok(())
}
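// Illustration (not part of the diff): the payload shape the structs above
// describe, deserialized with plain serde/serde_json. The email, the DNs, and
// the mirrored struct here are made-up examples, not data from the codebase.
use serde::Deserialize;

#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct ImportUser {
    Email: String,
    ExternalId: String,
    Deleted: bool,
}

fn main() {
    let body = r#"{
        "Groups": [],
        "OverwriteExisting": false,
        "Users": [
            { "Email": "user@maildomain.net",
              "ExternalId": "uid=user,ou=People,dc=example,dc=com",
              "Deleted": false }
        ]
    }"#;
    let value: serde_json::Value = serde_json::from_str(body).unwrap();
    let users: Vec<ImportUser> = serde_json::from_value(value["Users"].clone()).unwrap();
    println!("{:?}", users);
}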

View File

@@ -1,13 +1,15 @@
use std::{
+    collections::HashMap,
    fs::{create_dir_all, remove_file, symlink_metadata, File},
    io::prelude::*,
    net::{IpAddr, ToSocketAddrs},
+    sync::RwLock,
    time::{Duration, SystemTime},
};

use once_cell::sync::Lazy;
use regex::Regex;
-use reqwest::{blocking::Client, blocking::Response, header::HeaderMap, Url};
+use reqwest::{blocking::Client, blocking::Response, header, Url};
use rocket::{http::ContentType, http::Cookie, response::Content, Route};
use soup::prelude::*;
@@ -17,33 +19,67 @@ pub fn routes() -> Vec<Route> {
    routes![icon]
}

-const FALLBACK_ICON: &[u8; 344] = include_bytes!("../static/fallback-icon.png");
-
const ALLOWED_CHARS: &str = "_-.";

static CLIENT: Lazy<Client> = Lazy::new(|| {
+    // Generate the default headers
+    let mut default_headers = header::HeaderMap::new();
+    default_headers.insert(header::USER_AGENT, header::HeaderValue::from_static("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15"));
+    default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en-US,en;q=0.8"));
+    default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
+    default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
+    default_headers.insert(header::ACCEPT, header::HeaderValue::from_static("text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8"));
+
    // Reuse the client between requests
    Client::builder()
        .timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
-        .default_headers(_header_map())
+        .default_headers(default_headers)
        .build()
        .unwrap()
});

-static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"icon$|apple.*icon").unwrap());
-static ICON_HREF_REGEX: Lazy<Regex> =
-    Lazy::new(|| Regex::new(r"(?i)\w+\.(jpg|jpeg|png|ico)(\?.*)?$|^data:image.*base64").unwrap());
+// Build Regex only once since this takes a lot of time.
+static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap());
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());

+// Special HashMap which holds the user-defined Regex to speed up matching.
+static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new()));
+
+#[get("/<domain>/icon.png")]
+fn icon(domain: String) -> Option<Cached<Content<Vec<u8>>>> {
+    if !is_valid_domain(&domain) {
+        warn!("Invalid domain: {}", domain);
+        return None;
+    }
+
+    get_icon(&domain).map(|icon| Cached::long(Content(ContentType::new("image", "x-icon"), icon)))
+}
+
+/// Returns whether the provided domain is valid or not.
+///
+/// This does some manual checks and makes use of Url to do some basic checking.
+/// Domains can't be larger than 63 characters (not counting multiple subdomains) according to the RFCs, but we limit the total size to 255.
fn is_valid_domain(domain: &str) -> bool {
-    // Don't allow empty or too big domains or path traversal
-    if domain.is_empty() || domain.len() > 255 || domain.contains("..") {
+    // If parsing the domain fails using Url, it will not work with reqwest.
+    if let Err(parse_error) = Url::parse(format!("https://{}", domain).as_str()) {
+        debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
+        return false;
+    } else if domain.is_empty()
+        || domain.contains("..")
+        || domain.starts_with('.')
+        || domain.starts_with('-')
+        || domain.ends_with('-')
+    {
+        debug!("Domain validation error: '{}' is either empty, contains '..', starts with a '.', or starts or ends with a '-'", domain);
+        return false;
+    } else if domain.len() > 255 {
+        debug!("Domain validation error: '{}' exceeds 255 characters", domain);
        return false;
    }

-    // Only alphanumeric or specific characters
    for c in domain.chars() {
        if !c.is_alphanumeric() && !ALLOWED_CHARS.contains(c) {
+            debug!("Domain validation error: '{}' contains an invalid character '{}'", domain, c);
            return false;
        }
    }
@@ -51,21 +87,10 @@ fn is_valid_domain(domain: &str) -> bool {
    true
}
#[get("/<domain>/icon.png")]
fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
let icon_type = ContentType::new("image", "x-icon");
if !is_valid_domain(&domain) {
warn!("Invalid domain: {:#?}", domain);
return Cached::long(Content(icon_type, FALLBACK_ICON.to_vec()));
}
Cached::long(Content(icon_type, get_icon(&domain)))
}
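// Illustration (not part of the diff): a few hypothetical inputs and the
// verdicts the validation above would produce, assuming is_valid_domain is in scope.
#[test]
fn domain_validation_examples() {
    assert!(is_valid_domain("github.com"));      // plain domain: allowed
    assert!(!is_valid_domain(""));               // empty: rejected
    assert!(!is_valid_domain("example..com"));   // contains '..': rejected
    assert!(!is_valid_domain(".example.com"));   // leading '.': rejected
    assert!(!is_valid_domain("-example.com"));   // leading '-': rejected
    assert!(!is_valid_domain("exa mple.com"));   // invalid character: rejected
}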
/// TODO: This is extracted from IpAddr::is_global, which is unstable:
/// https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global
/// Remove once https://github.com/rust-lang/rust/issues/27709 is merged
+#[allow(clippy::nonminimal_bool)]
#[cfg(not(feature = "unstable"))]
fn is_global(ip: IpAddr) -> bool {
    match ip {
@@ -161,7 +186,7 @@ mod tests {
    }
}

-fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
+fn is_domain_blacklisted(domain: &str) -> bool {
    let mut is_blacklisted = CONFIG.icon_blacklist_non_global_ips()
        && (domain, 0)
            .to_socket_addrs()
@@ -179,7 +204,31 @@ fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
    // Skip the regex check if the previous one is true already
    if !is_blacklisted {
        if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
-            let regex = Regex::new(&blacklist).expect("Valid Regex");
+            let mut regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
+
+            // Use the pre-generated Regex stored in the Lazy HashMap if there is one, else generate it.
+            let regex = if let Some(regex) = regex_hashmap.get(&blacklist) {
+                regex
+            } else {
+                drop(regex_hashmap);
+
+                let mut regex_hashmap_write = ICON_BLACKLIST_REGEX.write().unwrap();
+                // Clear the current list if the previous key doesn't exist.
+                // This prevents the HashMap from growing when someone changes the regex via the admin interface.
+                if regex_hashmap_write.len() >= 1 {
+                    regex_hashmap_write.clear();
+                }
+
+                // Generate the regex to store into the Lazy Static HashMap.
+                let blacklist_regex = Regex::new(&blacklist).unwrap();
+                regex_hashmap_write.insert(blacklist.to_string(), blacklist_regex);
+                drop(regex_hashmap_write);
+
+                regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
+                regex_hashmap.get(&blacklist).unwrap()
+            };
+
            if regex.is_match(&domain) {
                warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
                is_blacklisted = true;
@@ -190,39 +239,38 @@ fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
    is_blacklisted
}

-fn get_icon(domain: &str) -> Vec<u8> {
+fn get_icon(domain: &str) -> Option<Vec<u8>> {
    let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);

+    // Check for expiration of negatively cached copy
+    if icon_is_negcached(&path) {
+        return None;
+    }
+
    if let Some(icon) = get_cached_icon(&path) {
-        return icon;
+        return Some(icon);
    }

    if CONFIG.disable_icon_download() {
-        return FALLBACK_ICON.to_vec();
+        return None;
    }

-    // Get the icon, or fallback in case of error
+    // Get the icon, or None in case of error
    match download_icon(&domain) {
        Ok(icon) => {
            save_icon(&path, &icon);
-            icon
+            Some(icon)
        }
        Err(e) => {
            error!("Error downloading icon: {:?}", e);
            let miss_indicator = path + ".miss";
-            let empty_icon = Vec::new();
-            save_icon(&miss_indicator, &empty_icon);
-            FALLBACK_ICON.to_vec()
+            save_icon(&miss_indicator, &[]);
+            None
        }
    }
}

fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
-    // Check for expiration of negatively cached copy
-    if icon_is_negcached(path) {
-        return Some(FALLBACK_ICON.to_vec());
-    }
-
    // Check for expiration of successfully cached copy
    if icon_is_expired(path) {
        return None;
@@ -284,6 +332,12 @@ impl Icon {
    }
}

+struct IconUrlResult {
+    iconlist: Vec<Icon>,
+    cookies: String,
+    referer: String,
+}

/// Returns a Result/Tuple which holds a Vector IconList and a string which holds the cookies from the last response.
/// There will always be a result with a string which will contain https://example.com/favicon.ico and an empty string for the cookies.
/// This does not mean that the location exists, but it is the default location browsers use.
@@ -296,24 +350,65 @@ impl Icon {
/// let (mut iconlist, cookie_str) = get_icon_url("github.com")?;
/// let (mut iconlist, cookie_str) = get_icon_url("gitlab.com")?;
/// ```
-fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
+fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
    // Default URL with secure and insecure schemes
    let ssldomain = format!("https://{}", domain);
    let httpdomain = format!("http://{}", domain);

+    // First check the domain as given during the request for both HTTPS and HTTP.
+    let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)) {
+        Ok(c) => Ok(c),
+        Err(e) => {
+            let mut sub_resp = Err(e);
+
+            // When the domain is not an IP, and has more than one dot, remove all subdomains.
+            let is_ip = domain.parse::<IpAddr>();
+            if is_ip.is_err() && domain.matches('.').count() > 1 {
+                let mut domain_parts = domain.split('.');
+                let base_domain = format!(
+                    "{base}.{tld}",
+                    tld = domain_parts.next_back().unwrap(),
+                    base = domain_parts.next_back().unwrap()
+                );
+                if is_valid_domain(&base_domain) {
+                    let sslbase = format!("https://{}", base_domain);
+                    let httpbase = format!("http://{}", base_domain);
+                    debug!("[get_icon_url]: Trying without subdomains '{}'", base_domain);
+
+                    sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase));
+                }
+
+            // When the domain is not an IP, and has fewer than 2 dots, try to add www. in front of it.
+            } else if is_ip.is_err() && domain.matches('.').count() < 2 {
+                let www_domain = format!("www.{}", domain);
+                if is_valid_domain(&www_domain) {
+                    let sslwww = format!("https://{}", www_domain);
+                    let httpwww = format!("http://{}", www_domain);
+                    debug!("[get_icon_url]: Trying with www. prefix '{}'", www_domain);
+
+                    sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww));
+                }
+            }
+            sub_resp
+        }
+    };
+
    // Create the iconlist
    let mut iconlist: Vec<Icon> = Vec::new();

    // Create the cookie_str to hold all the cookies from the response.
    // These cookies can be used to request/download the favicon image.
    // Some sites have extra security in place with for example XSRF Tokens.
-    let mut cookie_str = String::new();
-
-    let resp = get_page(&ssldomain).or_else(|_| get_page(&httpdomain));
+    let mut cookie_str = "".to_string();
+    let mut referer = "".to_string();

    if let Ok(content) = resp {
        // Extract the URL from the response in case redirects occurred (like @ gitlab.com)
        let url = content.url().clone();

+        // Get all the cookies and pass them on to the next function.
+        // Needed for XSRF Cookies for example (like @ mijn.ing.nl)
        let raw_cookies = content.headers().get_all("set-cookie");
        cookie_str = raw_cookies
            .iter()
@@ -327,6 +422,10 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
            })
            .collect::<String>();

+        // Set the referer to be used on the final request; some sites check this.
+        // Mostly used to prevent direct linking, among other security reasons.
+        referer = url.as_str().to_string();
+
        // Add the default favicon.ico to the list with the domain the content responded from.
        iconlist.push(Icon::new(35, url.join("/favicon.ico").unwrap().into_string()));
@@ -339,14 +438,18 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
        let favicons = soup
            .tag("link")
            .attr("rel", ICON_REL_REGEX.clone()) // Only use icon rels
-            .attr("href", ICON_HREF_REGEX.clone()) // Only allow specific extensions
+            .attr_name("href") // Make sure there is a href
            .find_all();

        // Loop through all the found icons and determine their priority
        for favicon in favicons {
            let sizes = favicon.get("sizes");
-            let href = favicon.get("href").expect("Missing href");
-            let full_href = url.join(&href).unwrap().into_string();
+            let href = favicon.get("href").unwrap();
+            // Skip invalid URLs
+            let full_href = match url.join(&href) {
+                Ok(h) => h.into_string(),
+                _ => continue,
+            };

            let priority = get_icon_priority(&full_href, sizes);
@@ -362,28 +465,33 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
    iconlist.sort_by_key(|x| x.priority);

    // There always is an icon in the list, so no need to check if it exists, and just return the first one
-    Ok((iconlist, cookie_str))
+    Ok(IconUrlResult{
+        iconlist,
+        cookies: cookie_str,
+        referer
+    })
}

fn get_page(url: &str) -> Result<Response, Error> {
-    get_page_with_cookies(url, "")
+    get_page_with_cookies(url, "", "")
}

-fn get_page_with_cookies(url: &str, cookie_str: &str) -> Result<Response, Error> {
-    if check_icon_domain_is_blacklisted(Url::parse(url).unwrap().host_str().unwrap_or_default()) {
-        err!("Favicon rel linked to a non blacklisted domain!");
+fn get_page_with_cookies(url: &str, cookie_str: &str, referer: &str) -> Result<Response, Error> {
+    if is_domain_blacklisted(Url::parse(url).unwrap().host_str().unwrap_or_default()) {
+        err!("Favicon rel linked to a blacklisted domain!");
    }

-    if cookie_str.is_empty() {
-        CLIENT.get(url).send()?.error_for_status().map_err(Into::into)
-    } else {
-        CLIENT
-            .get(url)
-            .header("cookie", cookie_str)
-            .send()?
-            .error_for_status()
-            .map_err(Into::into)
+    let mut client = CLIENT.get(url);
+    if !cookie_str.is_empty() {
+        client = client.header("Cookie", cookie_str)
    }
+    if !referer.is_empty() {
+        client = client.header("Referer", referer)
+    }
+
+    client.send()?
+        .error_for_status()
+        .map_err(Into::into)
}
/// Returns an Integer with the priority of the icon type to prefer.
@@ -411,7 +519,7 @@ fn get_icon_priority(href: &str, sizes: Option<String>) -> u8 {
        1
    } else if width == 64 {
        2
-    } else if width >= 24 && width <= 128 {
+    } else if (24..=128).contains(&width) {
        3
    } else if width == 16 {
        4
@@ -466,17 +574,17 @@ fn parse_sizes(sizes: Option<String>) -> (u16, u16) {
}

fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
-    if check_icon_domain_is_blacklisted(domain) {
+    if is_domain_blacklisted(domain) {
        err!("Domain is blacklisted", domain)
    }

-    let (iconlist, cookie_str) = get_icon_url(&domain)?;
+    let icon_result = get_icon_url(&domain)?;

    let mut buffer = Vec::new();

    use data_url::DataUrl;

-    for icon in iconlist.iter().take(5) {
+    for icon in icon_result.iconlist.iter().take(5) {
        if icon.href.starts_with("data:image") {
            let datauri = DataUrl::process(&icon.href).unwrap();
            // Check if we are able to decode the data uri
@@ -491,13 +599,13 @@ fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
_ => warn!("data uri is invalid"), _ => warn!("data uri is invalid"),
}; };
} else { } else {
match get_page_with_cookies(&icon.href, &cookie_str) { match get_page_with_cookies(&icon.href, &icon_result.cookies, &icon_result.referer) {
Ok(mut res) => { Ok(mut res) => {
info!("Downloaded icon from {}", icon.href); info!("Downloaded icon from {}", icon.href);
res.copy_to(&mut buffer)?; res.copy_to(&mut buffer)?;
break; break;
} },
Err(_) => info!("Download failed for {}", icon.href), _ => warn!("Download failed for {}", icon.href),
}; };
} }
} }
@@ -522,25 +630,3 @@ fn save_icon(path: &str, icon: &[u8]) {
        }
    }
}

-fn _header_map() -> HeaderMap {
-    // Set some default headers for the request.
-    // Use a browser like user-agent to make sure most websites will return there correct website.
-    use reqwest::header::*;
-
-    macro_rules! headers {
-        ($( $name:ident : $value:literal),+ $(,)? ) => {
-            let mut headers = HeaderMap::new();
-            $( headers.insert($name, HeaderValue::from_static($value)); )+
-            headers
-        };
-    }
-
-    headers! {
-        USER_AGENT: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299",
-        ACCEPT_LANGUAGE: "en-US,en;q=0.8",
-        CACHE_CONTROL: "no-cache",
-        PRAGMA: "no-cache",
-        ACCEPT: "text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8",
-    }
-}

View File

@@ -102,6 +102,14 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
        )
    }

+    // Check if the user is disabled
+    if !user.enabled {
+        err!(
+            "This user has been disabled",
+            format!("IP: {}. Username: {}.", ip.ip, username)
+        )
+    }
+
    let now = Local::now();

    if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {

View File

@@ -19,13 +19,12 @@ static SHOW_WEBSOCKETS_MSG: AtomicBool = AtomicBool::new(true);
#[get("/hub")] #[get("/hub")]
fn websockets_err() -> EmptyResult { fn websockets_err() -> EmptyResult {
if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_and_swap(true, false, Ordering::Relaxed) { if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok() {
err!( err!("
"########################################################### ###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work. '/notifications/hub' should be proxied to the websocket server or notifications won't work.
Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false. Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false.
###########################################################################################" ###########################################################################################\n")
)
} else { } else {
Err(Error::empty()) Err(Error::empty())
} }
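// Illustration (not part of the diff): compare_and_swap was deprecated in
// Rust 1.50 in favor of compare_exchange, which takes separate success and
// failure orderings and returns a Result. A generic one-shot flag looks like:
use std::sync::atomic::{AtomicBool, Ordering};

static SHOW_MSG: AtomicBool = AtomicBool::new(true);

fn main() {
    // Only the first caller wins the swap from true to false.
    if SHOW_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok() {
        println!("shown once");
    }
}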
@@ -161,7 +160,7 @@ impl WSHandler {
            }
        }
    };

    // Otherwise verify the query parameter value
    let path = hs.request.resource();
    if let Some(params) = path.split('?').nth(1) {

View File

@@ -215,12 +215,10 @@ pub fn generate_admin_claims() -> AdminJWTClaims {
//
// Bearer token authentication
//
-use rocket::{
-    request::{FromRequest, Request, Outcome},
-};
+use rocket::request::{FromRequest, Outcome, Request};

use crate::db::{
-    models::{Device, User, UserOrgStatus, UserOrgType, UserOrganization},
+    models::{CollectionUser, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
    DbConn,
};
@@ -298,7 +296,25 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
        };

        if user.security_stamp != claims.sstamp {
-            err_handler!("Invalid security stamp")
+            if let Some(stamp_exception) = user
+                .stamp_exception
+                .as_deref()
+                .and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
+            {
+                let current_route = match request.route().and_then(|r| r.name) {
+                    Some(name) => name,
+                    _ => err_handler!("Error getting current route for stamp exception"),
+                };
+
+                // Check if both match; if not, this route is not allowed with the current security stamp.
+                if stamp_exception.route != current_route {
+                    err_handler!("Invalid security stamp: Current route and exception route do not match")
+                } else if stamp_exception.security_stamp != claims.sstamp {
+                    err_handler!("Invalid security stamp for matched stamp exception")
+                }
+            } else {
+                err_handler!("Invalid security stamp")
+            }
        }

        Outcome::Success(Headers { host, device, user })
@@ -310,11 +326,13 @@ pub struct OrgHeaders {
     pub device: Device,
     pub user: User,
     pub org_user_type: UserOrgType,
+    pub org_user: UserOrganization,
+    pub org_id: String,
 }

-// org_id is usually the second param ("/organizations/<org_id>")
-// But there are cases where it is located in a query value.
-// First check the param, if this is not a valid uuid, we will try the query value.
+// org_id is usually the second path param ("/organizations/<org_id>"),
+// but there are cases where it is a query value.
+// First check the path, if this is not a valid uuid, try the query values.
 fn get_org_id(request: &Request) -> Option<String> {
     if let Some(Ok(org_id)) = request.get_param::<String>(1) {
         if uuid::Uuid::parse_str(&org_id).is_ok() {
@@ -370,6 +388,8 @@ impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
                            err_handler!("Unknown user type in the database")
                        }
                    },
+                    org_user,
+                    org_id,
                })
            }
            _ => err_handler!("Error getting the organization id"),
@@ -419,6 +439,127 @@ impl Into<Headers> for AdminHeaders {
     }
 }

+// col_id is usually the fourth path param ("/organizations/<org_id>/collections/<col_id>"),
+// but there could be cases where it is a query value.
+// First check the path, if this is not a valid uuid, try the query values.
+fn get_col_id(request: &Request) -> Option<String> {
+    if let Some(Ok(col_id)) = request.get_param::<String>(3) {
+        if uuid::Uuid::parse_str(&col_id).is_ok() {
+            return Some(col_id);
+        }
+    }
+
+    if let Some(Ok(col_id)) = request.get_query_value::<String>("collectionId") {
+        if uuid::Uuid::parse_str(&col_id).is_ok() {
+            return Some(col_id);
+        }
+    }
+
+    None
+}
+
+/// The ManagerHeaders are used to check if you are at least a Manager
+/// and have access to the specific collection provided via the <col_id>/collections/collectionId.
+/// This does strict checking on the collection_id, ManagerHeadersLoose does not.
+pub struct ManagerHeaders {
+    pub host: String,
+    pub device: Device,
+    pub user: User,
+    pub org_user_type: UserOrgType,
+}
+
+impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
+    type Error = &'static str;
+
+    fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
+        match request.guard::<OrgHeaders>() {
+            Outcome::Forward(_) => Outcome::Forward(()),
+            Outcome::Failure(f) => Outcome::Failure(f),
+            Outcome::Success(headers) => {
+                if headers.org_user_type >= UserOrgType::Manager {
+                    match get_col_id(request) {
+                        Some(col_id) => {
+                            let conn = match request.guard::<DbConn>() {
+                                Outcome::Success(conn) => conn,
+                                _ => err_handler!("Error getting DB"),
+                            };
+
+                            if !headers.org_user.has_full_access() {
+                                match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn) {
+                                    Some(_) => (),
+                                    None => err_handler!("The current user isn't a manager for this collection"),
+                                }
+                            }
+                        }
+                        _ => err_handler!("Error getting the collection id"),
+                    }
+
+                    Outcome::Success(Self {
+                        host: headers.host,
+                        device: headers.device,
+                        user: headers.user,
+                        org_user_type: headers.org_user_type,
+                    })
+                } else {
+                    err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
+                }
+            }
+        }
+    }
+}
+
+impl Into<Headers> for ManagerHeaders {
+    fn into(self) -> Headers {
+        Headers {
+            host: self.host,
+            device: self.device,
+            user: self.user,
+        }
+    }
+}
+
+/// The ManagerHeadersLoose is used when you at least need to be a Manager,
+/// but there is no collection_id sent with the request (either in the path or as form data).
+pub struct ManagerHeadersLoose {
+    pub host: String,
+    pub device: Device,
+    pub user: User,
+    pub org_user_type: UserOrgType,
+}
+
+impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeadersLoose {
+    type Error = &'static str;
+
+    fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
+        match request.guard::<OrgHeaders>() {
+            Outcome::Forward(_) => Outcome::Forward(()),
+            Outcome::Failure(f) => Outcome::Failure(f),
+            Outcome::Success(headers) => {
+                if headers.org_user_type >= UserOrgType::Manager {
+                    Outcome::Success(Self {
+                        host: headers.host,
+                        device: headers.device,
+                        user: headers.user,
+                        org_user_type: headers.org_user_type,
+                    })
+                } else {
+                    err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
+                }
+            }
+        }
+    }
+}
+
+impl Into<Headers> for ManagerHeadersLoose {
+    fn into(self) -> Headers {
+        Headers {
+            host: self.host,
+            device: self.device,
+            user: self.user,
+        }
+    }
+}
+
 pub struct OwnerHeaders {
     pub host: String,
     pub device: Device,
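
To make the stamp-exception logic above easier to follow in isolation, here is a hedged standalone re-statement of the check (assumes the serde/serde_json crates with the derive feature; `stamp_is_valid` is an illustrative helper, not a function from the patch):

use serde::{Deserialize, Serialize};

// Mirrors the UserStampException model referenced above.
#[derive(Serialize, Deserialize)]
struct UserStampException {
    route: String,
    security_stamp: String,
}

// A stale stamp is accepted only for the single route recorded in the exception.
fn stamp_is_valid(current_stamp: &str, exception_json: Option<&str>, claimed_stamp: &str, current_route: &str) -> bool {
    if current_stamp == claimed_stamp {
        return true;
    }
    match exception_json.and_then(|s| serde_json::from_str::<UserStampException>(s).ok()) {
        Some(ex) => ex.route == current_route && ex.security_stamp == claimed_stamp,
        None => false,
    }
}

fn main() {
    let ex = r#"{"route":"post_email","security_stamp":"old"}"#;
    assert!(stamp_is_valid("new", Some(ex), "old", "post_email"));
    assert!(!stamp_is_valid("new", Some(ex), "old", "some_other_route"));
}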

@@ -2,6 +2,7 @@ use std::process::exit;
 use std::sync::RwLock;

 use once_cell::sync::Lazy;
+use regex::Regex;
 use reqwest::Url;

 use crate::{
@@ -22,6 +23,21 @@ pub static CONFIG: Lazy<Config> = Lazy::new(|| {
     })
 });

+static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());
+const PRIVACY_CONFIG: &[&str] = &[
+    "allowed_iframe_ancestors",
+    "database_url",
+    "domain_origin",
+    "domain_path",
+    "domain",
+    "helo_name",
+    "org_creation_users",
+    "signups_domains_whitelist",
+    "smtp_from",
+    "smtp_host",
+    "smtp_username",
+];
+
 pub type Pass = String;

 macro_rules! make_config {
@@ -52,8 +68,34 @@ macro_rules! make_config {
         }

         impl ConfigBuilder {
+            #[allow(clippy::field_reassign_with_default)]
             fn from_env() -> Self {
-                dotenv::from_path(".env").ok();
+                match dotenv::from_path(".env") {
+                    Ok(_) => (),
+                    Err(e) => match e {
+                        dotenv::Error::LineParse(msg, pos) => {
+                            panic!("Error loading the .env file:\nNear {:?} on position {}\nPlease fix and restart!\n", msg, pos);
+                        },
+                        dotenv::Error::Io(ioerr) => match ioerr.kind() {
+                            std::io::ErrorKind::NotFound => {
+                                println!("[INFO] No .env file found.\n");
+                            },
+                            std::io::ErrorKind::PermissionDenied => {
+                                println!("[WARNING] Permission Denied while trying to read the .env file!\n");
+                            },
+                            _ => {
+                                println!("[WARNING] Reading the .env file failed:\n{:?}\n", ioerr);
+                            }
+                        },
+                        _ => {
+                            println!("[WARNING] Reading the .env file failed:\n{:?}\n", e);
+                        }
+                    }
+                };
+
                 let mut builder = ConfigBuilder::default();
                 $($(
@@ -171,9 +213,38 @@ macro_rules! make_config {
                }, )+
            ]}, )+ ])
        }
+
+        pub fn get_support_json(&self) -> serde_json::Value {
+            let cfg = {
+                let inner = &self.inner.read().unwrap();
+                inner.config.clone()
+            };
+
+            json!({ $($(
+                stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
+            )+)+ })
+        }
    }
    };
+
+    // Support string print
+    ( @supportstr $name:ident, $value:expr, Pass, option ) => { $value.as_ref().map(|_| String::from("***")) }; // Optional pass, we map to an Option<String> with "***"
+    ( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { String::from("***") }; // Required pass, we return "***"
+    ( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
+        if PRIVACY_CONFIG.contains(&stringify!($name)) {
+            json!($value.as_ref().map(|x| PRIVACY_REGEX.replace_all(&x.to_string(), "${1}*").to_string()))
+        } else {
+            json!($value)
+        }
+    };
+    ( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
+        if PRIVACY_CONFIG.contains(&stringify!($name)) {
+            json!(PRIVACY_REGEX.replace_all(&$value.to_string(), "${1}*").to_string())
+        } else {
+            json!($value)
+        }
+    };

    // Group or empty string
    ( @show ) => { "" };
    ( @show $lit:literal ) => { $lit };
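
To illustrate what the `@supportstr` arms do with `PRIVACY_REGEX`: since the pattern `[\w]` defines no capture group, the `${1}` in the replacement expands to the empty string, so every word character simply becomes `*`. A standalone sketch (assumes the regex and once_cell crates; `mask` is an illustrative helper):

use once_cell::sync::Lazy;
use regex::Regex;

static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());

fn mask(value: &str) -> String {
    // Word characters are starred out; separators like '.' and '@' survive,
    // which keeps the shape of a masked hostname or address recognizable.
    PRIVACY_REGEX.replace_all(value, "${1}*").to_string()
}

fn main() {
    assert_eq!(mask("smtp.example.com"), "****.*******.***");
}
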
@@ -388,29 +459,35 @@ make_config! {
     /// SMTP Email Settings
     smtp: _enable_smtp {
         /// Enabled
         _enable_smtp: bool, true, def, true;
         /// Host
         smtp_host: String, true, option;
         /// Enable Secure SMTP |> (Explicit) - Enabling this by default would use STARTTLS (Standard ports 587 or 25)
         smtp_ssl: bool, true, def, true;
         /// Force TLS |> (Implicit) - Enabling this would force the use of an SSL/TLS connection, instead of upgrading an insecure one with STARTTLS (Standard port 465)
         smtp_explicit_tls: bool, true, def, false;
         /// Port
         smtp_port: u16, true, auto, |c| if c.smtp_explicit_tls {465} else if c.smtp_ssl {587} else {25};
         /// From Address
         smtp_from: String, true, def, String::new();
         /// From Name
         smtp_from_name: String, true, def, "Bitwarden_RS".to_string();
         /// Username
         smtp_username: String, true, option;
         /// Password
         smtp_password: Pass, true, option;
         /// SMTP Auth mechanism |> Defaults for SSL is "Plain" and "Login" and nothing for Non-SSL connections. Possible values: ["Plain", "Login", "Xoauth2"]. Multiple options need to be separated by a comma ','.
         smtp_auth_mechanism: String, true, option;
         /// SMTP connection timeout |> Number of seconds when to stop trying to connect to the SMTP server
         smtp_timeout: u64, true, def, 15;
         /// Server name sent during HELO |> By default this value should be the machine's hostname, but it might need to be changed in case it trips some anti-spam filters
         helo_name: String, true, option;
+        /// Enable SMTP debugging (Know the risks!) |> DANGEROUS: Enabling this will output very detailed SMTP messages. This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
+        smtp_debug: bool, true, def, false;
+        /// Accept Invalid Certs (Know the risks!) |> DANGEROUS: Allow invalid certificates. This option introduces significant vulnerabilities to man-in-the-middle attacks!
+        smtp_accept_invalid_certs: bool, true, def, false;
+        /// Accept Invalid Hostnames (Know the risks!) |> DANGEROUS: Allow invalid hostnames. This option introduces significant vulnerabilities to man-in-the-middle attacks!
+        smtp_accept_invalid_hostnames: bool, true, def, false;
     },

     /// Email 2FA Settings
@@ -427,7 +504,6 @@ make_config! {
 }

 fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
     // Validate connection URL is valid and DB feature is enabled
     DbConnType::from_url(&cfg.database_url)?;
@@ -441,7 +517,9 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
     let dom = cfg.domain.to_lowercase();
     if !dom.starts_with("http://") && !dom.starts_with("https://") {
-        err!("DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'");
+        err!(
+            "DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'"
+        );
     }

     let whitelist = &cfg.signups_domains_whitelist;
@@ -450,10 +528,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
     }

     let org_creation_users = cfg.org_creation_users.trim().to_lowercase();
-    if !(org_creation_users.is_empty() || org_creation_users == "all" || org_creation_users == "none") {
-        if org_creation_users.split(',').any(|u| !u.contains('@')) {
-            err!("`ORG_CREATION_USERS` contains invalid email addresses");
-        }
+    if !(org_creation_users.is_empty() || org_creation_users == "all" || org_creation_users == "none")
+        && org_creation_users.split(',').any(|u| !u.contains('@'))
+    {
+        err!("`ORG_CREATION_USERS` contains invalid email addresses");
     }

     if let Some(ref token) = cfg.admin_token {
@@ -479,6 +557,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
         err!("Both `SMTP_HOST` and `SMTP_FROM` need to be set for email support")
     }

+    if cfg.smtp_host.is_some() && !cfg.smtp_from.contains('@') {
+        err!("SMTP_FROM does not contain a mandatory @ sign")
+    }
+
     if cfg.smtp_username.is_some() != cfg.smtp_password.is_some() {
         err!("Both `SMTP_USERNAME` and `SMTP_PASSWORD` need to be set to enable email authentication")
     }
@@ -496,6 +578,15 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
         }
     }

+    // Check if the icon blacklist regex is valid
+    if let Some(ref r) = cfg.icon_blacklist_regex {
+        let validate_regex = Regex::new(&r);
+        match validate_regex {
+            Ok(_) => (),
+            Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
+        }
+    }
+
     Ok(())
 }
@@ -536,7 +627,12 @@ impl Config {
         validate_config(&config)?;

         Ok(Config {
-            inner: RwLock::new(Inner { templates: load_templates(&config.templates_folder), config, _env, _usr }),
+            inner: RwLock::new(Inner {
+                templates: load_templates(&config.templates_folder),
+                config,
+                _env,
+                _usr,
+            }),
         })
     }
@@ -609,7 +705,7 @@ impl Config {
     /// Tests whether the specified user is allowed to create an organization.
     pub fn is_org_creation_allowed(&self, email: &str) -> bool {
         let users = self.org_creation_users();
-        if users == "" || users == "all" {
+        if users.is_empty() || users == "all" {
             true
         } else if users == "none" {
             false
@@ -663,8 +759,10 @@ impl Config {
             let akey_s = data_encoding::BASE64.encode(&akey);

             // Save the new value
-            let mut builder = ConfigBuilder::default();
-            builder._duo_akey = Some(akey_s.clone());
+            let builder = ConfigBuilder {
+                _duo_akey: Some(akey_s.clone()),
+                ..Default::default()
+            };
             self.update_config_partial(builder).ok();

             akey_s
@@ -778,14 +876,20 @@ fn js_escape_helper<'reg, 'rc>(
         .param(0)
         .ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;

+    let no_quote = h
+        .param(1)
+        .is_some();
+
     let value = param
         .value()
         .as_str()
         .ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;

-    let escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
-    let quoted_value = format!("&quot;{}&quot;", escaped_value);
+    let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
+    if !no_quote {
+        escaped_value = format!("&quot;{}&quot;", escaped_value);
+    }

-    out.write(&quoted_value)?;
+    out.write(&escaped_value)?;
     Ok(())
 }
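
The behavior of the reworked helper, restated as a plain function (handlebars wiring omitted; `js_escape` here is an illustrative standalone version). Note the hex escapes are crosswise on purpose to mirror the patch: `\x22` is the double quote and `\x27` is the single quote.

fn js_escape(value: &str, no_quote: bool) -> String {
    let mut escaped = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
    if !no_quote {
        // Without a second helper parameter the value is additionally wrapped
        // in HTML-encoded double quotes, as before this change.
        escaped = format!("&quot;{}&quot;", escaped);
    }
    escaped
}

fn main() {
    assert_eq!(js_escape("a'b", true), "a\\x22b");
    assert_eq!(js_escape("a", false), "&quot;a&quot;");
}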

@@ -67,7 +67,7 @@ pub fn generate_token(token_size: u32) -> Result<String, Error> {
     // token of fixed width, left-padding with 0 as needed.
     use rand::{thread_rng, Rng};
     let mut rng = thread_rng();
-    let number: u64 = rng.gen_range(low, high);
+    let number: u64 = rng.gen_range(low..high);
     let token = format!("{:0size$}", number, size = token_size as usize);

     Ok(token)
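
This tracks the rand 0.8 API change: `gen_range` now takes a single range expression instead of two arguments. A minimal sketch of the new call, including the fixed-width zero padding used by `generate_token` (assumes the rand crate):

use rand::{thread_rng, Rng};

fn main() {
    let mut rng = thread_rng();
    // rand 0.8: a range expression replaces gen_range(low, high).
    let number: u64 = rng.gen_range(0..1_000_000);
    // Left-pad with zeros to a fixed width of 6, as generate_token does.
    let token = format!("{:06}", number);
    assert_eq!(token.len(), 6);
}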

@@ -83,7 +83,12 @@ impl Cipher {
         use crate::util::format_date;

         let attachments = Attachment::find_by_cipher(&self.uuid, conn);
-        let attachments_json: Vec<Value> = attachments.iter().map(|c| c.to_json(host)).collect();
+        // When there are no attachments use null instead of an empty array
+        let attachments_json = if attachments.is_empty() {
+            Value::Null
+        } else {
+            attachments.iter().map(|c| c.to_json(host)).collect()
+        };

         let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
         let password_history_json = self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
@@ -97,28 +102,31 @@ impl Cipher {
            },
        };

-        // Get the data or a default empty value to avoid issues with the mobile apps
-        let mut data_json: Value = serde_json::from_str(&self.data).unwrap_or_else(|_| json!({
-            "Fields":null,
-            "Name": self.name,
-            "Notes":null,
-            "Password":null,
-            "PasswordHistory":null,
-            "PasswordRevisionDate":null,
-            "Response":null,
-            "Totp":null,
-            "Uris":null,
-            "Username":null
-        }));
+        // Get the type_data or a default to an empty json object '{}'.
+        // If not passing an empty object, mobile clients will crash.
+        let mut type_data_json: Value = serde_json::from_str(&self.data).unwrap_or(json!({}));

-        // TODO: ******* Backwards compat start **********
-        // To remove backwards compatibility, just remove this entire section
-        // and remove the compat code from ciphers::update_cipher_from_data
-        if self.atype == 1 && data_json["Uris"].is_array() {
-            let uri = data_json["Uris"][0]["Uri"].clone();
-            data_json["Uri"] = uri;
+        // NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
+        // Set the first element of the Uris array as Uri, this is needed by several (mobile) clients.
+        if self.atype == 1 {
+            if type_data_json["Uris"].is_array() {
+                let uri = type_data_json["Uris"][0]["Uri"].clone();
+                type_data_json["Uri"] = uri;
+            } else {
+                // Upstream always has an Uri key/value
+                type_data_json["Uri"] = Value::Null;
+            }
        }
-        // TODO: ******* Backwards compat end **********
+
+        // Clone the type_data and add some default values.
+        let mut data_json = type_data_json.clone();
+
+        // NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
+        // data_json should always contain the following keys with every atype
+        data_json["Fields"] = json!(fields_json);
+        data_json["Name"] = json!(self.name);
+        data_json["Notes"] = json!(self.notes);
+        data_json["PasswordHistory"] = json!(password_history_json);

        // There are three types of cipher response models in upstream
        // Bitwarden: "cipherMini", "cipher", and "cipherDetails" (in order
@@ -137,6 +145,8 @@ impl Cipher {
            "Favorite": self.is_favorite(&user_uuid, conn),
            "OrganizationId": self.organization_uuid,
            "Attachments": attachments_json,
+            // We have UseTotp set to true by default within the Organization model.
+            // This variable together with UsersGetPremium is used to show or hide the TOTP counter.
            "OrganizationUseTotp": true,

            // This field is specific to the cipherDetails type.
@@ -155,6 +165,12 @@
            "ViewPassword": !hide_passwords,
            "PasswordHistory": password_history_json,
+
+            // All Cipher types are included by default as null, but only the matching one will be populated
+            "Login": null,
+            "SecureNote": null,
+            "Card": null,
+            "Identity": null,
        });

        let key = match self.atype {
@@ -165,7 +181,7 @@
            _ => panic!("Wrong type"),
        };

-        json_object[key] = data_json;
+        json_object[key] = type_data_json;
        json_object
    }
@@ -448,7 +464,10 @@ impl Cipher {
     pub fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
         db_run! {conn: {
             ciphers::table
-                .filter(ciphers::user_uuid.eq(user_uuid))
+                .filter(
+                    ciphers::user_uuid.eq(user_uuid)
+                    .and(ciphers::organization_uuid.is_null())
+                )
                 .load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
         }}
     }
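
The null-instead-of-empty-array convention for `Attachments` can be restated with serde_json alone (illustrative sketch, not the patch itself):

use serde_json::{json, Value};

// An empty attachment list serializes as JSON null rather than [].
fn attachments_json(attachments: &[Value]) -> Value {
    if attachments.is_empty() {
        Value::Null
    } else {
        Value::Array(attachments.to_vec())
    }
}

fn main() {
    assert_eq!(attachments_json(&[]), Value::Null);
    assert_eq!(attachments_json(&[json!({"Id": "a"})]), json!([{"Id": "a"}]));
}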

@@ -49,12 +49,21 @@ impl Collection {
     pub fn to_json(&self) -> Value {
         json!({
+            "ExternalId": null, // Not supported by us
             "Id": self.uuid,
             "OrganizationId": self.org_uuid,
             "Name": self.name,
             "Object": "collection",
         })
     }
+
+    pub fn to_json_details(&self, user_uuid: &str, conn: &DbConn) -> Value {
+        let mut json_object = self.to_json();
+        json_object["Object"] = json!("collectionDetails");
+        json_object["ReadOnly"] = json!(!self.is_writable_by_user(user_uuid, conn));
+        json_object["HidePasswords"] = json!(self.hide_passwords_for_user(user_uuid, conn));
+        json_object
+    }
 }

 use crate::db::DbConn;
@@ -236,6 +245,28 @@ impl Collection {
             }
         }
     }
+
+    pub fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
+        match UserOrganization::find_by_user_and_org(&user_uuid, &self.org_uuid, &conn) {
+            None => true, // Not in Org
+            Some(user_org) => {
+                if user_org.has_full_access() {
+                    return false;
+                }
+
+                db_run! { conn: {
+                    users_collections::table
+                        .filter(users_collections::collection_uuid.eq(&self.uuid))
+                        .filter(users_collections::user_uuid.eq(user_uuid))
+                        .filter(users_collections::hide_passwords.eq(true))
+                        .count()
+                        .first::<i64>(conn)
+                        .ok()
+                        .unwrap_or(0) != 0
+                }}
+            }
+        }
+    }
 }

 /// Database methods
@@ -356,13 +387,19 @@ impl CollectionUser {
         }}
     }

-    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
-        User::update_uuid_revision(&user_uuid, conn);
+    pub fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> EmptyResult {
+        let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn);

         db_run! { conn: {
-            diesel::delete(users_collections::table.filter(users_collections::user_uuid.eq(user_uuid)))
-                .execute(conn)
-                .map_res("Error removing user from collections")
+            for user in collectionusers {
+                diesel::delete(users_collections::table.filter(
+                    users_collections::user_uuid.eq(user_uuid)
+                    .and(users_collections::collection_uuid.eq(user.collection_uuid))
+                ))
+                .execute(conn)
+                .map_res("Error removing user from collections")?;
+            }
+            Ok(())
         }}
     }
 }

@@ -178,4 +178,15 @@ impl Device {
                 .from_db()
         }}
     }
+
+    pub fn find_latest_active_by_user(user_uuid: &str, conn: &DbConn) -> Option<Self> {
+        db_run! { conn: {
+            devices::table
+                .filter(devices::user_uuid.eq(user_uuid))
+                .order(devices::updated_at.desc())
+                .first::<DeviceDb>(conn)
+                .ok()
+                .from_db()
+        }}
+    }
 }

@@ -18,4 +18,4 @@ pub use self::folder::{Folder, FolderCipher};
 pub use self::org_policy::{OrgPolicy, OrgPolicyType};
 pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
 pub use self::two_factor::{TwoFactor, TwoFactorType};
-pub use self::user::{Invitation, User};
+pub use self::user::{Invitation, User, UserStampException};

@@ -4,7 +4,7 @@ use crate::api::EmptyResult;
 use crate::db::DbConn;
 use crate::error::MapResult;

-use super::Organization;
+use super::{Organization, UserOrgStatus};

 db_object! {
     #[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
@@ -26,6 +26,9 @@ pub enum OrgPolicyType {
     TwoFactorAuthentication = 0,
     MasterPassword = 1,
     PasswordGenerator = 2,
+    // SingleOrg = 3, // Not currently supported.
+    // RequireSso = 4, // Not currently supported.
+    PersonalOwnership = 5,
 }

 /// Local methods
@@ -40,6 +43,10 @@ impl OrgPolicy {
         }
     }

+    pub fn has_type(&self, policy_type: OrgPolicyType) -> bool {
+        self.atype == policy_type as i32
+    }
+
     pub fn to_json(&self) -> Value {
         let data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
         json!({
@@ -129,11 +136,14 @@ impl OrgPolicy {
     pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
         db_run! { conn: {
             org_policies::table
-                .left_join(
+                .inner_join(
                     users_organizations::table.on(
                         users_organizations::org_uuid.eq(org_policies::org_uuid)
                             .and(users_organizations::user_uuid.eq(user_uuid)))
                 )
+                .filter(
+                    users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+                )
                 .select(org_policies::all_columns)
                 .load::<OrgPolicyDb>(conn)
                 .expect("Error loading org_policy")
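
The `has_type` comparison relies on the enum's explicit discriminants, so a policy row's integer `atype` column can be matched directly. A reduced sketch of the pattern (names mirror the model above, database plumbing omitted):

#[allow(dead_code)]
#[derive(Clone, Copy)]
enum OrgPolicyType {
    TwoFactorAuthentication = 0,
    MasterPassword = 1,
    PasswordGenerator = 2,
    PersonalOwnership = 5,
}

struct OrgPolicy {
    atype: i32,
}

impl OrgPolicy {
    fn has_type(&self, policy_type: OrgPolicyType) -> bool {
        // Fieldless enums with explicit discriminants cast losslessly to i32.
        self.atype == policy_type as i32
    }
}

fn main() {
    let policy = OrgPolicy { atype: 5 };
    assert!(policy.has_type(OrgPolicyType::PersonalOwnership));
}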

@@ -147,9 +147,10 @@ impl Organization {
     pub fn to_json(&self) -> Value {
         json!({
             "Id": self.uuid,
+            "Identifier": null, // not supported by us
             "Name": self.name,
-            "Seats": 10,
-            "MaxCollections": 10,
+            "Seats": 10, // The value doesn't matter, we don't check server-side
+            "MaxCollections": 10, // The value doesn't matter, we don't check server-side
             "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
             "Use2fa": true,
             "UseDirectory": false,
@@ -157,6 +158,9 @@ impl Organization {
             "UseGroups": false,
             "UseTotp": true,
             "UsePolicies": true,
+            "UseSso": false, // We do not support SSO
+            "SelfHost": true,
+            "UseApi": false, // not supported by us

             "BusinessName": null,
             "BusinessAddress1": null,
@@ -274,9 +278,10 @@ impl UserOrganization {
         json!({
             "Id": self.org_uuid,
+            "Identifier": null, // not supported by us
             "Name": org.name,
-            "Seats": 10,
-            "MaxCollections": 10,
+            "Seats": 10, // The value doesn't matter, we don't check server-side
+            "MaxCollections": 10, // The value doesn't matter, we don't check server-side
             "UsersGetPremium": true,

             "Use2fa": true,
@@ -285,8 +290,30 @@
             "UseGroups": false,
             "UseTotp": true,
             "UsePolicies": true,
-            "UseApi": false,
+            "UseApi": false, // not supported by us
             "SelfHost": true,
+            "SsoBound": false, // We do not support SSO
+            "UseSso": false, // We do not support SSO
+            // TODO: Add support for Business Portal
+            // Upstream is moving Policies and SSO management outside of the web-vault to /portal
+            // For now they still have that code also in the web-vault, but they will remove it at some point.
+            // https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
+            "UseBusinessPortal": false, // Disable BusinessPortal Button
+
+            // TODO: Add support for Custom User Roles
+            // See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
+            // "Permissions": {
+            //     "AccessBusinessPortal": false,
+            //     "AccessEventLogs": false,
+            //     "AccessImportExport": false,
+            //     "AccessReports": false,
+            //     "ManageAllCollections": false,
+            //     "ManageAssignedCollections": false,
+            //     "ManageGroups": false,
+            //     "ManagePolicies": false,
+            //     "ManageSso": false,
+            //     "ManageUsers": false
+            // },

             "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@@ -389,7 +416,7 @@ impl UserOrganization {
     pub fn delete(self, conn: &DbConn) -> EmptyResult {
         User::update_uuid_revision(&self.user_uuid, conn);

-        CollectionUser::delete_all_by_user(&self.user_uuid, &conn)?;
+        CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, &conn)?;

         db_run! { conn: {
             diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
@@ -412,11 +439,25 @@ impl UserOrganization {
         Ok(())
     }

-    pub fn has_status(self, status: UserOrgStatus) -> bool {
+    pub fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
+        if let Some(user) = super::User::find_by_mail(email, conn) {
+            if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, &conn) {
+                return Some(user_org);
+            }
+        }
+
+        None
+    }
+
+    pub fn has_status(&self, status: UserOrgStatus) -> bool {
         self.status == status as i32
     }

-    pub fn has_full_access(self) -> bool {
+    pub fn has_type(&self, user_type: UserOrgType) -> bool {
+        self.atype == user_type as i32
+    }
+
+    pub fn has_full_access(&self) -> bool {
         (self.access_all || self.atype >= UserOrgType::Admin) &&
         self.has_status(UserOrgStatus::Confirmed)
     }

@@ -11,6 +11,7 @@ db_object! {
     #[primary_key(uuid)]
     pub struct User {
         pub uuid: String,
+        pub enabled: bool,
         pub created_at: NaiveDateTime,
         pub updated_at: NaiveDateTime,
         pub verified_at: Option<NaiveDateTime>,
@@ -36,6 +37,7 @@ db_object! {
         pub totp_recover: Option<String>,

         pub security_stamp: String,
+        pub stamp_exception: Option<String>,

         pub equivalent_domains: String,
         pub excluded_globals: String,
@@ -59,6 +61,12 @@ enum UserStatus {
     _Disabled = 2,
 }

+#[derive(Serialize, Deserialize)]
+pub struct UserStampException {
+    pub route: String,
+    pub security_stamp: String
+}
+
 /// Local methods
 impl User {
     pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
@@ -70,6 +78,7 @@ impl User {
         Self {
             uuid: crate::util::get_uuid(),
+            enabled: true,
             created_at: now,
             updated_at: now,
             verified_at: None,
@@ -86,6 +95,7 @@ impl User {
             password_iterations: CONFIG.password_iterations(),
             security_stamp: crate::util::get_uuid(),
+            stamp_exception: None,

             password_hint: None,
             private_key: None,
@@ -119,14 +129,52 @@ impl User {
         }
     }

-    pub fn set_password(&mut self, password: &str) {
+    /// Sets the password hash
+    /// and resets the security_stamp. Based upon allow_next_route the security_stamp will be different.
+    ///
+    /// # Arguments
+    ///
+    /// * `password` - A str which contains a hashed version of the users master password.
+    /// * `allow_next_route` - An Option<&str> with the function name of the next allowed (rocket) route.
+    ///
+    pub fn set_password(&mut self, password: &str, allow_next_route: Option<&str>) {
         self.password_hash = crypto::hash_password(password.as_bytes(), &self.salt, self.password_iterations as u32);
-        self.reset_security_stamp();
+
+        if let Some(route) = allow_next_route {
+            self.set_stamp_exception(route);
+        }
+
+        self.reset_security_stamp()
     }

     pub fn reset_security_stamp(&mut self) {
         self.security_stamp = crate::util::get_uuid();
     }
+
+    /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
+    ///
+    /// # Arguments
+    /// * `route_exception` - A str with the function name of the next allowed (rocket) route.
+    ///
+    /// ### Future
+    /// In the future it could be possible that we need more of these exception routes.
+    /// In that case we could use a Vec<UserStampException> and add multiple exceptions.
+    pub fn set_stamp_exception(&mut self, route_exception: &str) {
+        let stamp_exception = UserStampException {
+            route: route_exception.to_string(),
+            security_stamp: self.security_stamp.to_string()
+        };
+        self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
+    }
+
+    /// Resets the stamp_exception to prevent re-use of the previous security-stamp
+    ///
+    /// ### Future
+    /// In the future it could be possible that we need more of these exception routes.
+    /// In that case we could use a Vec<UserStampException> and add multiple exceptions.
+    pub fn reset_stamp_exception(&mut self) {
+        self.stamp_exception = None;
+    }
 }

 use super::{Cipher, Device, Favorite, Folder, TwoFactor, UserOrgType, UserOrganization};
@@ -288,6 +336,13 @@ impl User {
             users::table.load::<UserDb>(conn).expect("Error loading users").from_db()
         }}
     }
+
+    pub fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
+        match Device::find_latest_active_by_user(&self.uuid, conn) {
+            Some(device) => Some(device.updated_at),
+            None => None
+        }
+    }
 }

 impl Invitation {

@@ -116,6 +116,7 @@ table! {
 table! {
     users (uuid) {
         uuid -> Text,
+        enabled -> Bool,
         created_at -> Datetime,
         updated_at -> Datetime,
         verified_at -> Nullable<Datetime>,
@@ -135,6 +136,7 @@ table! {
         totp_secret -> Nullable<Text>,
         totp_recover -> Nullable<Text>,
         security_stamp -> Text,
+        stamp_exception -> Nullable<Text>,
         equivalent_domains -> Text,
         excluded_globals -> Text,
         client_kdf_type -> Integer,

@@ -116,6 +116,7 @@ table! {
 table! {
     users (uuid) {
         uuid -> Text,
+        enabled -> Bool,
         created_at -> Timestamp,
         updated_at -> Timestamp,
         verified_at -> Nullable<Timestamp>,
@@ -135,6 +136,7 @@ table! {
         totp_secret -> Nullable<Text>,
         totp_recover -> Nullable<Text>,
         security_stamp -> Text,
+        stamp_exception -> Nullable<Text>,
         equivalent_domains -> Text,
         excluded_globals -> Text,
         client_kdf_type -> Integer,

@@ -116,6 +116,7 @@ table! {
 table! {
     users (uuid) {
         uuid -> Text,
+        enabled -> Bool,
         created_at -> Timestamp,
         updated_at -> Timestamp,
         verified_at -> Nullable<Timestamp>,
@@ -135,6 +136,7 @@ table! {
         totp_secret -> Nullable<Text>,
         totp_recover -> Nullable<Text>,
         security_stamp -> Text,
+        stamp_exception -> Nullable<Text>,
         equivalent_domains -> Text,
         excluded_globals -> Text,
         client_kdf_type -> Integer,

@@ -191,6 +191,7 @@ impl<'r> Responder<'r> for Error {
     fn respond_to(self, _: &Request) -> response::Result<'r> {
         match self.error {
             ErrorKind::EmptyError(_) => {} // Don't print the error in this situation
+            ErrorKind::SimpleError(_) => {} // Don't print the error in this situation
             _ => error!(target: "error", "{:#?}", self),
         };

@@ -210,9 +211,11 @@ impl<'r> Responder<'r> for Error {
 #[macro_export]
 macro_rules! err {
     ($msg:expr) => {{
+        error!("{}", $msg);
         return Err(crate::error::Error::new($msg, $msg));
     }};
     ($usr_msg:expr, $log_value:expr) => {{
+        error!("{}. {}", $usr_msg, $log_value);
         return Err(crate::error::Error::new($usr_msg, $log_value));
     }};
 }
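
The pattern here is: log first, then early-return the `Err`. A self-contained sketch of the same macro shape (with `eprintln!` standing in for the log crate's `error!`):

macro_rules! err {
    ($msg:expr) => {{
        eprintln!("{}", $msg);
        return Err($msg.to_string());
    }};
}

fn check(n: i32) -> Result<i32, String> {
    if n < 0 {
        err!("negative input");
    }
    Ok(n)
}

fn main() {
    assert!(check(-1).is_err());
    assert_eq!(check(2), Ok(2));
}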

@@ -1,12 +1,12 @@
-use std::{env, str::FromStr};
+use std::str::FromStr;

 use chrono::{DateTime, Local};
-use chrono_tz::Tz;
 use percent_encoding::{percent_encode, NON_ALPHANUMERIC};

 use lettre::{
     message::{header, Mailbox, Message, MultiPart, SinglePart},
     transport::smtp::authentication::{Credentials, Mechanism as SmtpAuthMechanism},
+    transport::smtp::client::{Tls, TlsParameters},
     transport::smtp::extension::ClientId,
     Address, SmtpTransport, Transport,
 };
@@ -22,21 +22,30 @@ fn mailer() -> SmtpTransport {
     use std::time::Duration;
     let host = CONFIG.smtp_host().unwrap();

-    // Determine security
-    let smtp_client = if CONFIG.smtp_ssl() {
-        if CONFIG.smtp_explicit_tls() {
-            SmtpTransport::relay(host.as_str())
-        } else {
-            SmtpTransport::starttls_relay(host.as_str())
-        }
-    } else {
-        Ok(SmtpTransport::builder_dangerous(host.as_str()))
-    };
-
-    let smtp_client = smtp_client.unwrap()
+    let smtp_client = SmtpTransport::builder_dangerous(host.as_str())
         .port(CONFIG.smtp_port())
         .timeout(Some(Duration::from_secs(CONFIG.smtp_timeout())));

+    // Determine security
+    let smtp_client = if CONFIG.smtp_ssl() {
+        let mut tls_parameters = TlsParameters::builder(host);
+        if CONFIG.smtp_accept_invalid_hostnames() {
+            tls_parameters.dangerous_accept_invalid_hostnames(true);
+        }
+        if CONFIG.smtp_accept_invalid_certs() {
+            tls_parameters.dangerous_accept_invalid_certs(true);
+        }
+        let tls_parameters = tls_parameters.build().unwrap();
+
+        if CONFIG.smtp_explicit_tls() {
+            smtp_client.tls(Tls::Wrapper(tls_parameters))
+        } else {
+            smtp_client.tls(Tls::Required(tls_parameters))
+        }
+    } else {
+        smtp_client
+    };
+
     let smtp_client = match (CONFIG.smtp_username(), CONFIG.smtp_password()) {
         (Some(user), Some(pass)) => smtp_client.credentials(Credentials::new(user, pass)),
         _ => smtp_client,
@@ -97,22 +106,6 @@ fn get_template(template_name: &str, data: &serde_json::Value) -> Result<(String
     Ok((subject, body))
 }

-pub fn format_datetime(dt: &DateTime<Local>) -> String {
-    let fmt = "%A, %B %_d, %Y at %r %Z";
-
-    // With a DateTime<Local>, `%Z` formats as the time zone's UTC offset
-    // (e.g., `+00:00`). If the `TZ` environment variable is set, try to
-    // format as a time zone abbreviation instead (e.g., `UTC`).
-    if let Ok(tz) = env::var("TZ") {
-        if let Ok(tz) = tz.parse::<Tz>() {
-            return dt.with_timezone(&tz).format(fmt).to_string();
-        }
-    }
-
-    // Otherwise, fall back to just displaying the UTC offset.
-    dt.format(fmt).to_string()
-}
-
 pub fn send_password_hint(address: &str, hint: Option<String>) -> EmptyResult {
     let template_name = if hint.is_some() {
         "email/pw_hint_some"
@@ -247,13 +240,14 @@ pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>,
     use crate::util::upcase_first;
     let device = upcase_first(device);

+    let fmt = "%A, %B %_d, %Y at %r %Z";
     let (subject, body_html, body_text) = get_text(
         "email/new_device_logged_in",
         json!({
             "url": CONFIG.domain(),
             "ip": ip,
             "device": device,
-            "datetime": format_datetime(dt),
+            "datetime": crate::util::format_datetime_local(dt, fmt),
         }),
     )?;
@@ -308,27 +302,32 @@ fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) ->
     let address = format!("{}@{}", address_split[1], domain_puny);

-    let html = SinglePart::base64()
+    let html = SinglePart::builder()
+        // We force Base64 encoding because in the past we had issues with different encodings.
+        .header(header::ContentTransferEncoding::Base64)
         .header(header::ContentType("text/html; charset=utf-8".parse()?))
-        .body(body_html);
+        .body(String::from(body_html));

-    let text = SinglePart::base64()
+    let text = SinglePart::builder()
+        // We force Base64 encoding because in the past we had issues with different encodings.
+        .header(header::ContentTransferEncoding::Base64)
         .header(header::ContentType("text/plain; charset=utf-8".parse()?))
-        .body(body_text);
+        .body(String::from(body_text));

-    // The boundary generated by Lettre itself is mostly too large based on RFC822, so we generate one ourselves.
-    use uuid::Uuid;
-    let boundary = format!("_Part_{}_", Uuid::new_v4().to_simple());
-    let alternative = MultiPart::alternative().boundary(boundary).singlepart(text).singlepart(html);
+    let smtp_from = &CONFIG.smtp_from();
     let email = Message::builder()
+        .message_id(Some(format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1] )))
         .to(Mailbox::new(None, Address::from_str(&address)?))
         .from(Mailbox::new(
             Some(CONFIG.smtp_from_name()),
-            Address::from_str(&CONFIG.smtp_from())?,
+            Address::from_str(smtp_from)?,
         ))
         .subject(subject)
-        .multipart(alternative)?;
+        .multipart(
+            MultiPart::alternative()
+                .singlepart(text)
+                .singlepart(html)
+        )?;

     match mailer().send(&email) {
         Ok(_) => Ok(()),
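
A condensed sketch of the TLS-mode selection above, written against the lettre 0.10-series API this patch targets (the builder method names are taken from the patch; treat exact signatures as version-dependent):

use lettre::transport::smtp::client::{Tls, TlsParameters};
use lettre::SmtpTransport;

fn build_transport(host: String, ssl: bool, explicit_tls: bool) -> SmtpTransport {
    let builder = SmtpTransport::builder_dangerous(host.as_str()).port(587);
    if ssl {
        let tls = TlsParameters::builder(host.clone()).build().unwrap();
        if explicit_tls {
            // Implicit TLS ("wrapper" mode): the connection is TLS from the first byte.
            builder.tls(Tls::Wrapper(tls)).build()
        } else {
            // STARTTLS: upgrade a plain connection, failing if the upgrade is refused.
            builder.tls(Tls::Required(tls)).build()
        }
    } else {
        builder.build()
    }
}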

@@ -1,12 +1,12 @@
 #![forbid(unsafe_code)]
 #![cfg_attr(feature = "unstable", feature(ip))]
-#![recursion_limit = "256"]
+#![recursion_limit = "512"]

 extern crate openssl;
 #[macro_use]
 extern crate rocket;
 #[macro_use]
-extern crate serde_derive;
+extern crate serde;
 #[macro_use]
 extern crate serde_json;
 #[macro_use]
@@ -113,8 +113,21 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
         .level_for("launch_", log::LevelFilter::Off)
         .level_for("rocket::rocket", log::LevelFilter::Off)
         .level_for("rocket::fairing", log::LevelFilter::Off)
+        // Never show html5ever and hyper::proto logs, too noisy
+        .level_for("html5ever", log::LevelFilter::Off)
+        .level_for("hyper::proto", log::LevelFilter::Off)
         .chain(std::io::stdout());

+    // Enable smtp debug logging only specifically for smtp when needed.
+    // This can contain sensitive information we do not want in the default debug/trace logging.
+    if CONFIG.smtp_debug() {
+        println!("[WARNING] SMTP Debugging is enabled (SMTP_DEBUG=true). Sensitive information could be disclosed via logs!");
+        println!("[WARNING] Only enable SMTP_DEBUG during troubleshooting!\n");
+        logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Debug)
+    } else {
+        logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Off)
+    }
+
     if CONFIG.extended_logging() {
         logger = logger.format(|out, message, record| {
             out.finish(format_args!(
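
A reduced sketch of this conditional per-target log gating with fern (assumes the fern and log crates):

fn build_logger(smtp_debug: bool) -> fern::Dispatch {
    let mut logger = fern::Dispatch::new()
        .level(log::LevelFilter::Info)
        .level_for("hyper::proto", log::LevelFilter::Off)
        .chain(std::io::stdout());
    // Route lettre's SMTP target to Debug only when explicitly requested,
    // since those messages can contain credentials.
    logger = if smtp_debug {
        logger.level_for("lettre::transport::smtp", log::LevelFilter::Debug)
    } else {
        logger.level_for("lettre::transport::smtp", log::LevelFilter::Off)
    };
    logger
}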

@@ -508,7 +508,8 @@
       "disneymoviesanywhere.com",
       "go.com",
       "disney.com",
-      "dadt.com"
+      "dadt.com",
+      "disneyplus.com"
     ],
     "Excluded": false
   },
@@ -885,5 +886,13 @@
       "yandex.uz"
     ],
     "Excluded": false
+  },
+  {
+    "Type": 84,
+    "Domains": [
+      "sonyentertainmentnetwork.com",
+      "sony.com"
+    ],
+    "Excluded": false
   }
 ]

@@ -1,6 +1,6 @@
 /*!
-  * Native JavaScript for Bootstrap v3.0.10 (https://thednp.github.io/bootstrap.native/)
-  * Copyright 2015-2020 © dnp_theme
+  * Native JavaScript for Bootstrap v3.0.15 (https://thednp.github.io/bootstrap.native/)
+  * Copyright 2015-2021 © dnp_theme
   * Licensed under MIT (https://github.com/thednp/bootstrap.native/blob/master/LICENSE)
   */
 (function (global, factory) {
@@ -15,10 +15,14 @@
   var transitionDuration = 'webkitTransition' in document.head.style ? 'webkitTransitionDuration' : 'transitionDuration';
+  var transitionProperty = 'webkitTransition' in document.head.style ? 'webkitTransitionProperty' : 'transitionProperty';

   function getElementTransitionDuration(element) {
-    var duration = supportTransition ? parseFloat(getComputedStyle(element)[transitionDuration]) : 0;
-    duration = typeof duration === 'number' && !isNaN(duration) ? duration * 1000 : 0;
-    return duration;
+    var computedStyle = getComputedStyle(element),
+        property = computedStyle[transitionProperty],
+        duration = supportTransition && property && property !== 'none'
+          ? parseFloat(computedStyle[transitionDuration]) : 0;
+    return !isNaN(duration) ? duration * 1000 : 0;
   }

   function emulateTransitionEnd(element,handler){
@@ -35,9 +39,15 @@
     return selector instanceof Element ? selector : lookUp.querySelector(selector);
   }

-  function bootstrapCustomEvent(eventName, componentName, related) {
+  function bootstrapCustomEvent(eventName, componentName, eventProperties) {
     var OriginalCustomEvent = new CustomEvent( eventName + '.bs.' + componentName, {cancelable: true});
-    OriginalCustomEvent.relatedTarget = related;
+    if (typeof eventProperties !== 'undefined') {
+      Object.keys(eventProperties).forEach(function (key) {
+        Object.defineProperty(OriginalCustomEvent, key, {
+          value: eventProperties[key]
+        });
+      });
+    }
     return OriginalCustomEvent;
   }
@@ -352,7 +362,7 @@
     };
     self.slideTo = function (next) {
       if (vars.isSliding) { return; }
-      var activeItem = self.getActiveIndex(), orientation;
+      var activeItem = self.getActiveIndex(), orientation, eventProperties;
       if ( activeItem === next ) {
         return;
       } else if ( (activeItem < next ) || (activeItem === 0 && next === slides.length -1 ) ) {
@@ -363,8 +373,9 @@
       if ( next < 0 ) { next = slides.length - 1; }
       else if ( next >= slides.length ){ next = 0; }
       orientation = vars.direction === 'left' ? 'next' : 'prev';
-      slideCustomEvent = bootstrapCustomEvent('slide', 'carousel', slides[next]);
-      slidCustomEvent = bootstrapCustomEvent('slid', 'carousel', slides[next]);
+      eventProperties = { relatedTarget: slides[next], direction: vars.direction, from: activeItem, to: next };
+      slideCustomEvent = bootstrapCustomEvent('slide', 'carousel', eventProperties);
+      slidCustomEvent = bootstrapCustomEvent('slid', 'carousel', eventProperties);
       dispatchCustomEvent.call(element, slideCustomEvent);
       if (slideCustomEvent.defaultPrevented) { return; }
       vars.index = next;
@@ -615,7 +626,7 @@
       }
     }
     self.show = function () {
-      showCustomEvent = bootstrapCustomEvent('show', 'dropdown', relatedTarget);
+      showCustomEvent = bootstrapCustomEvent('show', 'dropdown', { relatedTarget: relatedTarget });
       dispatchCustomEvent.call(parent, showCustomEvent);
       if ( showCustomEvent.defaultPrevented ) { return; }
       menu.classList.add('show');
@@ -626,12 +637,12 @@
       setTimeout(function () {
         setFocus( menu.getElementsByTagName('INPUT')[0] || element );
         toggleDismiss();
-        shownCustomEvent = bootstrapCustomEvent( 'shown', 'dropdown', relatedTarget);
+        shownCustomEvent = bootstrapCustomEvent('shown', 'dropdown', { relatedTarget: relatedTarget });
         dispatchCustomEvent.call(parent, shownCustomEvent);
       },1);
     };
     self.hide = function () {
-      hideCustomEvent = bootstrapCustomEvent('hide', 'dropdown', relatedTarget);
+      hideCustomEvent = bootstrapCustomEvent('hide', 'dropdown', { relatedTarget: relatedTarget });
       dispatchCustomEvent.call(parent, hideCustomEvent);
       if ( hideCustomEvent.defaultPrevented ) { return; }
       menu.classList.remove('show');
@@ -643,7 +654,7 @@
       setTimeout(function () {
         element.Dropdown && element.addEventListener('click',clickHandler,false);
       },1);
-      hiddenCustomEvent = bootstrapCustomEvent('hidden', 'dropdown', relatedTarget);
+      hiddenCustomEvent = bootstrapCustomEvent('hidden', 'dropdown', { relatedTarget: relatedTarget });
       dispatchCustomEvent.call(parent, hiddenCustomEvent);
     };
     self.toggle = function () {
@@ -749,7 +760,7 @@
     setFocus(modal);
     modal.isAnimating = false;
     toggleEvents(1);
-    shownCustomEvent = bootstrapCustomEvent('shown', 'modal', relatedTarget);
+    shownCustomEvent = bootstrapCustomEvent('shown', 'modal', { relatedTarget: relatedTarget });
     dispatchCustomEvent.call(modal, shownCustomEvent);
   }
   function triggerHide(force) {
@@ -804,7 +815,7 @@
   };
   self.show = function () {
     if (modal.classList.contains('show') && !!modal.isAnimating ) {return}
-    showCustomEvent = bootstrapCustomEvent('show', 'modal', relatedTarget);
+    showCustomEvent = bootstrapCustomEvent('show', 'modal', { relatedTarget: relatedTarget });
     dispatchCustomEvent.call(modal, showCustomEvent);
     if ( showCustomEvent.defaultPrevented ) { return; }
     modal.isAnimating = true;
@@ -1193,7 +1204,7 @@
       if (dropLink && !dropLink.classList.contains('active') ) {
         dropLink.classList.add('active');
       }
-      dispatchCustomEvent.call(element, bootstrapCustomEvent( 'activate', 'scrollspy', vars.items[index]));
+      dispatchCustomEvent.call(element, bootstrapCustomEvent( 'activate', 'scrollspy', { relatedTarget: vars.items[index] }));
     } else if ( isActive && !inside ) {
       item.classList.remove('active');
       if (dropLink && dropLink.classList.contains('active') && !item.parentNode.getElementsByClassName('active').length ) {
@@ -1278,7 +1289,7 @@
     } else {
       tabs.isAnimating = false;
     }
-    shownCustomEvent = bootstrapCustomEvent('shown', 'tab', activeTab);
+    shownCustomEvent = bootstrapCustomEvent('shown', 'tab', { relatedTarget: activeTab });
     dispatchCustomEvent.call(next, shownCustomEvent);
   }
   function triggerHide() {
@@ -1287,8 +1298,8 @@
       nextContent.style.float = 'left';
       containerHeight = activeContent.scrollHeight;
     }
-    showCustomEvent = bootstrapCustomEvent('show', 'tab', activeTab);
-    hiddenCustomEvent = bootstrapCustomEvent('hidden', 'tab', next);
+    showCustomEvent = bootstrapCustomEvent('show', 'tab', { relatedTarget: activeTab });
+    hiddenCustomEvent = bootstrapCustomEvent('hidden', 'tab', { relatedTarget: next });
     dispatchCustomEvent.call(next, showCustomEvent);
     if ( showCustomEvent.defaultPrevented ) { return; }
     nextContent.classList.add('active');
@@ -1331,7 +1342,7 @@
       nextContent = queryElement(next.getAttribute('href'));
      activeTab = getActiveTab();
      activeContent = getActiveContent();
-      hideCustomEvent = bootstrapCustomEvent( 'hide', 'tab', next);
+      hideCustomEvent = bootstrapCustomEvent( 'hide', 'tab', { relatedTarget: next });
      dispatchCustomEvent.call(activeTab, hideCustomEvent);
      if (hideCustomEvent.defaultPrevented) { return; }
      tabs.isAnimating = true;
@@ -1637,7 +1648,7 @@
} }
} }
var version = "3.0.10"; var version = "3.0.15";
var index = { var index = {
Alert: Alert, Alert: Alert,

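The recurring change in the bootstrap.native hunks above replaces a bare relatedTarget argument with an options object whose properties are copied onto the dispatched event. A minimal sketch of what such a helper plausibly looks like (assumed shape for illustration, not the library's verbatim code):

function bootstrapCustomEvent(eventName, componentName, eventProperties) {
  // namespaced, cancelable event, e.g. "shown.bs.modal"
  var customEvent = new CustomEvent(eventName + '.bs.' + componentName, { cancelable: true });
  // copy each extra property (e.g. relatedTarget) onto the event itself,
  // so handlers can keep reading event.relatedTarget as before
  if (typeof eventProperties !== 'undefined') {
    Object.keys(eventProperties).forEach(function (key) {
      Object.defineProperty(customEvent, key, { value: eventProperties[key] });
    });
  }
  return customEvent;
}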
View File

@@ -1,10 +1,10 @@
/*! /*!
* Bootstrap v4.5.2 (https://getbootstrap.com/) * Bootstrap v4.5.3 (https://getbootstrap.com/)
* Copyright 2011-2020 The Bootstrap Authors * Copyright 2011-2020 The Bootstrap Authors
* Copyright 2011-2020 Twitter, Inc. * Copyright 2011-2020 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/ */
:root { :root {
--blue: #007bff; --blue: #007bff;
--indigo: #6610f2; --indigo: #6610f2;
--purple: #6f42c1; --purple: #6f42c1;
@@ -216,6 +216,7 @@ caption {
th { th {
text-align: inherit; text-align: inherit;
text-align: -webkit-match-parent;
} }
label { label {
@@ -3750,6 +3751,8 @@ input[type="button"].btn-block {
display: block; display: block;
min-height: 1.5rem; min-height: 1.5rem;
padding-left: 1.5rem; padding-left: 1.5rem;
-webkit-print-color-adjust: exact;
color-adjust: exact;
} }
.custom-control-inline { .custom-control-inline {
@@ -5289,6 +5292,7 @@ a.badge-dark:focus, a.badge-dark.focus {
position: absolute; position: absolute;
top: 0; top: 0;
right: 0; right: 0;
z-index: 2;
padding: 0.75rem 1.25rem; padding: 0.75rem 1.25rem;
color: inherit; color: inherit;
} }
@@ -10163,7 +10167,7 @@ a.text-dark:hover, a.text-dark:focus {
.text-break { .text-break {
word-break: break-word !important; word-break: break-word !important;
overflow-wrap: break-word !important; word-wrap: break-word !important;
} }
.text-reset { .text-reset {
@@ -10256,3 +10260,4 @@ a.text-dark:hover, a.text-dark:focus {
border-color: #dee2e6; border-color: #dee2e6;
} }
} }
/*# sourceMappingURL=bootstrap.css.map */

View File

@@ -4,12 +4,13 @@
* *
* To rebuild or modify this file with the latest versions of the included * To rebuild or modify this file with the latest versions of the included
* software please visit: * software please visit:
* https://datatables.net/download/#bs4/dt-1.10.22 * https://datatables.net/download/#bs4/dt-1.10.23
* *
* Included libraries: * Included libraries:
* DataTables 1.10.22 * DataTables 1.10.23
*/ */
@charset "UTF-8";
table.dataTable { table.dataTable {
clear: both; clear: both;
margin-top: 6px !important; margin-top: 6px !important;
@@ -114,7 +115,7 @@ table.dataTable > thead .sorting_desc:before,
table.dataTable > thead .sorting_asc_disabled:before, table.dataTable > thead .sorting_asc_disabled:before,
table.dataTable > thead .sorting_desc_disabled:before { table.dataTable > thead .sorting_desc_disabled:before {
right: 1em; right: 1em;
content: "\2191"; content: "";
} }
table.dataTable > thead .sorting:after, table.dataTable > thead .sorting:after,
table.dataTable > thead .sorting_asc:after, table.dataTable > thead .sorting_asc:after,
@@ -122,7 +123,7 @@ table.dataTable > thead .sorting_desc:after,
table.dataTable > thead .sorting_asc_disabled:after, table.dataTable > thead .sorting_asc_disabled:after,
table.dataTable > thead .sorting_desc_disabled:after { table.dataTable > thead .sorting_desc_disabled:after {
right: 0.5em; right: 0.5em;
content: "\2193"; content: "";
} }
table.dataTable > thead .sorting_asc:before, table.dataTable > thead .sorting_asc:before,
table.dataTable > thead .sorting_desc:after { table.dataTable > thead .sorting_desc:after {
@@ -165,9 +166,9 @@ div.dataTables_scrollFoot > .dataTables_scrollFootInner > table {
@media screen and (max-width: 767px) { @media screen and (max-width: 767px) {
div.dataTables_wrapper div.dataTables_length, div.dataTables_wrapper div.dataTables_length,
div.dataTables_wrapper div.dataTables_filter, div.dataTables_wrapper div.dataTables_filter,
div.dataTables_wrapper div.dataTables_info, div.dataTables_wrapper div.dataTables_info,
div.dataTables_wrapper div.dataTables_paginate { div.dataTables_wrapper div.dataTables_paginate {
text-align: center; text-align: center;
} }
div.dataTables_wrapper div.dataTables_paginate ul.pagination { div.dataTables_wrapper div.dataTables_paginate ul.pagination {
@@ -213,10 +214,10 @@ div.dataTables_scrollHead table.table-bordered {
div.table-responsive > div.dataTables_wrapper > div.row { div.table-responsive > div.dataTables_wrapper > div.row {
margin: 0; margin: 0;
} }
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:first-child { div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:first-child {
padding-left: 0; padding-left: 0;
} }
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:last-child { div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:last-child {
padding-right: 0; padding-right: 0;
} }

View File

@@ -4,20 +4,20 @@
* *
* To rebuild or modify this file with the latest versions of the included * To rebuild or modify this file with the latest versions of the included
* software please visit: * software please visit:
* https://datatables.net/download/#bs4/dt-1.10.22 * https://datatables.net/download/#bs4/dt-1.10.23
* *
* Included libraries: * Included libraries:
* DataTables 1.10.22 * DataTables 1.10.23
*/ */
/*! DataTables 1.10.22 /*! DataTables 1.10.23
* ©2008-2020 SpryMedia Ltd - datatables.net/license * ©2008-2020 SpryMedia Ltd - datatables.net/license
*/ */
/** /**
* @summary DataTables * @summary DataTables
* @description Paginate, search and order HTML tables * @description Paginate, search and order HTML tables
* @version 1.10.22 * @version 1.10.23
* @file jquery.dataTables.js * @file jquery.dataTables.js
* @author SpryMedia Ltd * @author SpryMedia Ltd
* @contact www.datatables.net * @contact www.datatables.net
@@ -2775,7 +2775,7 @@
for ( var i=0, iLen=a.length-1 ; i<iLen ; i++ ) for ( var i=0, iLen=a.length-1 ; i<iLen ; i++ )
{ {
// Protect against prototype pollution // Protect against prototype pollution
if (a[i] === '__proto__') { if (a[i] === '__proto__' || a[i] === 'constructor') {
throw new Error('Cannot set prototype values'); throw new Error('Cannot set prototype values');
} }
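// Hedged illustration (standalone sketch, not DataTables code) of the second
// pollution vector the added 'constructor' check blocks: without it, a key walk
// can still reach Object.prototype through the constructor property.
var target = {};
var path = ['constructor', 'prototype', 'polluted'];
var cur = target;
for (var i = 0; i < path.length - 1; i++) {
  cur = cur[path[i]]; // {} -> Object (via constructor) -> Object.prototype
}
cur[path[path.length - 1]] = 'x'; // now ({}).polluted === 'x' for every plain object
// The guard above throws on both '__proto__' and 'constructor' before either step.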
@@ -3157,7 +3157,7 @@
cells.push( nTd ); cells.push( nTd );
// Need to create the HTML if new, or if a rendering function is defined // Need to create the HTML if new, or if a rendering function is defined
if ( create || ((!nTrIn || oCol.mRender || oCol.mData !== i) && if ( create || ((oCol.mRender || oCol.mData !== i) &&
(!$.isPlainObject(oCol.mData) || oCol.mData._ !== i+'.display') (!$.isPlainObject(oCol.mData) || oCol.mData._ !== i+'.display')
)) { )) {
nTd.innerHTML = _fnGetCellData( oSettings, iRow, i, 'display' ); nTd.innerHTML = _fnGetCellData( oSettings, iRow, i, 'display' );
@@ -3189,10 +3189,6 @@
_fnCallbackFire( oSettings, 'aoRowCreatedCallback', null, [nTr, rowData, iRow, cells] ); _fnCallbackFire( oSettings, 'aoRowCreatedCallback', null, [nTr, rowData, iRow, cells] );
} }
// Remove once webkit bug 131819 and Chromium bug 365619 have been resolved
// and deployed
row.nTr.setAttribute( 'role', 'row' );
} }
@@ -9546,7 +9542,7 @@
* @type string * @type string
* @default Version number * @default Version number
*/ */
DataTable.version = "1.10.22"; DataTable.version = "1.10.23";
/** /**
* Private data store, containing all of the settings objects that are * Private data store, containing all of the settings objects that are
@@ -13970,7 +13966,7 @@
* *
* @type string * @type string
*/ */
build:"bs4/dt-1.10.22", build:"bs4/dt-1.10.23",
/** /**

View File

@@ -4,6 +4,7 @@
<meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="robots" content="noindex,nofollow" /> <meta name="robots" content="noindex,nofollow" />
<link rel="icon" type="image/png" href="{{urlpath}}/bwrs_static/shield-white.png">
<title>Bitwarden_rs Admin Panel</title> <title>Bitwarden_rs Admin Panel</title>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" /> <link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" />
<style> <style>
@@ -73,7 +74,7 @@
<body class="bg-light"> <body class="bg-light">
<nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top"> <nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top">
<div class="container"> <div class="container-xl">
<a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Bitwarden_rs Admin</a> <a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Bitwarden_rs Admin</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse" <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse"
aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation"> aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation">
@@ -96,7 +97,7 @@
</li> </li>
{{/if}} {{/if}}
<li class="nav-item"> <li class="nav-item">
<a class="nav-link" href="{{urlpath}}/">Vault</a> <a class="nav-link" href="{{urlpath}}/" target="_blank" rel="noreferrer">Vault</a>
</li> </li>
</ul> </ul>

View File

@@ -1,4 +1,4 @@
<main class="container"> <main class="container-xl">
<div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow"> <div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-2">Diagnostics</h6> <h6 class="border-bottom pb-2 mb-2">Diagnostics</h6>
@@ -15,7 +15,7 @@
<span id="server-installed">{{version}}</span> <span id="server-installed">{{version}}</span>
</dd> </dd>
<dt class="col-sm-5">Server Latest <dt class="col-sm-5">Server Latest
<span class="badge badge-danger d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span> <span class="badge badge-secondary d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
</dt> </dt>
<dd class="col-sm-7"> <dd class="col-sm-7">
<span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span> <span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span>
@@ -27,12 +27,14 @@
<dd class="col-sm-7"> <dd class="col-sm-7">
<span id="web-installed">{{diagnostics.web_vault_version}}</span> <span id="web-installed">{{diagnostics.web_vault_version}}</span>
</dd> </dd>
{{#unless diagnostics.running_within_docker}}
<dt class="col-sm-5">Web Latest <dt class="col-sm-5">Web Latest
<span class="badge badge-danger d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span> <span class="badge badge-secondary d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
</dt> </dt>
<dd class="col-sm-7"> <dd class="col-sm-7">
<span id="web-latest">{{diagnostics.latest_web_build}}</span> <span id="web-latest">{{diagnostics.latest_web_build}}</span>
</dd> </dd>
{{/unless}}
</dl> </dl>
</div> </div>
</div> </div>
@@ -41,6 +43,40 @@
<div class="row"> <div class="row">
<div class="col-md"> <div class="col-md">
<dl class="row"> <dl class="row">
<dt class="col-sm-5">Running within Docker</dt>
<dd class="col-sm-7">
{{#if diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Uses a proxy</dt>
<dd class="col-sm-7">
{{#if diagnostics.uses_proxy}}
<span id="uses-proxy" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.uses_proxy}}
<span id="uses-proxy" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Internet access
{{#if diagnostics.has_http_access}}
<span class="badge badge-success" id="internet-success" title="We have internet access!">Ok</span>
{{/if}}
{{#unless diagnostics.has_http_access}}
<span class="badge badge-danger" id="internet-warning" title="There seems to be no internet access. Please fix.">Error</span>
{{/unless}}
</dt>
<dd class="col-sm-7">
{{#if diagnostics.has_http_access}}
<span id="internet-access" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.has_http_access}}
<span id="internet-access" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">DNS (github.com) <dt class="col-sm-5">DNS (github.com)
<span class="badge badge-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span> <span class="badge badge-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge badge-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span> <span class="badge badge-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
@@ -57,6 +93,46 @@
<span id="time-server" class="d-block"><b>Server:</b> <span id="time-server-string">{{diagnostics.server_time}}</span></span> <span id="time-server" class="d-block"><b>Server:</b> <span id="time-server-string">{{diagnostics.server_time}}</span></span>
<span id="time-browser" class="d-block"><b>Browser:</b> <span id="time-browser-string"></span></span> <span id="time-browser" class="d-block"><b>Browser:</b> <span id="time-browser-string"></span></span>
</dd> </dd>
<dt class="col-sm-5">Domain configuration
<span class="badge badge-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
<span class="badge badge-danger d-none" id="domain-warning" title="The domain variable does not matches the browsers location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
<span class="badge badge-success d-none" id="https-success" title="Configurued to use HTTPS">HTTPS</span>
<span class="badge badge-danger d-none" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
</dt>
<dd class="col-sm-7">
<span id="domain-server" class="d-block"><b>Server:</b> <span id="domain-server-string">{{diagnostics.admin_url}}</span></span>
<span id="domain-browser" class="d-block"><b>Browser:</b> <span id="domain-browser-string"></span></span>
</dd>
</dl>
</div>
</div>
<h3>Support</h3>
<div class="row">
<div class="col-md">
<dl class="row">
<dd class="col-sm-12">
If you need support, please check the following links first before you create a new issue:
<a href="https://bitwardenrs.discourse.group/" target="_blank" rel="noreferrer">Bitwarden_RS Forum</a>
| <a href="https://github.com/dani-garcia/bitwarden_rs/discussions" target="_blank" rel="noreferrer">Github Discussions</a>
</dd>
</dl>
<dl class="row">
<dd class="col-sm-12">
You can use the button below to pre-generate a string which you can copy/paste either on the forum or when creating a new issue on GitHub.<br>
We try to hide the most sensitive values from the generated support string by default, but please verify that it contains nothing you want to keep hidden!<br>
</dd>
</dl>
<dl class="row">
<dt class="col-sm-3">
<button type="button" id="gen-support" class="btn btn-primary" onclick="generateSupportString(); return false;">Generate Support String</button>
<br><br>
<button type="button" id="copy-support" class="btn btn-info d-none" onclick="copyToClipboard(); return false;">Copy To Clipboard</button>
</dt>
<dd class="col-sm-9">
<pre id="support-string" class="pre-scrollable d-none" style="width: 100%; height: 16em; size: 0.6em; border: 1px solid; padding: 4px;"></pre>
</dd>
</dl> </dl>
</div> </div>
</div> </div>
@@ -64,7 +140,13 @@
</main> </main>
<script> <script>
dnsCheck = false;
timeCheck = false;
domainCheck = false;
httpsCheck = false;
(() => { (() => {
// ================================
// Date & Time Check
const d = new Date(); const d = new Date();
const year = d.getUTCFullYear(); const year = d.getUTCFullYear();
const month = String(d.getUTCMonth()+1).padStart(2, '0'); const month = String(d.getUTCMonth()+1).padStart(2, '0');
@@ -72,7 +154,7 @@
const hour = String(d.getUTCHours()).padStart(2, '0'); const hour = String(d.getUTCHours()).padStart(2, '0');
const minute = String(d.getUTCMinutes()).padStart(2, '0'); const minute = String(d.getUTCMinutes()).padStart(2, '0');
const seconds = String(d.getUTCSeconds()).padStart(2, '0'); const seconds = String(d.getUTCSeconds()).padStart(2, '0');
const browserUTC = year + '-' + month + '-' + day + ' ' + hour + ':' + minute + ':' + seconds; const browserUTC = `${year}-${month}-${day} ${hour}:${minute}:${seconds} UTC`;
document.getElementById("time-browser-string").innerText = browserUTC; document.getElementById("time-browser-string").innerText = browserUTC;
const serverUTC = document.getElementById("time-server-string").innerText; const serverUTC = document.getElementById("time-server-string").innerText;
@@ -81,16 +163,21 @@
document.getElementById('time-warning').classList.remove('d-none'); document.getElementById('time-warning').classList.remove('d-none');
} else { } else {
document.getElementById('time-success').classList.remove('d-none'); document.getElementById('time-success').classList.remove('d-none');
timeCheck = true;
} }
// ================================
// Check if the output is a valid IP // Check if the output is a valid IP
const isValidIp = value => (/^(?:(?:^|\.)(?:2(?:5[0-5]|[0-4]\d)|1?\d?\d)){4}$/.test(value) ? true : false); const isValidIp = value => (/^(?:(?:^|\.)(?:2(?:5[0-5]|[0-4]\d)|1?\d?\d)){4}$/.test(value) ? true : false);
if (isValidIp(document.getElementById('dns-resolved').innerText)) { if (isValidIp(document.getElementById('dns-resolved').innerText)) {
document.getElementById('dns-success').classList.remove('d-none'); document.getElementById('dns-success').classList.remove('d-none');
dnsCheck = true;
} else { } else {
document.getElementById('dns-warning').classList.remove('d-none'); document.getElementById('dns-warning').classList.remove('d-none');
} }
// ================================
// Version check for both bitwarden_rs and web-vault
let serverInstalled = document.getElementById('server-installed').innerText; let serverInstalled = document.getElementById('server-installed').innerText;
let serverLatest = document.getElementById('server-latest').innerText; let serverLatest = document.getElementById('server-latest').innerText;
let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', ''); let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', '');
@@ -99,10 +186,12 @@
} }
const webInstalled = document.getElementById('web-installed').innerText; const webInstalled = document.getElementById('web-installed').innerText;
const webLatest = document.getElementById('web-latest').innerText;
checkVersions('server', serverInstalled, serverLatest, serverLatestCommit); checkVersions('server', serverInstalled, serverLatest, serverLatestCommit);
{{#unless diagnostics.running_within_docker}}
const webLatest = document.getElementById('web-latest').innerText;
checkVersions('web', webInstalled, webLatest); checkVersions('web', webInstalled, webLatest);
{{/unless}}
function checkVersions(platform, installed, latest, commit=null) { function checkVersions(platform, installed, latest, commit=null) {
if (installed === '-' || latest === '-') { if (installed === '-' || latest === '-') {
@@ -146,5 +235,68 @@
} }
} }
} }
// ================================
// Check valid DOMAIN configuration
document.getElementById('domain-browser-string').innerText = location.href.toLowerCase();
if (document.getElementById('domain-server-string').innerText.toLowerCase() == location.href.toLowerCase()) {
document.getElementById('domain-success').classList.remove('d-none');
domainCheck = true;
} else {
document.getElementById('domain-warning').classList.remove('d-none');
}
// Check for HTTPS at domain-server-string
if (document.getElementById('domain-server-string').innerText.toLowerCase().startsWith('https://') ) {
document.getElementById('https-success').classList.remove('d-none');
httpsCheck = true;
} else {
document.getElementById('https-warning').classList.remove('d-none');
}
})(); })();
</script>
// ================================
// Generate support string to be pasted on github or the forum
async function generateSupportString() {
supportString = "### Your environment (Generated via diagnostics page)\n";
supportString += "* Bitwarden_rs version: v{{ version }}\n";
supportString += "* Web-vault version: v{{ diagnostics.web_vault_version }}\n";
supportString += "* Running within Docker: {{ diagnostics.running_within_docker }}\n";
supportString += "* Internet access: {{ diagnostics.has_http_access }}\n";
supportString += "* Uses a proxy: {{ diagnostics.uses_proxy }}\n";
supportString += "* DNS Check: " + dnsCheck + "\n";
supportString += "* Time Check: " + timeCheck + "\n";
supportString += "* Domain Configuration Check: " + domainCheck + "\n";
supportString += "* HTTPS Check: " + httpsCheck + "\n";
supportString += "* Database type: {{ diagnostics.db_type }}\n";
{{#case diagnostics.db_type "MySQL" "PostgreSQL"}}
supportString += "* Database version: [PLEASE PROVIDE DATABASE VERSION]\n";
{{/case}}
supportString += "* Clients used: \n";
supportString += "* Reverse proxy and version: \n";
supportString += "* Other relevant information: \n";
jsonResponse = await fetch('{{urlpath}}/admin/diagnostics/config');
configJson = await jsonResponse.json();
supportString += "\n### Config (Generated via diagnostics page)\n```json\n" + JSON.stringify(configJson, undefined, 2) + "\n```\n";
document.getElementById('support-string').innerText = supportString;
document.getElementById('support-string').classList.remove('d-none');
document.getElementById('copy-support').classList.remove('d-none');
}
function copyToClipboard() {
const str = document.getElementById('support-string').innerText;
const el = document.createElement('textarea');
el.value = str;
el.setAttribute('readonly', '');
el.style.position = 'absolute';
el.style.left = '-9999px';
document.body.appendChild(el);
el.select();
document.execCommand('copy');
document.body.removeChild(el);
}
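// Side note on the helper above: the hidden-textarea + document.execCommand('copy')
// approach is the legacy, widely-supported path. In secure contexts, the async
// Clipboard API is a simpler alternative -- a hedged sketch, not part of this diff:
//
//   navigator.clipboard.writeText(str).catch(function () {
//     // fall back to the textarea + execCommand approach above
//   });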
</script>

View File

@@ -1,4 +1,4 @@
<main class="container"> <main class="container-xl">
{{#if error}} {{#if error}}
<div class="align-items-center p-3 mb-3 text-white-50 bg-warning rounded shadow"> <div class="align-items-center p-3 mb-3 text-white-50 bg-warning rounded shadow">
<div> <div>

View File

@@ -1,4 +1,4 @@
<main class="container"> <main class="container-xl">
<div id="organizations-block" class="my-3 p-3 bg-white rounded shadow"> <div id="organizations-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Organizations</h6> <h6 class="border-bottom pb-2 mb-3">Organizations</h6>
@@ -10,6 +10,7 @@
<th>Users</th> <th>Users</th>
<th>Items</th> <th>Items</th>
<th>Attachments</th> <th>Attachments</th>
<th style="width: 120px; min-width: 120px;">Actions</th>
</tr> </tr>
</thead> </thead>
<tbody> <tbody>
@@ -37,6 +38,9 @@
<span class="d-block"><strong>Size:</strong> {{attachment_size}}</span> <span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
{{/if}} {{/if}}
</td> </td>
<td style="font-size: 90%; text-align: right; padding-right: 15px">
<a class="d-block" href="#" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</a>
</td>
</tr> </tr>
{{/each}} {{/each}}
</tbody> </tbody>
@@ -50,6 +54,25 @@
<script src="{{urlpath}}/bwrs_static/jquery-3.5.1.slim.js"></script> <script src="{{urlpath}}/bwrs_static/jquery-3.5.1.slim.js"></script>
<script src="{{urlpath}}/bwrs_static/datatables.js"></script> <script src="{{urlpath}}/bwrs_static/datatables.js"></script>
<script> <script>
function deleteOrganization(id, name, billing_email) {
// First make sure the user wants to delete this organization
var continueDelete = confirm("WARNING: All data of this organization ("+ name +") will be lost!\nMake sure you have a backup, this cannot be undone!");
if (continueDelete == true) {
var input_org_uuid = prompt("To delete the organization '" + name + " (" + billing_email +")', please type the organization uuid below.")
if (input_org_uuid != null) {
if (input_org_uuid == id) {
_post("{{urlpath}}/admin/organizations/" + id + "/delete",
"Organization deleted correctly",
"Error deleting organization");
} else {
alert("Wrong organization uuid, please try again")
}
}
}
return false;
}
document.querySelectorAll("img.identicon").forEach(function (e, i) { document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src); e.src = identicon(e.dataset.src);
}); });
@@ -59,6 +82,9 @@
"responsive": true, "responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ], "lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all "pageLength": -1, // Default show all
"columnDefs": [
{ "targets": 4, "searchable": false, "orderable": false }
]
}); });
}); });
</script> </script>

View File

@@ -1,4 +1,4 @@
<main class="container"> <main class="container-xl">
<div id="config-block" class="align-items-center p-3 mb-3 bg-secondary rounded shadow"> <div id="config-block" class="align-items-center p-3 mb-3 bg-secondary rounded shadow">
<div> <div>
<h6 class="text-white mb-3">Configuration</h6> <h6 class="text-white mb-3">Configuration</h6>
@@ -17,7 +17,7 @@
<div id="g_{{group}}" class="card-body collapse" data-parent="#config-form"> <div id="g_{{group}}" class="card-body collapse" data-parent="#config-form">
{{#each elements}} {{#each elements}}
{{#if editable}} {{#if editable}}
<div class="form-group row" title="[{{name}}] {{doc.description}}"> <div class="form-group row align-items-center" title="[{{name}}] {{doc.description}}">
{{#case type "text" "number" "password"}} {{#case type "text" "number" "password"}}
<label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label> <label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label>
<div class="col-sm-8 input-group"> <div class="col-sm-8 input-group">
@@ -34,7 +34,7 @@
</div> </div>
{{/case}} {{/case}}
{{#case type "checkbox"}} {{#case type "checkbox"}}
<div class="col-sm-3">{{doc.name}}</div> <div class="col-sm-3 col-form-label">{{doc.name}}</div>
<div class="col-sm-8"> <div class="col-sm-8">
<div class="form-check"> <div class="form-check">
<input class="form-check-input conf-{{type}}" type="checkbox" id="input_{{name}}" <input class="form-check-input conf-{{type}}" type="checkbox" id="input_{{name}}"
@@ -48,7 +48,7 @@
{{/if}} {{/if}}
{{/each}} {{/each}}
{{#case group "smtp"}} {{#case group "smtp"}}
<div class="form-group row pt-3 border-top" title="Send a test email to given email address"> <div class="form-group row align-items-center pt-3 border-top" title="Send a test email to given email address">
<label for="smtp-test-email" class="col-sm-3 col-form-label">Test SMTP</label> <label for="smtp-test-email" class="col-sm-3 col-form-label">Test SMTP</label>
<div class="col-sm-8 input-group"> <div class="col-sm-8 input-group">
<input class="form-control" id="smtp-test-email" type="email" placeholder="Enter test email"> <input class="form-control" id="smtp-test-email" type="email" placeholder="Enter test email">
@@ -76,7 +76,7 @@
{{#each config}} {{#each config}}
{{#each elements}} {{#each elements}}
{{#unless editable}} {{#unless editable}}
<div class="form-group row" title="[{{name}}] {{doc.description}}"> <div class="form-group row align-items-center" title="[{{name}}] {{doc.description}}">
{{#case type "text" "number" "password"}} {{#case type "text" "number" "password"}}
<label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label> <label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label>
<div class="col-sm-8 input-group"> <div class="col-sm-8 input-group">
@@ -92,9 +92,9 @@
</div> </div>
{{/case}} {{/case}}
{{#case type "checkbox"}} {{#case type "checkbox"}}
<div class="col-sm-3">{{doc.name}}</div> <div class="col-sm-3 col-form-label">{{doc.name}}</div>
<div class="col-sm-8"> <div class="col-sm-8">
<div class="form-check"> <div class="form-check align-middle">
<input disabled class="form-check-input" type="checkbox" id="input_{{name}}" <input disabled class="form-check-input" type="checkbox" id="input_{{name}}"
{{#if value}} checked {{/if}}> {{#if value}} checked {{/if}}>
@@ -139,6 +139,10 @@
<script> <script>
function smtpTest() { function smtpTest() {
if (formHasChanges(config_form)) {
alert("Config has been changed but not yet saved.\nPlease save the changes first before sending a test email.");
return false;
}
test_email = document.getElementById("smtp-test-email"); test_email = document.getElementById("smtp-test-email");
data = JSON.stringify({ "email": test_email.value }); data = JSON.stringify({ "email": test_email.value });
_post("{{urlpath}}/admin/test/smtp/", _post("{{urlpath}}/admin/test/smtp/",
@@ -205,4 +209,35 @@
// {{#each config}} {{#if grouptoggle}} // {{#each config}} {{#if grouptoggle}}
masterCheck("input_{{grouptoggle}}", "#g_{{group}} input"); masterCheck("input_{{grouptoggle}}", "#g_{{group}} input");
// {{/if}} {{/each}} // {{/if}} {{/each}}
// Two functions to help check if there were changes to the form fields
// Useful, for example, during the SMTP test to prevent people from clicking save before testing their new settings
function initChangeDetection(form) {
const ignore_fields = ["smtp-test-email"];
Array.from(form).forEach((el) => {
if (! ignore_fields.includes(el.id)) {
el.dataset.origValue = el.value
}
});
}
function formHasChanges(form) {
return Array.from(form).some(el => 'origValue' in el.dataset && ( el.dataset.origValue !== el.value));
}
// Trigger Form Change Detection
const config_form = document.getElementById('config-form');
initChangeDetection(config_form);
// Colorize some settings which are high risk
const risk_items = document.getElementsByClassName('col-form-label');
function colorRiskSettings(risk_el) {
Array.from(risk_el).forEach((el) => {
if (el.innerText.toLowerCase().includes('risks') ) {
el.parentElement.className += ' alert-danger'
}
});
}
colorRiskSettings(risk_items);
</script> </script>

View File

@@ -1,4 +1,4 @@
<main class="container"> <main class="container-xl">
<div id="users-block" class="my-3 p-3 bg-white rounded shadow"> <div id="users-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Registered Users</h6> <h6 class="border-bottom pb-2 mb-3">Registered Users</h6>
@@ -7,10 +7,12 @@
<thead> <thead>
<tr> <tr>
<th>User</th> <th>User</th>
<th style="width:60px; min-width: 60px;">Items</th> <th style="width:65px; min-width: 65px;">Created at</th>
<th style="width:70px; min-width: 65px;">Last Active</th>
<th style="width:35px; min-width: 35px;">Items</th>
<th>Attachments</th> <th>Attachments</th>
<th style="min-width: 120px;">Organizations</th> <th style="min-width: 120px;">Organizations</th>
<th style="width: 140px; min-width: 140px;">Actions</th> <th style="width: 120px; min-width: 120px;">Actions</th>
</tr> </tr>
</thead> </thead>
<tbody> <tbody>
@@ -22,6 +24,9 @@
<strong>{{Name}}</strong> <strong>{{Name}}</strong>
<span class="d-block">{{Email}}</span> <span class="d-block">{{Email}}</span>
<span class="d-block"> <span class="d-block">
{{#unless user_enabled}}
<span class="badge badge-danger mr-2" title="User is disabled">Disabled</span>
{{/unless}}
{{#if TwoFactorEnabled}} {{#if TwoFactorEnabled}}
<span class="badge badge-success mr-2" title="2FA is enabled">2FA</span> <span class="badge badge-success mr-2" title="2FA is enabled">2FA</span>
{{/if}} {{/if}}
@@ -34,6 +39,12 @@
</span> </span>
</div> </div>
</td> </td>
<td>
<span class="d-block">{{created_at}}</span>
</td>
<td>
<span class="d-block">{{last_active}}</span>
</td>
<td> <td>
<span class="d-block">{{cipher_count}}</span> <span class="d-block">{{cipher_count}}</span>
</td> </td>
@@ -44,9 +55,11 @@
{{/if}} {{/if}}
</td> </td>
<td> <td>
<div class="overflow-auto" style="max-height: 120px;">
{{#each Organizations}} {{#each Organizations}}
<span class="badge badge-primary" data-orgtype="{{Type}}">{{Name}}</span> <button class="badge badge-primary" data-toggle="modal" data-target="#userOrgTypeDialog" data-orgtype="{{Type}}" data-orguuid="{{jsesc Id no_quote}}" data-orgname="{{jsesc Name no_quote}}" data-useremail="{{jsesc ../Email no_quote}}" data-useruuid="{{jsesc ../Id no_quote}}">{{Name}}</button>
{{/each}} {{/each}}
</div>
</td> </td>
<td style="font-size: 90%; text-align: right; padding-right: 15px"> <td style="font-size: 90%; text-align: right; padding-right: 15px">
{{#if TwoFactorEnabled}} {{#if TwoFactorEnabled}}
@@ -54,6 +67,11 @@
{{/if}} {{/if}}
<a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a> <a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
<a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a> <a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
{{#if user_enabled}}
<a class="d-block" href="#" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</a>
{{else}}
<a class="d-block" href="#" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</a>
{{/if}}
</td> </td>
</tr> </tr>
{{/each}} {{/each}}
@@ -82,6 +100,41 @@
</form> </form>
</div> </div>
</div> </div>
<div id="userOrgTypeDialog" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered modal-sm">
<div class="modal-content">
<div class="modal-header">
<h6 class="modal-title" id="userOrgTypeDialogTitle"></h6>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">&times;</span>
</button>
</div>
<form class="form" id="userOrgTypeForm" onsubmit="updateUserOrgType(); return false;">
<input type="hidden" name="user_uuid" id="userOrgTypeUserUuid" value="">
<input type="hidden" name="org_uuid" id="userOrgTypeOrgUuid" value="">
<div class="modal-body">
<div class="radio">
<label><input type="radio" value="2" class="form-radio-input" name="user_type" id="userOrgTypeUser">&nbsp;User</label>
</div>
<div class="radio">
<label><input type="radio" value="3" class="form-radio-input" name="user_type" id="userOrgTypeManager">&nbsp;Manager</label>
</div>
<div class="radio">
<label><input type="radio" value="1" class="form-radio-input" name="user_type" id="userOrgTypeAdmin">&nbsp;Admin</label>
</div>
<div class="radio">
<label><input type="radio" value="0" class="form-radio-input" name="user_type" id="userOrgTypeOwner">&nbsp;Owner</label>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-sm btn-secondary" data-dismiss="modal">Cancel</button>
<button type="submit" class="btn btn-sm btn-primary">Change Role</button>
</div>
</form>
</div>
</div>
</div>
</main> </main>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/datatables.css" /> <link rel="stylesheet" href="{{urlpath}}/bwrs_static/datatables.css" />
@@ -113,6 +166,24 @@
"Error deauthorizing sessions"); "Error deauthorizing sessions");
return false; return false;
} }
function disableUser(id, mail) {
var confirmed = confirm("Are you sure you want to disable user '" + mail + "'? This will also deauthorize their sessions.")
if (confirmed) {
_post("{{urlpath}}/admin/users/" + id + "/disable",
"User disabled successfully",
"Error disabling user");
}
return false;
}
function enableUser(id, mail) {
var confirmed = confirm("Are you sure you want to enable user '" + mail + "'?")
if (confirmed) {
_post("{{urlpath}}/admin/users/" + id + "/enable",
"User enabled successfully",
"Error enabling user");
}
return false;
}
function updateRevisions() { function updateRevisions() {
_post("{{urlpath}}/admin/users/update_revision", _post("{{urlpath}}/admin/users/update_revision",
"Success, clients will sync next time they connect", "Success, clients will sync next time they connect",
@@ -145,18 +216,76 @@
e.title = orgtype.name; e.title = orgtype.name;
}); });
// Special sort function to sort dates in ISO format
jQuery.extend( jQuery.fn.dataTableExt.oSort, {
"date-iso-pre": function ( a ) {
let x;
let sortDate = a.replace(/(<([^>]+)>)/gi, "").trim();
if ( sortDate !== '' ) {
let dtParts = sortDate.split(' ');
var timeParts = (undefined != dtParts[1]) ? dtParts[1].split(':') : [00,00,00];
var dateParts = dtParts[0].split('-');
x = (dateParts[0] + dateParts[1] + dateParts[2] + timeParts[0] + timeParts[1] + ((undefined != timeParts[2]) ? timeParts[2] : 0)) * 1;
if ( isNaN(x) ) {
x = 0;
}
} else {
x = Infinity;
}
return x;
},
"date-iso-asc": function ( a, b ) {
return a - b;
},
"date-iso-desc": function ( a, b ) {
return b - a;
}
});
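// Hedged illustration of the "-pre" transform above: tags are stripped and the
// date parts are concatenated into one sortable number, e.g.
//   "2021-02-06 16:49:49"  ->  20210206164949
//   "2021-02-06"           ->  20210206000   (missing time parts become 0)
//   ""                     ->  Infinity      (rows without a date sort last)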
document.addEventListener("DOMContentLoaded", function(event) { document.addEventListener("DOMContentLoaded", function(event) {
$('#users-table').DataTable({ $('#users-table').DataTable({
"responsive": true, "responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ], "lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all "pageLength": -1, // Default show all
"columns": [ "columnDefs": [
null, // Userdata { "targets": [1,2], "type": "date-iso" },
null, // Items { "targets": 6, "searchable": false, "orderable": false }
null, // Attachments ]
null, // Organizations
{ "searchable": false, "orderable": false }, // Actions
],
}); });
}); });
var userOrgTypeDialog = document.getElementById('userOrgTypeDialog');
// Fill the form and title
userOrgTypeDialog.addEventListener('show.bs.modal', function(event){
let userOrgType = event.relatedTarget.getAttribute("data-orgtype");
let userOrgTypeName = OrgTypes[userOrgType]["name"];
let orgName = event.relatedTarget.getAttribute("data-orgname");
let userEmail = event.relatedTarget.getAttribute("data-useremail");
let orgUuid = event.relatedTarget.getAttribute("data-orguuid");
let userUuid = event.relatedTarget.getAttribute("data-useruuid");
document.getElementById("userOrgTypeDialogTitle").innerHTML = "<b>Update User Type:</b><br><b>Organization:</b> " + orgName + "<br><b>User:</b> " + userEmail;
document.getElementById("userOrgTypeUserUuid").value = userUuid;
document.getElementById("userOrgTypeOrgUuid").value = orgUuid;
document.getElementById("userOrgType"+userOrgTypeName).checked = true;
}, false);
// Prevent accidental re-submission of the form with stale values after the modal has been hidden.
userOrgTypeDialog.addEventListener('hide.bs.modal', function(event){
document.getElementById("userOrgTypeDialogTitle").innerHTML = '';
document.getElementById("userOrgTypeUserUuid").value = '';
document.getElementById("userOrgTypeOrgUuid").value = '';
}, false);
function updateUserOrgType() {
let orgForm = document.getElementById("userOrgTypeForm");
const data = JSON.stringify(Object.fromEntries(new FormData(orgForm).entries()));
_post("{{urlpath}}/admin/users/org_type",
"Updated organization type of the user successfully",
"Error updating organization type of the user", data);
return false;
}
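// For reference, a hedged example of the JSON payload the FormData conversion
// above produces from this dialog's fields:
//   {"user_uuid":"<uuid>","org_uuid":"<uuid>","user_type":"3"}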
</script> </script>

View File

@@ -1,6 +1,8 @@
Your Email Change Your Email Change
<!----------------> <!---------------->
<html> To finalize changing your email address enter the following code in web vault: {{token}}
<p>To finalize changing your email address enter the following code in web vault: <b>{{token}}</b></p>
<p>If you did not try to change an email address, you can safely ignore this email.</p> If you did not try to change an email address, you can safely ignore this email.
</html>
===
Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,12 +1,10 @@
Delete Your Account Delete Your Account
<!----------------> <!---------------->
<html> Click the link below to delete your account.
<p>
click the link below to delete your account. Delete Your Account: {{url}}/#/verify-recover-delete?userId={{user_id}}&token={{token}}&email={{email}}
<br>
<br> If you did not request this email to delete your account, you can safely ignore this email.
<a href="{{url}}/#/verify-recover-delete?userId={{user_id}}&token={{token}}&email={{email}}">
Delete Your Account</a> ===
</p> Github: https://github.com/dani-garcia/bitwarden_rs
<p>If you did not request this email to delete your account, you can safely ignore this email.</p>
</html>

View File

@@ -1,8 +1,7 @@
Invitation to {{{org_name}}} accepted Invitation to {{{org_name}}} accepted
<!----------------> <!---------------->
<html> Your invitation for *{{email}}* to join *{{org_name}}* was accepted.
<p> Please log in via {{url}} to the bitwarden_rs server and confirm them from the organization management page.
Your invitation for <b>{{email}}</b> to join <b>{{org_name}}</b> was accepted.
Please <a href="{{url}}/">log in</a> to the bitwarden_rs server and confirm them from the organization management page. ===
</p> Github: https://github.com/dani-garcia/bitwarden_rs
</html>

View File

@@ -1,8 +1,7 @@
Invitation to {{{org_name}}} confirmed Invitation to {{{org_name}}} confirmed
<!----------------> <!---------------->
<html> Your invitation to join *{{org_name}}* was confirmed.
<p> It will now appear under the Organizations the next time you log in to the web vault at {{url}}.
Your invitation to join <b>{{org_name}}</b> was confirmed.
It will now appear under the Organizations the next time you <a href="{{url}}/">log in</a> to the web vault. ===
</p> Github: https://github.com/dani-garcia/bitwarden_rs
</html>

View File

@@ -1,14 +1,12 @@
New Device Logged In From {{{device}}} New Device Logged In From {{{device}}}
<!----------------> <!---------------->
<html> Your account was just logged into from a new device.
<p>
Your account was just logged into from a new device.
Date: {{datetime}} * Date: {{datetime}}
IP Address: {{ip}} * IP Address: {{ip}}
Device Type: {{device}} * Device Type: {{device}}
You can deauthorize all devices that have access to your account from the You can deauthorize all devices that have access to your account from the web vault ( {{url}} ) under Settings > My Account > Deauthorize Sessions.
<a href="{{url}}/">web vault</a> under Settings > My Account > Deauthorize Sessions.
</p> ===
</html> Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -2,6 +2,9 @@ Your master password hint
<!----------------> <!---------------->
You (or someone) recently requested your master password hint. Unfortunately, your account does not have a master password hint. You (or someone) recently requested your master password hint. Unfortunately, your account does not have a master password hint.
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted. If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to delete the account ( {{url}}/#/recover-delete ) so that you can register again and start over. All data associated with your account will be deleted.
If you did not request your master password hint you can safely ignore this email. If you did not request your master password hint you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -2,9 +2,12 @@ Your master password hint
<!----------------> <!---------------->
You (or someone) recently requested your master password hint. You (or someone) recently requested your master password hint.
Your hint is: "{{hint}}" Your hint is: *{{hint}}*
Log in: <a href="{{url}}/">Web Vault</a> Log in to the web vault: {{url}}
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted. If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to delete the account ( {{url}}/#/recover-delete ) so that you can register again and start over. All data associated with your account will be deleted.
If you did not request your master password hint you can safely ignore this email. If you did not request your master password hint you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,12 +1,12 @@
Join {{{org_name}}} Join {{{org_name}}}
<!----------------> <!---------------->
<html> You have been invited to join the *{{org_name}}* organization.
<p>
You have been invited to join the <b>{{org_name}}</b> organization.
<br> Click here to join: {{url}}/#/accept-organization/?organizationId={{org_id}}&organizationUserId={{org_user_id}}&email={{email}}&organizationName={{org_name}}&token={{token}}
<br>
<a href="{{url}}/#/accept-organization/?organizationId={{org_id}}&organizationUserId={{org_user_id}}&email={{email}}&organizationName={{org_name}}&token={{token}}">
Click here to join</a> If you do not wish to join this organization, you can safely ignore this email.
</p>
<p>If you do not wish to join this organization, you can safely ignore this email.</p> ===
</html> Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,8 +1,8 @@
Bitwarden_rs SMTP Test Bitwarden_rs SMTP Test
<!----------------> <!---------------->
<html> This is a test email to verify the SMTP configuration for {{url}}.
<p>
This is a test email to verify the SMTP configuration for <a href="{{url}}">{{url}}</a>. When you can read this email it is probably configured correctly.
</p>
<p>When you can read this email it is probably configured correctly.</p> ===
</html> Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,9 +1,8 @@
Your Two-step Login Verification Code Your Two-step Login Verification Code
<!----------------> <!---------------->
<html> Your two-step verification code is: {{token}}
<p>
Your two-step verification code is: <b>{{token}}</b>
Use this code to complete logging in with Bitwarden. Use this code to complete logging in with Bitwarden.
</p>
</html> ===
Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,12 +1,10 @@
Verify Your Email Verify Your Email
<!----------------> <!---------------->
<html>
<p>
Verify this email address for your account by clicking the link below. Verify this email address for your account by clicking the link below.
<br>
<br> Verify Email Address Now: {{url}}/#/verify-email/?userId={{user_id}}&token={{token}}
<a href="{{url}}/#/verify-email/?userId={{user_id}}&token={{token}}">
Verify Email Address Now</a> If you did not request to verify your account, you can safely ignore this email.
</p>
<p>If you did not request to verify your account, you can safely ignore this email.</p> ===
</html> Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,8 +1,8 @@
Welcome Welcome
<!----------------> <!---------------->
<html> Thank you for creating an account at {{url}}. You may now log in with your new account.
<p>
Thank you for creating an account at <a href="{{url}}/">{{url}}</a>. You may now log in with your new account. If you did not request to create an account, you can safely ignore this email.
</p>
<p>If you did not request to create an account, you can safely ignore this email.</p> ===
</html> Github: https://github.com/dani-garcia/bitwarden_rs

View File

@@ -1,12 +1,10 @@
Welcome Welcome
<!----------------> <!---------------->
<html> Thank you for creating an account at {{url}}. Before you can login with your new account, you must verify this email address by clicking the link below.
<p>
Thank you for creating an account at <a href="{{url}}/">{{url}}</a>. Before you can login with your new account, you must verify this email address by clicking the link below. Verify Email Address Now: {{url}}/#/verify-email/?userId={{user_id}}&token={{token}}
<br>
<br> If you did not request to create an account, you can safely ignore this email.
<a href="{{url}}/#/verify-email/?userId={{user_id}}&token={{token}}">
Verify Email Address Now</a> ===
</p> Github: https://github.com/dani-garcia/bitwarden_rs
<p>If you did not request to create an account, you can safely ignore this email.</p>
</html>

View File

@@ -283,20 +283,37 @@ where
use std::env; use std::env;
pub fn get_env_str_value(key: &str) -> Option<String>
{
let key_file = format!("{}_FILE", key);
let value_from_env = env::var(key);
let value_file = env::var(&key_file);
match (value_from_env, value_file) {
(Ok(_), Ok(_)) => panic!("You should not define both {} and {}!", key, key_file),
(Ok(v_env), Err(_)) => Some(v_env),
(Err(_), Ok(v_file)) => match fs::read_to_string(v_file) {
Ok(content) => Some(content.trim().to_string()),
Err(e) => panic!("Failed to load {}: {:?}", key, e)
},
_ => None
}
}
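// Hedged usage sketch (not part of the diff): with the helper above, any config
// value can be supplied directly or via a file, e.g. a Docker secret:
//
//   SMTP_PASSWORD=hunter2                      -> Some("hunter2")
//   SMTP_PASSWORD_FILE=/run/secrets/smtp_pass  -> the file's trimmed contents
//   both defined                               -> panic, refusing ambiguous config
//
// let smtp_password: Option<String> = get_env("SMTP_PASSWORD");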
pub fn get_env<V>(key: &str) -> Option<V> pub fn get_env<V>(key: &str) -> Option<V>
where where
V: FromStr, V: FromStr,
{ {
try_parse_string(env::var(key).ok()) try_parse_string(get_env_str_value(key))
} }
const TRUE_VALUES: &[&str] = &["true", "t", "yes", "y", "1"]; const TRUE_VALUES: &[&str] = &["true", "t", "yes", "y", "1"];
const FALSE_VALUES: &[&str] = &["false", "f", "no", "n", "0"]; const FALSE_VALUES: &[&str] = &["false", "f", "no", "n", "0"];
pub fn get_env_bool(key: &str) -> Option<bool> { pub fn get_env_bool(key: &str) -> Option<bool> {
match env::var(key) { match get_env_str_value(key) {
Ok(val) if TRUE_VALUES.contains(&val.to_lowercase().as_ref()) => Some(true), Some(val) if TRUE_VALUES.contains(&val.to_lowercase().as_ref()) => Some(true),
Ok(val) if FALSE_VALUES.contains(&val.to_lowercase().as_ref()) => Some(false), Some(val) if FALSE_VALUES.contains(&val.to_lowercase().as_ref()) => Some(false),
_ => None, _ => None,
} }
} }
@@ -305,12 +322,40 @@ pub fn get_env_bool(key: &str) -> Option<bool> {
// Date util methods // Date util methods
// //
use chrono::NaiveDateTime; use chrono::{DateTime, Local, NaiveDateTime, TimeZone};
use chrono_tz::Tz;
const DATETIME_FORMAT: &str = "%Y-%m-%dT%H:%M:%S%.6fZ"; /// Formats a UTC-offset `NaiveDateTime` in the format used by Bitwarden API
/// responses with "date" fields (`CreationDate`, `RevisionDate`, etc.).
pub fn format_date(dt: &NaiveDateTime) -> String {
dt.format("%Y-%m-%dT%H:%M:%S%.6fZ").to_string()
}
pub fn format_date(date: &NaiveDateTime) -> String { /// Formats a `DateTime<Local>` using the specified format string.
date.format(DATETIME_FORMAT).to_string() ///
/// For a `DateTime<Local>`, the `%Z` specifier normally formats as the
/// time zone's UTC offset (e.g., `+00:00`). In this function, if the
/// `TZ` environment variable is set, then `%Z` instead formats as the
/// abbreviation for that time zone (e.g., `UTC`).
pub fn format_datetime_local(dt: &DateTime<Local>, fmt: &str) -> String {
// Try parsing the `TZ` environment variable to enable formatting `%Z` as
// a time zone abbreviation.
if let Ok(tz) = env::var("TZ") {
if let Ok(tz) = tz.parse::<Tz>() {
return dt.with_timezone(&tz).format(fmt).to_string();
}
}
// Otherwise, fall back to formatting `%Z` as a UTC offset.
dt.format(fmt).to_string()
}
/// Formats a UTC-offset `NaiveDateTime` as a datetime in the local time zone.
///
/// This function basically converts the `NaiveDateTime` to a `DateTime<Local>`,
/// and then calls [format_datetime_local](crate::util::format_datetime_local).
pub fn format_naive_datetime_local(dt: &NaiveDateTime, fmt: &str) -> String {
format_datetime_local(&Local.from_utc_datetime(dt), fmt)
} }
// //

View File

@@ -10,16 +10,17 @@ import urllib.request
from collections import OrderedDict from collections import OrderedDict
if len(sys.argv) != 2: if not (2 <= len(sys.argv) <= 3):
print("usage: %s <OUTPUT-FILE>" % sys.argv[0]) print("usage: %s <OUTPUT-FILE> [GIT-REF]" % sys.argv[0])
print() print()
print("This script generates a global equivalent domains JSON file from") print("This script generates a global equivalent domains JSON file from")
print("the upstream Bitwarden source repo.") print("the upstream Bitwarden source repo.")
sys.exit(1) sys.exit(1)
OUTPUT_FILE = sys.argv[1] OUTPUT_FILE = sys.argv[1]
GIT_REF = 'master' if len(sys.argv) == 2 else sys.argv[2]
BASE_URL = 'https://github.com/bitwarden/server/raw/master' BASE_URL = 'https://github.com/bitwarden/server/raw/%s' % GIT_REF
ENUMS_URL = '%s/src/Core/Enums/GlobalEquivalentDomainsType.cs' % BASE_URL ENUMS_URL = '%s/src/Core/Enums/GlobalEquivalentDomainsType.cs' % BASE_URL
DOMAIN_LISTS_URL = '%s/src/Core/Utilities/StaticStore.cs' % BASE_URL DOMAIN_LISTS_URL = '%s/src/Core/Utilities/StaticStore.cs' % BASE_URL
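# Hedged usage sketch of the new optional GIT-REF argument (paths illustrative):
#   python global_domains.py global_domains.json          # syncs to master, as before
#   python global_domains.py global_domains.json 1.40.0   # syncs to a tag, branch, or commit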