Compare commits

...

109 Commits

Author SHA1 Message Date
Daniel García
1e5306b820 Remove warning when compiling only with mysql and add compatibility mode with the old docker script names 2021-04-29 16:01:04 +02:00
Daniel García
6890c25ea1 Merge pull request #1636 from rkowalewski/fix-libressl-332
Update openssl crate to support LibreSSL 3.3.2
2021-04-29 16:00:14 +02:00
rkowalewski
48482fece0 Merge branch 'main' into fix-libressl-332 2021-04-29 08:34:10 +02:00
Roger Kowalewski
1dc1d4df72 update openssl crate to support LibreSSL 3.3.2 2021-04-29 10:04:08 +02:00
Daniel García
2b4dd6f137 Fix branch name 2021-04-28 21:46:20 +02:00
Daniel García
3da44a8d30 Fix formatting 2021-04-27 23:39:36 +02:00
Daniel García
34ea10475d Project renaming 2021-04-27 23:18:32 +02:00
Daniel García
ced7f1771a Update dependencies 2021-04-15 18:38:00 +02:00
Daniel García
af2235bf88 Merge branch 'RealOrangeOne-fmt' 2021-04-15 18:30:50 +02:00
Daniel García
305de2e2cd Format the changes from merge to master 2021-04-15 18:30:23 +02:00
Daniel García
8756c5c255 Merge branch 'fmt' of https://github.com/RealOrangeOne/bitwarden_rs into RealOrangeOne-fmt 2021-04-15 18:29:03 +02:00
Daniel García
27609ac4cc Update README.md 2021-04-15 18:27:05 +02:00
Daniel García
95d906bdbb Merge branch 'master' into fmt 2021-04-15 18:24:04 +02:00
Daniel García
4bb0d7bc05 Merge pull request #1587 from RealOrangeOne/request-proxy
Allow outbound requests to go via a proxy
2021-04-15 17:40:39 +02:00
Daniel García
d9599155ae Merge pull request #1602 from jjlin/backup-warning
Warn that the SQLite backup feature doesn't produce a complete backup
2021-04-15 17:38:31 +02:00
Jeremy Lin
244bad3a24 Warn that the SQLite backup feature doesn't produce a complete backup
Also add a link to the wiki page on backups.
2021-04-09 22:30:39 -07:00
Jake Howard
f7056bcaa5 Enable socks feature for reqwest
This allows HTTP_PROXY to be set to a SOCKS5 proxy
2021-04-07 19:25:02 +01:00
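A minimal sketch of what the socks feature enables (the proxy URL and function name are illustrative, not taken from the codebase): reqwest can now route outbound requests through a SOCKS5 proxy, whether picked up from `HTTP_PROXY`/`HTTPS_PROXY` or configured explicitly on the client builder.

```rust
// Illustrative only: with reqwest's "socks" feature enabled, socks5:// proxy URLs
// are accepted, e.g. HTTP_PROXY=socks5://127.0.0.1:1080 or an explicit builder call.
fn proxied_client() -> reqwest::Result<reqwest::blocking::Client> {
    let proxy = reqwest::Proxy::all("socks5://127.0.0.1:1080")?;
    reqwest::blocking::Client::builder().proxy(proxy).build()
}
```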
Jake Howard
994669fb69 Merge remote-tracking branch 'origin/master' into fmt 2021-04-06 21:55:28 +01:00
Jake Howard
3ab90259f2 Modify rustfmt file 2021-04-06 21:54:42 +01:00
Jake Howard
155109dea1 Extract client creation to a single place 2021-04-06 21:04:37 +01:00
Daniel García
b268c3dd1c Update web vault and add unofficialServer response 2021-04-06 20:38:22 +02:00
Daniel García
4e64dbdde4 Merge pull request #1579 from jjlin/job-scheduler
Add support for auto-deleting trashed items
2021-04-06 19:48:49 +02:00
Daniel García
a2955daffe Merge pull request #1576 from jjlin/global-domains
Sync global_domains.json
2021-04-06 19:36:11 +02:00
Daniel García
d3921b973b Merge pull request #1583 from BlackDex/icon-updates
Updated icon fetching.
2021-04-06 19:35:51 +02:00
Daniel García
cf6ad3cb15 Merge pull request #1584 from BlackDex/admin-interface
Some admin interface updates.
2021-04-06 19:33:15 +02:00
Jeremy Lin
90e0b7fec6 Offset scheduled jobs by 5 minutes
This is intended to avoid contention with database backups that many users
probably schedule to start at exactly the top of an hour.
2021-04-05 23:20:08 -07:00
Jeremy Lin
d77333576b Add support for auto-deleting trashed items
Upstream will soon auto-delete trashed items after 30 days, but some people
use the trash as an archive folder, so to avoid unexpected data loss, this
implementation requires the user to explicitly enable auto-deletion.
2021-04-05 23:07:25 -07:00
Jeremy Lin
73ff8d79f7 Add a generic job scheduler
Also rewrite deletion of old sends using the job scheduler.
2021-04-05 23:07:15 -07:00
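A minimal sketch of how the `job_scheduler` crate (added to Cargo.toml in this changeset) can drive such recurring jobs; the closure body is illustrative, and the cron expression matches the `SEND_PURGE_SCHEDULE` default shown in the env template below.

```rust
use job_scheduler::{Job, JobScheduler};
use std::time::Duration;

fn main() {
    let mut sched = JobScheduler::new();
    // Run five minutes past every hour (cron syntax as parsed by the `cron` crate).
    sched.add(Job::new("0 5 * * * *".parse().unwrap(), || {
        println!("purging Sends past their deletion date");
    }));
    loop {
        // Poll for due jobs, then sleep for the poll interval (30s here).
        sched.tick();
        std::thread::sleep(Duration::from_millis(30_000));
    }
}
```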
BlackDex
95fc88ae5b Some admin interface updates.
- Fixed bug when web-vault is disabled.
- Updated sql-server version check to be simpler thx to @weiznich ( https://github.com/dani-garcia/bitwarden_rs/pull/1548#discussion_r604767196 )
- Use `VACUUM INTO` to create a SQLite backup instead of using the external sqlite3 application.
  - This also removes the need to have the sqlite3 package installed in the final image, so it has been removed.
- Updated backup filename to also have the current time.
- Add a specific bitwarden_rs web-vault version check (to match letter-patched versions).
  This will work once https://github.com/dani-garcia/bw_web_builds/pull/33 is built (but it still works without it).
2021-04-05 15:09:16 +02:00
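A minimal sketch of the `VACUUM INTO` approach mentioned above, assuming a diesel `SqliteConnection`; the function name and backup file handling are illustrative (`VACUUM INTO` requires SQLite 3.27 or newer).

```rust
use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

// Back up the live database without shelling out to the external sqlite3 binary.
fn backup_database(conn: &SqliteConnection, backup_file: &str) -> QueryResult<usize> {
    diesel::sql_query(format!("VACUUM INTO '{}'", backup_file)).execute(conn)
}
```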
BlackDex
1d0eaac260 Updated icon fetching.
- Added image type checking, and prevent downloading non-images.
  We didn't check this before, which could allow someone to download an
  arbitrary file.
- This also prevents SVG images from being used; while they work on the
  web-vault and desktop client, they don't on the mobile versions.
- Because of this image type checking we can return a valid file type
  instead of only 'x-icon' (which is still used as a fallback).
- Prevent rel values containing `icon-mask`, as these are not valid favicons.
2021-04-03 22:51:44 +02:00
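A minimal sketch of the kind of image type check described above (helper name and accepted formats are illustrative): inspect the magic bytes of the downloaded body and reject anything that isn't a known raster image, which also excludes SVG.

```rust
// Returns a MIME subtype for known raster formats, or None for anything else
// (SVG, HTML error pages, arbitrary files), in which case the icon is skipped.
fn detect_image_type(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        [0x89, b'P', b'N', b'G', ..] => Some("png"),
        [0xFF, 0xD8, 0xFF, ..] => Some("jpeg"),
        [b'G', b'I', b'F', b'8', ..] => Some("gif"),
        [0x00, 0x00, 0x01, 0x00, ..] => Some("x-icon"),
        _ => None,
    }
}
```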
Jeremy Lin
3565bfc939 Sync global_domains.json to bitwarden/server@261916d (Stack Exchange) 2021-04-01 21:59:06 -07:00
Mathijs van Veluw
a82c04910f Merge pull request #1575 from RealOrangeOne/linguist-vendored
Just ignore scripts
2021-04-01 21:57:23 +02:00
Jake Howard
233f03ca2b Just ignore scripts
No other files in `src/static` are vendored external scripts, so just ignore these.

This also fixes the glob, which previously wasn't matching anything
2021-04-01 20:44:58 +01:00
Jake Howard
93c881a7a9 Reflow some lines manually 2021-03-31 21:45:05 +01:00
Jake Howard
0af3956abd Run cargo fmt on codebase 2021-03-31 21:18:35 +01:00
Jake Howard
15feff3e79 Add fmt to CI 2021-03-31 21:16:57 +01:00
Daniel García
5c5700caa7 Merge pull request #1565 from BlackDex/misc-updates
Misc changes.
2021-03-30 23:31:58 +02:00
Daniel García
3bddc176d6 Updated sponsors 2021-03-30 23:27:55 +02:00
BlackDex
9caf4bf383 Misc changes.
Some small changes in general:
- Moved the SQL Version check struct into the function.
- Updated hadolint to 2.0.0
- Fixed hadolint 2.0.0 warnings
- Updated github workflows
- Added .editorconfig for some general shared editor settings.
2021-03-30 21:45:10 +02:00
Daniel García
9b2234fa0e Merge pull request #1556 from mkilchhofer/docs/update_template
fix(env.template): IP_HEADER defaults to X-Real-IP
2021-03-29 23:35:50 +02:00
Daniel García
1f79fdec4e Merge pull request #1552 from BlackDex/misc-fixes
Icon and SMTP Debug fixes.
2021-03-29 23:35:31 +02:00
Marco Kilchhofer
a56f4c97e4 fix(env.template): IP_HEADER defaults to X-Real-IP
This was wrong in commit 88c56de97b.
2021-03-29 11:16:20 +02:00
BlackDex
3a3390963c Icon and SMTP Debug fixes.
- We needed to add a feature to enable SMTP debugging again. See: https://github.com/lettre/lettre/pull/584
- Upstream added the fallback icon again, probably because of caching ;). See: https://github.com/bitwarden/server/pull/1149
- Enabled gzip and brotli compression support in reqwest. Some sites force this, or assume it is supported because of the User-Agent string; this caused some icon fetches to fail.

Fixes #1540
2021-03-29 10:27:58 +02:00
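A minimal sketch of a reqwest client with transparent decompression (the user-agent string and function name are illustrative); it assumes reqwest 0.11 with the "blocking", "gzip" and "brotli" features, which this changeset enables in Cargo.toml.

```rust
// Responses from sites that force gzip/brotli are now decompressed automatically.
fn build_icon_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        .user_agent("Mozilla/5.0 (icon fetcher)")
        .gzip(true)
        .brotli(true)
        .build()
}
```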
Daniel García
fd27759a95 Merge pull request #1546 from RealOrangeOne/clippy-run
Run Clippy
2021-03-28 16:04:09 +02:00
Daniel García
01d8056c73 Merge pull request #1545 from RealOrangeOne/icon-client-cache
Client caching
2021-03-28 16:03:16 +02:00
Jake Howard
81fa33ebb5 Remove unnecessary reference 2021-03-28 10:59:49 +01:00
Jake Howard
e8aa3bc066 Merge branch 'master' into clippy-run 2021-03-28 10:51:25 +01:00
Jake Howard
0bf0125e82 Reverse negation on ordering
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2021-03-28 10:49:29 +01:00
Jake Howard
6209e778e5 Icons should always be cached using full TTL 2021-03-28 10:39:12 +01:00
Daniel García
5323283f98 Merge pull request #1548 from BlackDex/admin-interface
Updated diagnostics page
2021-03-28 01:31:38 +01:00
BlackDex
57e17d0648 Updated diagnostics page
- Added reverse proxy check
- Better definition of internet proxy
- Added SQL Server version detection
2021-03-28 00:10:01 +01:00
Jake Howard
da55d5ec70 Also run actions CI on pull request
`push` only counts for pushes to branches on the repo, not forks
2021-03-27 15:21:00 +00:00
Jake Howard
828a060698 Run clippy on CI 2021-03-27 15:10:00 +00:00
Jake Howard
3e5971b9db Remove unnecessary result return types 2021-03-27 15:07:26 +00:00
Jake Howard
47c2625d38 Prevent clippy complaining at method
It's not incorrectly wrapped. We care about the return type being `Option`.
2021-03-27 14:36:50 +00:00
Jake Howard
49af9cf4f5 Correctly camelCase acronyms
https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
2021-03-27 14:26:32 +00:00
Jake Howard
6b1daeba05 Implement From over Into
https://rust-lang.github.io/rust-clippy/master/index.html#from_over_into
2021-03-27 14:19:57 +00:00
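A small self-contained example of what the `from_over_into` lint asks for (the `UserId` newtype is illustrative, not a type from the codebase): implement `From` and the matching `Into` comes for free via the standard library's blanket impl.

```rust
struct UserId(String);

impl From<String> for UserId {
    fn from(s: String) -> Self {
        UserId(s)
    }
}

fn demo() {
    // Usable either way; `Into<UserId> for String` is provided automatically.
    let _a = UserId::from(String::from("a1b2c3"));
    let _b: UserId = String::from("a1b2c3").into();
}
```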
Jake Howard
9f1240d8d9 Only construct JSON object if it's useful 2021-03-27 14:03:46 +00:00
Jake Howard
a8138be69b Use if let more 2021-03-27 14:03:31 +00:00
Jake Howard
ea57dc3bc9 Use matches macro 2021-03-27 14:03:07 +00:00
Jake Howard
131348a49f Add immutable caching for vault assets
The URLs are cachebusted, so updates will still be applied cleanly and immediately
2021-03-27 13:37:56 +00:00
Jake Howard
b22564cb00 Cache icons on the client
This should make the vault pages load much faster, and massively reduce the number of requests.
2021-03-27 13:30:40 +00:00
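A minimal sketch of the two cache policies described in these two commits (header values and function names are illustrative): cache-busted vault assets never change at a given URL, so they can be marked `immutable`, while icons use the configured TTL.

```rust
// Cache-Control values for the two cases; wiring them into the responders is not shown.
fn static_asset_cache_control() -> &'static str {
    "public, max-age=31536000, immutable"
}

fn icon_cache_control(ttl_seconds: u64) -> String {
    format!("public, max-age={}", ttl_seconds)
}
```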
Daniel García
16eb0a56f9 Exclude vendored scripts from Github language statistics 2021-03-25 21:39:34 +01:00
Daniel García
3e4ff47a38 Update dependencies, particularly openssl to 1.1.1k 2021-03-25 20:05:20 +01:00
Daniel García
8ea01a67f6 Merge pull request #1529 from mprasil/more-generic-send-error-messages
Return generic message when Send not available
2021-03-25 19:56:24 +01:00
Miro Prasil
aa5cc642e1 Use constant for the "inaccessible" error message 2021-03-25 11:40:32 +00:00
Daniel García
a121cb6f00 Merge pull request #1530 from jjlin/global-domains
Sync global_domains.json
2021-03-23 23:48:20 +01:00
Daniel García
60164182ae Fix alpine armv7 build
Reference: https://github.com/messense/rust-musl-cross/pull/34
2021-03-23 23:47:12 +01:00
Jeremy Lin
f842a80cdb Sync global_domains.json to bitwarden/server@455e4b2 (ProtonMail/ProtonVPN) 2021-03-23 11:30:00 -07:00
Miro Prasil
4b6a574ee0 Return generic message when Send not available
This should help avoid leaking information about the (non)existence of a Send
and be more in line with what the official server returns.
2021-03-23 13:39:09 +00:00
Daniel García
f9ebb780f9 Update dependencies 2021-03-22 20:00:57 +01:00
Daniel García
1fc6c30652 Send deletion thread and updated users revision 2021-03-22 19:57:35 +01:00
Daniel García
46a1a013cd Update user revision date with sends 2021-03-22 19:05:15 +01:00
Daniel García
551810c486 Fix updating file send 2021-03-17 19:39:48 +01:00
Daniel García
b987ba506d Merge pull request #1493 from jjlin/send
Add support for the Disable Send policy
2021-03-16 18:13:55 +01:00
Daniel García
84810f2bb2 Remove unnecessary fields from send access 2021-03-16 18:11:25 +01:00
Jeremy Lin
424d666a50 Add support for the Disable Send policy
Upstream refs:

* https://github.com/bitwarden/server/pull/1130
* https://bitwarden.com/help/article/policies/#disable-send
2021-03-16 02:07:45 -07:00
Daniel García
a71359f647 Merge pull request #1469 from jjlin/cors
CORS fixes
2021-03-15 16:57:00 +01:00
Daniel García
d93c344176 Merge branch 'master' into cors 2021-03-15 16:49:12 +01:00
Daniel García
b9c3213b90 Merge pull request #1487 from jjlin/send
Send access check fixes
2021-03-15 16:47:14 +01:00
Daniel García
95e24ffc51 rename send key -> akey 2021-03-15 16:42:20 +01:00
Jeremy Lin
00d56d7295 Send access check fixes
Adjust checks for max access count, expiration date, and deletion date.
The date checks aren't that important, but the access count check
currently allows one more access than it should.
2021-03-14 23:20:49 -07:00
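A minimal sketch of the access-count adjustment described above (field and function names are illustrative, not the actual code): with `>` a Send would be served once more after reaching its limit, whereas `>=` blocks it as soon as the maximum is hit.

```rust
fn send_access_exhausted(access_count: i32, max_access_count: Option<i32>) -> bool {
    match max_access_count {
        // Deny as soon as the counter reaches the limit, not one access later.
        Some(max) => access_count >= max,
        None => false,
    }
}
```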
Daniel García
7436b454db Update web vault to 2.19.0 2021-03-14 23:36:49 +01:00
Daniel García
8da5b99482 Send API 2021-03-14 23:35:55 +01:00
Daniel García
2969e87b52 Add separate host-only FromRequest handler 2021-03-14 23:24:47 +01:00
Daniel García
ce62e898c3 Remove debug impl from database structs
This is only implemented for the database specific structs, which is not what we want
2021-03-13 22:04:04 +01:00
Daniel García
431462d839 Update dependencies and enable serde integration for chrono 2021-03-13 22:02:11 +01:00
Jeremy Lin
7d0e234b34 CORS fixes
* The Safari extension apparently now uses the origin `file://` and expects
  that to be returned (see bitwarden/browser#1311, bitwarden/server#800).

* The `Access-Control-Allow-Origin` header was reflecting the value of the
  `Origin` header without checking whether the origin was actually allowed.
  This effectively allows any origin to interact with the server, which
  defeats the purpose of CORS.
2021-03-07 00:35:08 -08:00
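A minimal sketch of an origin allow-list check (the allowed values are illustrative; `file://` comes from the Safari case above): only echo the `Origin` back in `Access-Control-Allow-Origin` when it is actually permitted, instead of reflecting whatever the client sent.

```rust
const ALLOWED_ORIGINS: &[&str] = &["file://", "https://vault.example.com"];

// Returns the value to put in Access-Control-Allow-Origin, or None to omit the header.
fn allow_origin(origin: &str) -> Option<&str> {
    if ALLOWED_ORIGINS.contains(&origin) {
        Some(origin)
    } else {
        None
    }
}
```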
Daniel García
dad1b1bee9 Updated dependencies 2021-03-06 22:04:01 +01:00
Daniel García
9312cebee3 Merge pull request #1463 from std2main/patch-2
Add a dot in find command.
2021-03-06 00:04:12 +01:00
std2main
cdf5b6ec2d Add a dot in find command.
Add a dot indicating the current directory as the path for find to search.

find on macOS won't work without the dot
2021-03-05 15:49:45 -05:00
Mathijs van Veluw
ce99fc8f95 Merge pull request #1460 from jjlin/invitation-org-name
Fix custom org name in invitation confirmation email
2021-03-04 08:15:34 +01:00
Jeremy Lin
a75d050001 Fix custom org name in invitation confirmation email
The org name in the invitation email was made customizable in 8867626, but
the org name is still hardcoded as "bitwarden_rs" in the confirmation email.
2021-03-03 23:03:55 -08:00
Daniel García
75cfd10f11 Merge pull request #1444 from jjlin/remove-md5
Remove `md5.js` dependency
2021-02-28 18:23:27 +01:00
Daniel García
9859ba6339 Merge pull request #1443 from jjlin/data-folder
Check for data folder on startup
2021-02-28 18:22:46 +01:00
Jeremy Lin
513056f711 Check for data folder on startup
Currently, when starting up for the first time (running standalone, outside
of Docker), bitwarden_rs panics when the `openssl` tool isn't able to create
`data/rsa_key.pem` due to the `data` dir not existing. Instead, print a more
helpful error message telling the user to create the directory.
2021-02-28 01:45:05 -08:00
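A minimal sketch of such a startup check (message wording and function name are illustrative): verify the data folder exists before key generation and exit with a helpful error instead of panicking.

```rust
use std::path::Path;
use std::process::exit;

fn check_data_folder(data_folder: &str) {
    if !Path::new(data_folder).is_dir() {
        eprintln!("Data folder '{}' doesn't exist. Please create it before starting.", data_folder);
        exit(1);
    }
}
```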
Mathijs van Veluw
ebe334fcc7 Merge pull request #1447 from jjlin/issue-templates
Allow only bug report issues
2021-02-28 08:32:04 +01:00
Jeremy Lin
0eec12472e Allow only bug report issues
Remove templates for other issue types, directing them to the forum instead.
2021-02-27 22:13:51 -08:00
Jeremy Lin
39106d440a Remove md5.js dependency
Switch to the built-in WebCrypto APIs for computing identicon hashes.
2021-02-26 21:48:01 -08:00
Daniel García
9117095764 Update dependencies and web vault 2021-02-24 20:30:19 +01:00
Daniel García
099bba950c Merge pull request #1432 from jjlin/2fa
Change `twofactorauth.org` to `2fa.directory`
2021-02-24 20:05:57 +01:00
Jeremy Lin
e37ff60617 Change twofactorauth.org to 2fa.directory
The `twofactorauth.org` domain has apparently been sold to some company for
marketing purposes.
2021-02-23 18:51:07 -08:00
Daniel García
5b14608041 Update web vault to have better error messages when not using HTTPS 2021-02-20 19:13:20 +01:00
Daniel García
ad92692bab Merge pull request #1413 from paolobarbolini/email-clones
Remove unnecessary allocations
2021-02-20 17:58:12 +01:00
Paolo Barbolini
d956d42903 Remove unnecessary allocations 2021-02-19 20:17:18 +01:00
Daniel García
d69be7d03a Merge pull request #1389 from jjlin/alpine
Update Alpine base images to 3.13
2021-02-15 20:58:13 +01:00
Jeremy Lin
f82de8d00d Update Alpine base images to 3.13 2021-02-14 15:18:47 -08:00
Daniel García
c836f88ff2 Remove soup and use a newer html5ever directly 2021-02-07 22:28:02 +01:00
Daniel García
8b660ae090 Swap structopt for a simpler alternative 2021-02-07 20:10:40 +01:00
114 changed files with 2855 additions and 2078 deletions

View File

@@ -4,6 +4,8 @@ target
# Data folder # Data folder
data data
.env .env
.env.template
.gitattributes
# IDE files # IDE files
.vscode .vscode

23
.editorconfig Normal file
View File

@@ -0,0 +1,23 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
[*]
end_of_line = lf
charset = utf-8
[*.{rs,py}]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
[*.{yml,yaml}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
[Makefile]
indent_style = tab

View File

@@ -1,4 +1,4 @@
## Bitwarden_RS Configuration File ## Vaultwarden Configuration File
## Uncomment any of the following lines to change the defaults ## Uncomment any of the following lines to change the defaults
## ##
## Be aware that most of these settings will be overridden if they were changed ## Be aware that most of these settings will be overridden if they were changed
@@ -28,6 +28,7 @@
# RSA_KEY_FILENAME=data/rsa_key # RSA_KEY_FILENAME=data/rsa_key
# ICON_CACHE_FOLDER=data/icon_cache # ICON_CACHE_FOLDER=data/icon_cache
# ATTACHMENTS_FOLDER=data/attachments # ATTACHMENTS_FOLDER=data/attachments
# SENDS_FOLDER=data/sends
## Templates data folder, by default uses embedded templates ## Templates data folder, by default uses embedded templates
## Check source code to see the format ## Check source code to see the format
@@ -35,9 +36,9 @@
## Automatically reload the templates for every request, slow, use only for development ## Automatically reload the templates for every request, slow, use only for development
# RELOAD_TEMPLATES=false # RELOAD_TEMPLATES=false
## Client IP Header, used to identify the IP of the client, defaults to "X-Client-IP" ## Client IP Header, used to identify the IP of the client, defaults to "X-Real-IP"
## Set to the string "none" (without quotes), to disable any headers and just use the remote IP ## Set to the string "none" (without quotes), to disable any headers and just use the remote IP
# IP_HEADER=X-Client-IP # IP_HEADER=X-Real-IP
## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever") ## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever")
# ICON_CACHE_TTL=2592000 # ICON_CACHE_TTL=2592000
@@ -55,6 +56,23 @@
# WEBSOCKET_ADDRESS=0.0.0.0 # WEBSOCKET_ADDRESS=0.0.0.0
# WEBSOCKET_PORT=3012 # WEBSOCKET_PORT=3012
## Job scheduler settings
##
## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
## and are always in terms of UTC time (regardless of your local time zone settings).
##
## How often (in ms) the job scheduler thread checks for jobs that need running.
## Set to 0 to globally disable scheduled jobs.
# JOB_POLL_INTERVAL_MS=30000
##
## Cron schedule of the job that checks for Sends past their deletion date.
## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
# SEND_PURGE_SCHEDULE="0 5 * * * *"
##
## Cron schedule of the job that checks for trashed items to delete permanently.
## Defaults to daily (5 minutes after midnight). Set blank to disable this job.
# TRASH_PURGE_SCHEDULE="0 5 0 * * *"
## Enable extended logging, which shows timestamps and targets in the logs ## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true # EXTENDED_LOGGING=true
@@ -81,7 +99,7 @@
## Enable WAL for the DB ## Enable WAL for the DB
## Set to false to avoid enabling WAL during startup. ## Set to false to avoid enabling WAL during startup.
## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB, ## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
## this setting only prevents bitwarden_rs from automatically enabling it on start. ## this setting only prevents vaultwarden from automatically enabling it on start.
## Please read project wiki page about this setting first before changing the value as it can ## Please read project wiki page about this setting first before changing the value as it can
## cause performance degradation or might render the service unable to start. ## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true # ENABLE_DB_WAL=true
@@ -169,7 +187,7 @@
## Invitations org admins to invite users, even when signups are disabled ## Invitations org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true # INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization ## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Bitwarden_RS # INVITATION_ORG_NAME=Vaultwarden
## Per-organization attachment limit (KB) ## Per-organization attachment limit (KB)
## Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more ## Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
@@ -241,8 +259,8 @@
## To make sure the email links are pointing to the correct host, set the DOMAIN variable. ## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory ## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory
# SMTP_HOST=smtp.domain.tld # SMTP_HOST=smtp.domain.tld
# SMTP_FROM=bitwarden-rs@domain.tld # SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Bitwarden_RS # SMTP_FROM_NAME=Vaultwarden
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS. # SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default. # SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default.
# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here. # SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here.

3
.gitattributes vendored Normal file
View File

@@ -0,0 +1,3 @@
# Ignore vendored scripts in GitHub stats
src/static/scripts/* linguist-vendored

View File

@@ -1,6 +1,6 @@
--- ---
name: Bug report name: Bug report
about: Create a report to help us improve about: Use this ONLY for bugs in vaultwarden itself. Use the Discourse forum (link below) to request features or get help with usage/configuration. If in doubt, use the forum.
title: '' title: ''
labels: '' labels: ''
assignees: '' assignees: ''
@@ -8,44 +8,59 @@ assignees: ''
--- ---
<!-- <!--
# ### # ###
NOTE: Please update to the latest version of bitwarden_rs before reporting an issue! NOTE: Please update to the latest version of vaultwarden before reporting an issue!
This saves you and us a lot of time and troubleshooting. This saves you and us a lot of time and troubleshooting.
See: https://github.com/dani-garcia/bitwarden_rs/issues/1180 See:
* https://github.com/dani-garcia/vaultwarden/issues/1180
* https://github.com/dani-garcia/vaultwarden/wiki/Updating-the-vaultwarden-image
# ### # ###
--> -->
<!-- <!--
Please fill out the following template to make solving your problem easier and faster for us. Please fill out the following template to make solving your problem easier and faster for us.
This is only a guideline. If you think that parts are unnecessary for your issue, feel free to remove them. This is only a guideline. If you think that parts are unnecessary for your issue, feel free to remove them.
Remember to hide/obfuscate personal and confidential information, Remember to hide/redact personal or confidential information,
such as names, global IP/DNS addresses and especially passwords, if necessary. such as passwords, IP addresses, and DNS names as appropriate.
--> -->
### Subject of the issue ### Subject of the issue
<!-- Describe your issue here.--> <!-- Describe your issue here. -->
### Your environment ### Deployment environment
<!-- The version number, obtained from the logs or the admin diagnostics page -->
<!-- Remember to check your issue on the latest version first! --> <!--
* Bitwarden_rs version: =========================================================================================
<!-- How the server was installed: Docker image / package / built from source --> Preferably, use the `Generate Support String` button on the admin page's Diagnostics tab.
That will auto-generate most of the info requested in this section.
=========================================================================================
-->
<!-- The version number, obtained from the logs (at startup) or the admin diagnostics page -->
<!-- This is NOT the version number shown on the web vault, which is versioned separately from vaultwarden -->
<!-- Remember to check if your issue exists on the latest version first! -->
* vaultwarden version:
<!-- How the server was installed: Docker image, OS package, built from source, etc. -->
* Install method: * Install method:
* Clients used: <!-- if applicable -->
* Clients used: <!-- web vault, desktop, Android, iOS, etc. (if applicable) -->
* Reverse proxy and version: <!-- if applicable --> * Reverse proxy and version: <!-- if applicable -->
* Version of mysql/postgresql: <!-- if applicable -->
* Other relevant information: * MySQL/MariaDB or PostgreSQL version: <!-- if applicable -->
* Other relevant details:
### Steps to reproduce ### Steps to reproduce
<!-- Tell us how to reproduce this issue. What parameters did you set (differently from the defaults) <!-- Tell us how to reproduce this issue. What parameters did you set (differently from the defaults)
and how did you start bitwarden_rs? --> and how did you start vaultwarden? -->
### Expected behaviour ### Expected behaviour
<!-- Tell us what should happen --> <!-- Tell us what you expected to happen -->
### Actual behaviour ### Actual behaviour
<!-- Tell us what happens instead --> <!-- Tell us what actually happened -->
### Relevant logs ### Troubleshooting data
<!-- Share some logfiles, screenshots or output of relevant programs with us. --> <!-- Share any log files, screenshots, or other relevant troubleshooting data -->

8
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Discourse forum for bitwarden_rs
url: https://bitwardenrs.discourse.group/
about: Use this forum to request features or get help with usage/configuration.
- name: GitHub Discussions for vaultwarden
url: https://github.com/dani-garcia/vaultwarden/discussions
about: An alternative to the Discourse forum, if this is easier for you.

View File

@@ -1,11 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your feature requests to the forum
Link: https://bitwardenrs.discourse.group/c/feature-requests

View File

@@ -1,11 +0,0 @@
---
name: Help with installation/configuration
about: Any questions about the setup of bitwarden_rs
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your third party help requests to the forum
Link: https://bitwardenrs.discourse.group/c/help

View File

@@ -1,11 +0,0 @@
---
name: Help with proxy/database/NAS setup
about: Any questions about third party software
title: ''
labels: better for forum
assignees: ''
---
# Please submit all your third party help requests to the forum
Link: https://bitwardenrs.discourse.group/c/third-party-help

View File

@@ -2,14 +2,21 @@ name: Build
on: on:
push: push:
pull_request:
# Ignore when there are only changes done too one of these paths # Ignore when there are only changes done too one of these paths
paths-ignore: paths-ignore:
- "**.md" - "**.md"
- "**.txt" - "**.txt"
- ".dockerignore"
- ".env.template"
- ".gitattributes"
- ".gitignore"
- "azure-pipelines.yml" - "azure-pipelines.yml"
- "docker/**" - "docker/**"
- "hooks/**" - "hooks/**"
- "tools/**" - "tools/**"
- ".github/FUNDING.yml"
- ".github/ISSUE_TEMPLATE/**"
jobs: jobs:
build: build:
@@ -82,10 +89,11 @@ jobs:
with: with:
profile: minimal profile: minimal
target: ${{ matrix.target-triple }} target: ${{ matrix.target-triple }}
components: clippy, rustfmt
# End Uses the rust-toolchain file to determine version # End Uses the rust-toolchain file to determine version
# Run cargo tests (In release mode to speed up cargo build afterwards) # Run cargo tests (In release mode to speed up future builds)
- name: '`cargo test --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`' - name: '`cargo test --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
uses: actions-rs/cargo@v1 uses: actions-rs/cargo@v1
with: with:
@@ -94,6 +102,24 @@ jobs:
# End Run cargo tests # End Run cargo tests
# Run cargo clippy (In release mode to speed up future builds)
- name: '`cargo clippy --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
uses: actions-rs/cargo@v1
with:
command: clippy
args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
# End Run cargo clippy
# Run cargo fmt
- name: '`cargo fmt`'
uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
# End Run cargo fmt
# Build the binary # Build the binary
- name: '`cargo build --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`' - name: '`cargo build --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
uses: actions-rs/cargo@v1 uses: actions-rs/cargo@v1
@@ -107,8 +133,8 @@ jobs:
- name: Upload artifact - name: Upload artifact
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v2
with: with:
name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }} name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }} path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
# End Upload artifact to Github Actions # End Upload artifact to Github Actions
@@ -119,7 +145,7 @@ jobs:
# uses: Shopify/upload-to-release@1 # uses: Shopify/upload-to-release@1
# if: startsWith(github.ref, 'refs/tags/') # if: startsWith(github.ref, 'refs/tags/')
# with: # with:
# name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }} # name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
# path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }} # path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
# repo-token: ${{ secrets.GITHUB_TOKEN }} # repo-token: ${{ secrets.GITHUB_TOKEN }}
# End Upload to github actions release # End Upload to github actions release

View File

@@ -1,6 +1,7 @@
name: Hadolint name: Hadolint
on: on:
push:
pull_request: pull_request:
# Ignore when there are only changes done too one of these paths # Ignore when there are only changes done too one of these paths
paths: paths:
@@ -24,7 +25,7 @@ jobs:
sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \ sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
sudo chmod +x /usr/local/bin/hadolint sudo chmod +x /usr/local/bin/hadolint
env: env:
HADOLINT_VERSION: 1.19.0 HADOLINT_VERSION: 2.0.0
# End Download hadolint # End Download hadolint
# Test Dockerfiles # Test Dockerfiles

1103
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -1,10 +1,10 @@
[package] [package]
name = "bitwarden_rs" name = "vaultwarden"
version = "1.0.0" version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"] authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2018" edition = "2018"
repository = "https://github.com/dani-garcia/bitwarden_rs" repository = "https://github.com/dani-garcia/vaultwarden"
readme = "README.md" readme = "README.md"
license = "GPL-3.0-only" license = "GPL-3.0-only"
publish = false publish = false
@@ -32,7 +32,7 @@ rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "0.5.0-dev" rocket_contrib = "0.5.0-dev"
# HTTP client # HTTP client
reqwest = { version = "0.11.0", features = ["blocking", "json"] } reqwest = { version = "0.11.3", features = ["blocking", "json", "gzip", "brotli", "socks"] }
# multipart/form-data support # multipart/form-data support
multipart = { version = "0.17.1", features = ["server"], default-features = false } multipart = { version = "0.17.1", features = ["server"], default-features = false }
@@ -47,19 +47,19 @@ rmpv = "0.4.7"
chashmap = "2.2.2" chashmap = "2.2.2"
# A generic serialization/deserialization framework # A generic serialization/deserialization framework
serde = { version = "1.0.123", features = ["derive"] } serde = { version = "1.0.125", features = ["derive"] }
serde_json = "1.0.62" serde_json = "1.0.64"
# Logging # Logging
log = "0.4.14" log = "0.4.14"
fern = { version = "0.6.0", features = ["syslog-4"] } fern = { version = "0.6.0", features = ["syslog-4"] }
# A safe, extensible ORM and Query builder # A safe, extensible ORM and Query builder
diesel = { version = "1.4.5", features = [ "chrono", "r2d2"] } diesel = { version = "1.4.6", features = [ "chrono", "r2d2"] }
diesel_migrations = "1.4.0" diesel_migrations = "1.4.0"
# Bundled SQLite # Bundled SQLite
libsqlite3-sys = { version = "0.18.0", features = ["bundled"], optional = true } libsqlite3-sys = { version = "0.20.1", features = ["bundled"], optional = true }
# Crypto-related libraries # Crypto-related libraries
rand = "0.8.3" rand = "0.8.3"
@@ -69,9 +69,12 @@ ring = "0.16.20"
uuid = { version = "0.8.2", features = ["v4"] } uuid = { version = "0.8.2", features = ["v4"] }
# Date and time libraries # Date and time libraries
chrono = "0.4.19" chrono = { version = "0.4.19", features = ["serde"] }
chrono-tz = "0.5.3" chrono-tz = "0.5.3"
time = "0.2.25" time = "0.2.26"
# Job scheduler
job_scheduler = "1.2.1"
# TOTP library # TOTP library
oath = "0.10.2" oath = "0.10.2"
@@ -86,46 +89,48 @@ jsonwebtoken = "7.2.0"
u2f = "0.2.0" u2f = "0.2.0"
# Yubico Library # Yubico Library
yubico = { version = "0.9.2", features = ["online-tokio"], default-features = false } yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
# A `dotenv` implementation for Rust # A `dotenv` implementation for Rust
dotenv = { version = "0.15.0", default-features = false } dotenv = { version = "0.15.0", default-features = false }
# Lazy initialization # Lazy initialization
once_cell = "1.5.2" once_cell = "1.7.2"
# Numerical libraries # Numerical libraries
num-traits = "0.2.14" num-traits = "0.2.14"
num-derive = "0.3.3" num-derive = "0.3.3"
# Email libraries # Email libraries
lettre = { version = "0.10.0-alpha.5", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false } tracing = { version = "0.1.25", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
newline-converter = "0.1.0" lettre = { version = "0.10.0-beta.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
newline-converter = "0.2.0"
# Template library # Template library
handlebars = { version = "3.5.2", features = ["dir_source"] } handlebars = { version = "3.5.4", features = ["dir_source"] }
# For favicon extraction from main website # For favicon extraction from main website
soup = "0.5.0" html5ever = "0.25.1"
regex = { version = "1.4.3", features = ["std", "perf"], default-features = false } markup5ever_rcdom = "0.1.0"
regex = { version = "1.4.5", features = ["std", "perf"], default-features = false }
data-url = "0.1.0" data-url = "0.1.0"
# Used by U2F, JWT and Postgres # Used by U2F, JWT and Postgres
openssl = "0.10.32" openssl = "0.10.34"
# URL encoding library # URL encoding library
percent-encoding = "2.1.0" percent-encoding = "2.1.0"
# Punycode conversion # Punycode conversion
idna = "0.2.1" idna = "0.2.2"
# CLI argument parsing # CLI argument parsing
structopt = "0.3.21" pico-args = "0.4.0"
# Logging panics to logfile instead stderr only # Logging panics to logfile instead stderr only
backtrace = "0.3.56" backtrace = "0.3.56"
# Macro ident concatenation # Macro ident concatenation
paste = "1.0.4" paste = "1.0.5"
[patch.crates-io] [patch.crates-io]
# Use newest ring # Use newest ring
@@ -134,3 +139,10 @@ rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e3
# For favicon extraction from main website # For favicon extraction from main website
data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '540ede02d0771824c0c80ff9f57fe8eff38b1291' } data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '540ede02d0771824c0c80ff9f57fe8eff38b1291' }
# The maintainer of the `job_scheduler` crate doesn't seem to have responded
# to any issues or PRs for almost a year (as of April 2021). This hopefully
# temporary fork updates Cargo.toml to use more up-to-date dependencies.
# In particular, `cron` has since implemented parsing of some common syntax
# that wasn't previously supported (https://github.com/zslayton/cron/pull/64).
job_scheduler = { git = 'https://github.com/jjlin/job_scheduler', rev = 'ee023418dbba2bfe1e30a5fd7d937f9e33739806' }

View File

@@ -1,15 +1,14 @@
### This is a Bitwarden server API implementation written in Rust compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal. ### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
--- ---
[![Travis Build Status](https://travis-ci.org/dani-garcia/bitwarden_rs.svg?branch=master)](https://travis-ci.org/dani-garcia/bitwarden_rs) [![Docker Pulls](https://img.shields.io/docker/pulls/bitwardenrs/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Docker Pulls](https://img.shields.io/docker/pulls/bitwardenrs/server.svg)](https://hub.docker.com/r/bitwardenrs/server) [![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/bitwarden_rs/status.svg)](https://deps.rs/repo/github/dani-garcia/bitwarden_rs) [![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/bitwarden_rs.svg)](https://github.com/dani-garcia/bitwarden_rs/releases/latest) [![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/master/LICENSE.txt)
[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/bitwarden_rs.svg)](https://github.com/dani-garcia/bitwarden_rs/blob/master/LICENSE.txt) [![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
[![Matrix Chat](https://img.shields.io/matrix/bitwarden_rs:matrix.org.svg?logo=matrix)](https://matrix.to/#/#bitwarden_rs:matrix.org)
Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/bitwarden_rs). Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden).
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.** **This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor 8bit Solutions LLC.**
@@ -33,29 +32,57 @@ Basically full implementation of Bitwarden API is provided including:
Pull the docker image and mount a volume from the host for persistent storage: Pull the docker image and mount a volume from the host for persistent storage:
```sh ```sh
docker pull bitwardenrs/server:latest docker pull vaultwarden/server:latest
docker run -d --name bitwarden -v /bw-data/:/data/ -p 80:80 bitwardenrs/server:latest docker run -d --name vaultwarden -v /vw-data/:/data/ -p 80:80 vaultwarden/server:latest
``` ```
This will preserve any persistent data under /bw-data/, you can adapt the path to whatever suits you. This will preserve any persistent data under /bw-data/, you can adapt the path to whatever suits you.
**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS. **IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS.
This can be configured in [bitwarden_rs directly](https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/bitwarden_rs/wiki/Proxy-examples)). This can be configured in [vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy (see examples linked above). If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy (see examples linked above).
## Usage ## Usage
See the [bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki) for more information on how to configure and run the bitwarden_rs server. See the [vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki) for more information on how to configure and run the vaultwarden server.
## Get in touch ## Get in touch
To ask a question, offer suggestions or new features or to get help configuring or installing the software, please [use the forum](https://bitwardenrs.discourse.group/). To ask a question, offer suggestions or new features or to get help configuring or installing the software, please [use the forum](https://vaultwarden.discourse.group/).
If you spot any bugs or crashes with bitwarden_rs itself, please [create an issue](https://github.com/dani-garcia/bitwarden_rs/issues/). Make sure there aren't any similar issues open, though! If you spot any bugs or crashes with vaultwarden itself, please [create an issue](https://github.com/dani-garcia/vaultwarden/issues/). Make sure there aren't any similar issues open, though!
If you prefer to chat, we're usually hanging around at [#bitwarden_rs:matrix.org](https://matrix.to/#/#bitwarden_rs:matrix.org) room on Matrix. Feel free to join us! If you prefer to chat, we're usually hanging around at [#vaultwarden:matrix.org](https://matrix.to/#/#vaultwarden:matrix.org) room on Matrix. Feel free to join us!
### Sponsors ### Sponsors
Thanks for your contribution to the project! Thanks for your contribution to the project!
- [@ChonoN](https://github.com/ChonoN) <table>
- [@themightychris](https://github.com/themightychris) <tr>
<td align="center">
<a href="https://github.com/netdadaltd">
<img src="https://avatars.githubusercontent.com/u/77323954?s=75&v=4" width="75px;" alt="netdadaltd"/>
<br />
<sub><b>netDada Ltd.</b></sub>
</a>
</td>
</tr>
</table>
<br/>
<table>
<tr>
<td align="center">
<a href="https://github.com/ChonoN" style="width: 75px">
<sub><b>ChonoN</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/themightychris">
<sub><b>themightychris</b></sub>
</a>
</td>
</tr>
</table>

View File

@@ -1,5 +1,5 @@
use std::process::Command;
use std::env; use std::env;
use std::process::Command;
fn main() { fn main() {
// This allow using #[cfg(sqlite)] instead of #[cfg(feature = "sqlite")], which helps when trying to add them through macros // This allow using #[cfg(sqlite)] instead of #[cfg(feature = "sqlite")], which helps when trying to add them through macros
@@ -11,7 +11,9 @@ fn main() {
println!("cargo:rustc-cfg=postgresql"); println!("cargo:rustc-cfg=postgresql");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))] #[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!("You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"); compile_error!(
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
if let Ok(version) = env::var("BWRS_VERSION") { if let Ok(version) = env::var("BWRS_VERSION") {
println!("cargo:rustc-env=BWRS_VERSION={}", version); println!("cargo:rustc-env=BWRS_VERSION={}", version);
@@ -56,7 +58,7 @@ fn read_git_info() -> Result<(), std::io::Error> {
// Combined version // Combined version
let version = if let Some(exact) = exact_tag { let version = if let Some(exact) = exact_tag {
exact exact
} else if &branch != "master" { } else if &branch != "main" && &branch != "master" {
format!("{}-{} ({})", last_tag, rev_short, branch) format!("{}-{} ({})", last_tag, rev_short, branch)
} else { } else {
format!("{}-{}", last_tag, rev_short) format!("{}-{}", last_tag, rev_short)

View File

@@ -1,7 +1,7 @@
# The cross-built images have the build arch (`amd64`) embedded in the image # The cross-built images have the build arch (`amd64`) embedded in the image
# manifest, rather than the target arch. For example: # manifest, rather than the target arch. For example:
# #
# $ docker inspect bitwardenrs/server:latest-armv7 | jq -r '.[]|.Architecture' # $ docker inspect vaultwarden/server:latest-armv7 | jq -r '.[]|.Architecture'
# amd64 # amd64
# #
# Recent versions of Docker have started printing a warning when the image's # Recent versions of Docker have started printing a warning when the image's

View File

@@ -1,15 +1,15 @@
# This file was generated using a Jinja2 template. # This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles. # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.48" %} {% set build_stage_base_image = "rust:1.51" %}
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
{% if "amd64" in target_file %} {% if "amd64" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2021-01-25" %} {% set build_stage_base_image = "clux/muslrust:nightly-2021-04-14" %}
{% set runtime_stage_base_image = "alpine:3.12" %} {% set runtime_stage_base_image = "alpine:3.13" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %} {% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "armv7" in target_file %} {% elif "armv7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %} {% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.12" %} {% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.13" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %} {% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% endif %} {% endif %}
{% elif "amd64" in target_file %} {% elif "amd64" in target_file %}
@@ -44,26 +44,26 @@
# https://docs.docker.com/develop/develop-images/multistage-build/ # https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/ # https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE ####################### ####################### VAULT BUILD IMAGE #######################
{% set vault_version = "2.18.1b" %} {% set vault_version = "2.19.0d" %}
{% set vault_image_digest = "sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb" %} {% set vault_image_digest = "sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub. # The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security, # Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later # as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image. # be changed to point to a malicious image.
# #
# To verify the current digest for a given tag name: # To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags, # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to. # click the tag name to view the digest of the image it currently points to.
# - From the command line: # - From the command line:
# $ docker pull bitwardenrs/web-vault:v{{ vault_version }} # $ docker pull vaultwarden/web-vault:v{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" bitwardenrs/web-vault:v{{ vault_version }} # $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" vaultwarden/web-vault:v{{ vault_version }}
# [bitwardenrs/web-vault@{{ vault_image_digest }}] # [vaultwarden/web-vault@{{ vault_image_digest }}]
# #
# - Conversely, to get the tag name from the digest: # - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" bitwardenrs/web-vault@{{ vault_image_digest }} # $ docker image inspect --format "{{ '{{' }}.RepoTags}}" vaultwarden/web-vault@{{ vault_image_digest }}
# [bitwardenrs/web-vault:v{{ vault_version }}] # [vaultwarden/web-vault:v{{ vault_version }}]
# #
FROM bitwardenrs/web-vault@{{ vault_image_digest }} as vault FROM vaultwarden/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ########################## ########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build FROM {{ build_stage_base_image }} as build
@@ -93,6 +93,9 @@ RUN rustup set profile minimal
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
ENV USER "root" ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s' ENV RUSTFLAGS='-C link-arg=-s'
{% if "armv7" in target_file %}
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
{% endif %}
{% elif "arm" in target_file %} {% elif "arm" in target_file %}
# Install required build libs for {{ package_arch_name }} architecture. # Install required build libs for {{ package_arch_name }} architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
@@ -186,7 +189,7 @@ RUN touch src/main.rs
RUN cargo build --features ${DB} --release{{ package_arch_target_param }} RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %} {% if "alpine" in target_file %}
{% if "armv7" in target_file %} {% if "armv7" in target_file %}
RUN musl-strip target/{{ package_arch_target }}/release/bitwarden_rs RUN musl-strip target/{{ package_arch_target }}/release/vaultwarden
{% endif %} {% endif %}
{% endif %} {% endif %}
@@ -212,9 +215,6 @@ RUN apk add --no-cache \
openssl \ openssl \
curl \ curl \
dumb-init \ dumb-init \
{% if "sqlite" in features %}
sqlite \
{% endif %}
{% if "mysql" in features %} {% if "mysql" in features %}
mariadb-connector-c \ mariadb-connector-c \
{% endif %} {% endif %}
@@ -229,7 +229,6 @@ RUN apt-get update && apt-get install -y \
ca-certificates \ ca-certificates \
curl \ curl \
dumb-init \ dumb-init \
sqlite3 \
libmariadb-dev-compat \ libmariadb-dev-compat \
libpq5 \ libpq5 \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
@@ -247,12 +246,13 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault) # Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage # and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml . COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault COPY --from=vault /web-vault ./web-vault
{% if package_arch_target is defined %} {% if package_arch_target is defined %}
COPY --from=build /app/target/{{ package_arch_target }}/release/bitwarden_rs . COPY --from=build /app/target/{{ package_arch_target }}/release/vaultwarden .
{% else %} {% else %}
COPY --from=build /app/target/release/bitwarden_rs . COPY --from=build /app/target/release/vaultwarden .
{% endif %} {% endif %}
COPY docker/healthcheck.sh /healthcheck.sh COPY docker/healthcheck.sh /healthcheck.sh
@@ -261,6 +261,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"] HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup! # Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"] ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"] CMD ["/start.sh"]

View File

@@ -1,4 +1,4 @@
OBJECTS := $(shell find -mindepth 2 -name 'Dockerfile*') OBJECTS := $(shell find ./ -mindepth 2 -name 'Dockerfile*')
all: $(OBJECTS) all: $(OBJECTS)

View File

@@ -11,21 +11,21 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.48 as build
+FROM rust:1.51 as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -86,7 +86,6 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
dumb-init \
-sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
@@ -98,9 +97,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/release/bitwarden_rs .
+COPY --from=build /app/target/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -108,6 +108,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -11,21 +11,21 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
-FROM clux/muslrust:nightly-2021-01-25 as build
+FROM clux/muslrust:nightly-2021-04-14 as build
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
@@ -70,7 +70,7 @@ RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
-FROM alpine:3.12
+FROM alpine:3.13
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
@@ -82,7 +82,6 @@ RUN apk add --no-cache \
openssl \
curl \
dumb-init \
-sqlite \
postgresql-libs \
ca-certificates
@@ -93,9 +92,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
+COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -103,6 +103,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -11,21 +11,21 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.48 as build
+FROM rust:1.51 as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -129,7 +129,6 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
dumb-init \
-sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
@@ -144,9 +143,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
+COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -154,6 +154,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -11,21 +11,21 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.48 as build
+FROM rust:1.51 as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -129,7 +129,6 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
dumb-init \
-sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
@@ -144,9 +143,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
+COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -154,6 +154,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -11,21 +11,21 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
-FROM rust:1.48 as build
+FROM rust:1.51 as build
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
@@ -129,7 +129,6 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
dumb-init \
-sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
@@ -144,9 +143,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
+COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -154,6 +154,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -11,18 +11,18 @@
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
-# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
-# $ docker pull bitwardenrs/web-vault:v2.18.1b
-# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
-# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
+# $ docker pull vaultwarden/web-vault:v2.19.0d
+# $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.19.0d
+# [vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233]
#
# - Conversely, to get the tag name from the digest:
-# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
-# [bitwardenrs/web-vault:v2.18.1b]
+# $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233
+# [vaultwarden/web-vault:v2.19.0d]
#
-FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
+FROM vaultwarden/web-vault@sha256:a7bd6bc4db33bd45f723c4b1ac90918b7f80204560683cfc8efd9efd03a9b233 as vault
########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build
@@ -38,6 +38,7 @@ RUN rustup set profile minimal
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
+ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
@@ -66,12 +67,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
-RUN musl-strip target/armv7-unknown-linux-musleabihf/release/bitwarden_rs
+RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
-FROM balenalib/armv7hf-alpine:3.12
+FROM balenalib/armv7hf-alpine:3.13
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
@@ -85,7 +86,6 @@ RUN apk add --no-cache \
openssl \
curl \
dumb-init \
-sqlite \
ca-certificates
RUN mkdir /data
@@ -98,9 +98,10 @@ EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
+WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
-COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/bitwarden_rs .
+COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@@ -108,6 +109,5 @@ COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
-WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

View File

@@ -1,10 +1,20 @@
#!/bin/sh
-if [ -r /etc/bitwarden_rs.sh ]; then
+if [ -r /etc/vaultwarden.sh ]; then
+. /etc/vaultwarden.sh
+elif [ -r /etc/bitwarden_rs.sh ]; then
+echo "### You are using the old /etc/bitwarden_rs.sh script, please migrate to /etc/vaultwarden.sh ###"
. /etc/bitwarden_rs.sh
fi
-if [ -d /etc/bitwarden_rs.d ]; then
+if [ -d /etc/vaultwarden.d ]; then
+for f in /etc/vaultwarden.d/*.sh; do
+if [ -r $f ]; then
+. $f
+fi
+done
+elif [ -d /etc/bitwarden_rs.d ]; then
+echo "### You are using the old /etc/bitwarden_rs.d script directory, please migrate to /etc/vaultwarden.d ###"
for f in /etc/bitwarden_rs.d/*.sh; do
if [ -r $f ]; then
. $f
@@ -12,4 +22,4 @@ if [ -d /etc/bitwarden_rs.d ]; then
done
fi
-exec /bitwarden_rs "${@}"
+exec /vaultwarden "${@}"

View File

@@ -10,7 +10,7 @@ Docker Hub hooks provide these predefined [environment variables](https://docs.d
* `DOCKER_TAG`: the Docker repository tag being built.
* `IMAGE_NAME`: the name and tag of the Docker repository being built. (This variable is a combination of `DOCKER_REPO:DOCKER_TAG`.)
-The current multi-arch image build relies on the original bitwarden_rs Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/distro combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
+The current multi-arch image build relies on the original vaultwarden Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/distro combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
## References

View File

@@ -22,7 +22,7 @@ fi
LABELS=(
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
org.opencontainers.image.created="$(date --utc --iso-8601=seconds)"
-org.opencontainers.image.documentation="https://github.com/dani-garcia/bitwarden_rs/wiki"
+org.opencontainers.image.documentation="https://github.com/dani-garcia/vaultwarden/wiki"
org.opencontainers.image.licenses="GPL-3.0-only"
org.opencontainers.image.revision="${SOURCE_COMMIT}"
org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}"

View File

@@ -103,7 +103,7 @@ docker buildx build \
# (https://github.com/moby/moby/issues/41017).
#
# Note that we use `arm32v6` instead of `armv6` to be consistent with the
-# existing bitwarden_rs tags, which adhere to the naming conventions of the
+# existing vaultwarden tags, which adhere to the naming conventions of the
# Docker per-architecture repos (e.g., https://hub.docker.com/u/arm32v6).
# Unfortunately, these per-arch repo names aren't always consistent with the
# corresponding platform (OS/arch/variant) IDs, particularly in the case of

View File

@@ -0,0 +1 @@
DROP TABLE sends;

View File

@@ -0,0 +1,25 @@
CREATE TABLE sends (
uuid CHAR(36) NOT NULL PRIMARY KEY,
user_uuid CHAR(36) REFERENCES users (uuid),
organization_uuid CHAR(36) REFERENCES organizations (uuid),
name TEXT NOT NULL,
notes TEXT,
atype INTEGER NOT NULL,
data TEXT NOT NULL,
akey TEXT NOT NULL,
password_hash BLOB,
password_salt BLOB,
password_iter INTEGER,
max_access_count INTEGER,
access_count INTEGER NOT NULL,
creation_date DATETIME NOT NULL,
revision_date DATETIME NOT NULL,
expiration_date DATETIME,
deletion_date DATETIME NOT NULL,
disabled BOOLEAN NOT NULL
);

View File

@@ -0,0 +1 @@
DROP TABLE sends;

View File

@@ -0,0 +1,25 @@
CREATE TABLE sends (
uuid CHAR(36) NOT NULL PRIMARY KEY,
user_uuid CHAR(36) REFERENCES users (uuid),
organization_uuid CHAR(36) REFERENCES organizations (uuid),
name TEXT NOT NULL,
notes TEXT,
atype INTEGER NOT NULL,
data TEXT NOT NULL,
key TEXT NOT NULL,
password_hash BYTEA,
password_salt BYTEA,
password_iter INTEGER,
max_access_count INTEGER,
access_count INTEGER NOT NULL,
creation_date TIMESTAMP NOT NULL,
revision_date TIMESTAMP NOT NULL,
expiration_date TIMESTAMP,
deletion_date TIMESTAMP NOT NULL,
disabled BOOLEAN NOT NULL
);

View File

@@ -0,0 +1 @@
ALTER TABLE sends RENAME COLUMN key TO akey;

View File

@@ -0,0 +1 @@
DROP TABLE sends;

View File

@@ -0,0 +1,25 @@
CREATE TABLE sends (
uuid TEXT NOT NULL PRIMARY KEY,
user_uuid TEXT REFERENCES users (uuid),
organization_uuid TEXT REFERENCES organizations (uuid),
name TEXT NOT NULL,
notes TEXT,
atype INTEGER NOT NULL,
data TEXT NOT NULL,
key TEXT NOT NULL,
password_hash BLOB,
password_salt BLOB,
password_iter INTEGER,
max_access_count INTEGER,
access_count INTEGER NOT NULL,
creation_date DATETIME NOT NULL,
revision_date DATETIME NOT NULL,
expiration_date DATETIME,
deletion_date DATETIME NOT NULL,
disabled BOOLEAN NOT NULL
);

View File

@@ -0,0 +1 @@
ALTER TABLE sends RENAME COLUMN key TO akey;
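For orientation: these migrations add the sends table backing the new Send feature, and the follow-up migrations rename the key column to akey (presumably because KEY is a reserved word in some databases). As a rough sketch of how this table might be declared on the Rust side with Diesel's table! macro, inferred from the SQLite migration above and using the post-rename column name; the actual schema.rs in the repository may differ:

// Hypothetical Diesel schema entry, inferred from the migration above; not copied from the repository.
table! {
    sends (uuid) {
        uuid -> Text,
        user_uuid -> Nullable<Text>,
        organization_uuid -> Nullable<Text>,
        name -> Text,
        notes -> Nullable<Text>,
        atype -> Integer,
        data -> Text,
        akey -> Text,
        password_hash -> Nullable<Binary>,
        password_salt -> Nullable<Binary>,
        password_iter -> Nullable<Integer>,
        max_access_count -> Nullable<Integer>,
        access_count -> Integer,
        creation_date -> Timestamp,
        revision_date -> Timestamp,
        expiration_date -> Nullable<Timestamp>,
        deletion_date -> Timestamp,
        disabled -> Bool,
    }
}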

View File

@@ -1 +1 @@
-nightly-2021-01-25
+nightly-2021-04-14

View File

@@ -1,2 +1,7 @@
version = "Two"
+edition = "2018"
max_width = 120
+newline_style = "Unix"
+use_small_heuristics = "Off"
+struct_lit_single_line = false
+overflow_delimited_expr = true

View File

@@ -1,9 +1,8 @@
use once_cell::sync::Lazy;
use serde::de::DeserializeOwned;
use serde_json::Value;
-use std::{env, process::Command, time::Duration};
-use reqwest::{blocking::Client, header::USER_AGENT};
+use std::{env, time::Duration};
use rocket::{
http::{Cookie, Cookies, SameSite},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
@@ -13,13 +12,13 @@ use rocket::{
use rocket_contrib::json::Json;
use crate::{
-api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
+api::{ApiResult, EmptyResult, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
-db::{backup_database, models::*, DbConn, DbConnType},
+db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
error::{Error, MapResult},
mail,
-util::{format_naive_datetime_local, get_display_size},
+util::{format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker},
CONFIG,
};
@@ -64,12 +63,8 @@ static DB_TYPE: Lazy<&str> = Lazy::new(|| {
.unwrap_or("Unknown")
});
-static CAN_BACKUP: Lazy<bool> = Lazy::new(|| {
-DbConnType::from_url(&CONFIG.database_url())
-.map(|t| t == DbConnType::sqlite)
-.unwrap_or(false)
-&& Command::new("sqlite3").arg("-version").status().is_ok()
-});
+static CAN_BACKUP: Lazy<bool> =
+Lazy::new(|| DbConnType::from_url(&CONFIG.database_url()).map(|t| t == DbConnType::sqlite).unwrap_or(false));
#[get("/")]
fn admin_disabled() -> &'static str {
@@ -96,6 +91,27 @@ impl<'a, 'r> FromRequest<'a, 'r> for Referer {
}
}
+#[derive(Debug)]
+struct IpHeader(Option<String>);
+impl<'a, 'r> FromRequest<'a, 'r> for IpHeader {
+type Error = ();
+fn from_request(req: &'a Request<'r>) -> Outcome<Self, Self::Error> {
+if req.headers().get_one(&CONFIG.ip_header()).is_some() {
+Outcome::Success(IpHeader(Some(CONFIG.ip_header())))
+} else if req.headers().get_one("X-Client-IP").is_some() {
+Outcome::Success(IpHeader(Some(String::from("X-Client-IP"))))
+} else if req.headers().get_one("X-Real-IP").is_some() {
+Outcome::Success(IpHeader(Some(String::from("X-Real-IP"))))
+} else if req.headers().get_one("X-Forwarded-For").is_some() {
+Outcome::Success(IpHeader(Some(String::from("X-Forwarded-For"))))
+} else {
+Outcome::Success(IpHeader(None))
+}
+}
+}
/// Used for `Location` response headers, which must specify an absolute URI
/// (see https://tools.ietf.org/html/rfc2616#section-14.30).
fn admin_url(referer: Referer) -> String {
@@ -121,7 +137,12 @@ fn admin_url(referer: Referer) -> String {
fn admin_login(flash: Option<FlashMessage>) -> ApiResult<Html<String>> {
// If there is an error, show it
let msg = flash.map(|msg| format!("{}: {}", msg.name(), msg.msg()));
-let json = json!({"page_content": "admin/login", "version": VERSION, "error": msg, "urlpath": CONFIG.domain_path()});
+let json = json!({
+"page_content": "admin/login",
+"version": VERSION,
+"error": msg,
+"urlpath": CONFIG.domain_path()
+});
// Return the page
let text = CONFIG.render_template(BASE_TEMPLATE, &json)?;
@@ -145,10 +166,7 @@ fn post_admin_login(
// If the token is invalid, redirect to login page
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
-Err(Flash::error(
-Redirect::to(admin_url(referer)),
-"Invalid admin token, please try again.",
-))
+Err(Flash::error(Redirect::to(admin_url(referer)), "Invalid admin token, please try again."))
} else {
// If the token received is valid, generate JWT and save it as a cookie
let claims = generate_admin_claims();
@@ -291,24 +309,25 @@ fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
}
#[get("/logout")]
-fn logout(mut cookies: Cookies, referer: Referer) -> Result<Redirect, ()> {
+fn logout(mut cookies: Cookies, referer: Referer) -> Redirect {
cookies.remove(Cookie::named(COOKIE_NAME));
-Ok(Redirect::to(admin_url(referer)))
+Redirect::to(admin_url(referer))
}
#[get("/users")]
-fn get_users_json(_token: AdminToken, conn: DbConn) -> JsonResult {
+fn get_users_json(_token: AdminToken, conn: DbConn) -> Json<Value> {
let users = User::get_all(&conn);
let users_json: Vec<Value> = users.iter().map(|u| u.to_json(&conn)).collect();
-Ok(Json(Value::Array(users_json)))
+Json(Value::Array(users_json))
}
#[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn);
let dt_fmt = "%Y-%m-%d %H:%M:%S %Z";
-let users_json: Vec<Value> = users.iter()
+let users_json: Vec<Value> = users
+.iter()
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
@@ -318,7 +337,7 @@ fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, dt_fmt));
usr["last_active"] = match u.last_active(&conn) {
Some(dt) => json!(format_naive_datetime_local(&dt, dt_fmt)),
-None => json!("Never")
+None => json!("Never"),
};
usr
})
@@ -403,7 +422,6 @@ fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: D
user_to_edit.save(&conn)
}
#[post("/users/update_revision")]
fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn)
@@ -412,7 +430,8 @@ fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
#[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
-let organizations_json: Vec<Value> = organizations.iter()
+let organizations_json: Vec<Value> = organizations
+.iter()
.map(|o| {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
@@ -449,44 +468,40 @@ struct GitCommit {
}
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
-let github_api = Client::builder().build()?;
-Ok(github_api
-.get(url)
-.timeout(Duration::from_secs(10))
-.header(USER_AGENT, "Bitwarden_RS")
-.send()?
-.error_for_status()?
-.json::<T>()?)
+let github_api = get_reqwest_client();
+Ok(github_api.get(url).timeout(Duration::from_secs(10)).send()?.error_for_status()?.json::<T>()?)
}
fn has_http_access() -> bool {
-let http_access = Client::builder().build().unwrap();
-match http_access
-.head("https://github.com/dani-garcia/bitwarden_rs")
-.timeout(Duration::from_secs(10))
-.header(USER_AGENT, "Bitwarden_RS")
-.send()
-{
+let http_access = get_reqwest_client();
+match http_access.head("https://github.com/dani-garcia/vaultwarden").timeout(Duration::from_secs(10)).send() {
Ok(r) => r.status().is_success(),
_ => false,
}
}
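Both helpers above now go through a shared util::get_reqwest_client() instead of building a reqwest client and setting a USER_AGENT header at every call site. The helper's definition is not part of this hunk; a minimal sketch of what such a function could look like (the user-agent string and error handling here are assumptions, not taken from the repository):

// Minimal sketch only; the real util::get_reqwest_client() may configure timeouts, proxy support and more.
use reqwest::blocking::{Client, ClientBuilder};

pub fn get_reqwest_client() -> Client {
    ClientBuilder::new()
        .user_agent("Vaultwarden") // replaces the per-request USER_AGENT header removed above
        .build()
        .expect("Failed to build reqwest client")
}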
#[get("/diagnostics")]
-fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
+fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResult<Html<String>> {
use crate::util::read_file_string;
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
-let vault_version_path = format!("{}/{}", CONFIG.web_vault_folder(), "version.json");
-let vault_version_str = read_file_string(&vault_version_path)?;
-let web_vault_version: WebVaultVersion = serde_json::from_str(&vault_version_str)?;
+let web_vault_version: WebVaultVersion =
+match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "bwrs-version.json")) {
+Ok(s) => serde_json::from_str(&s)?,
+_ => match read_file_string(&format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
+Ok(s) => serde_json::from_str(&s)?,
+_ => WebVaultVersion {
+version: String::from("Version file missing"),
+},
+},
+};
// Execute some environment checks
-let running_within_docker = std::path::Path::new("/.dockerenv").exists() || std::path::Path::new("/run/.containerenv").exists();
+let running_within_docker = is_running_in_docker();
let has_http_access = has_http_access();
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
@@ -503,11 +518,11 @@ fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
// TODO: Maybe we need to cache this using a LazyStatic or something. Github only allows 60 requests per hour, and we use 3 here already.
let (latest_release, latest_commit, latest_web_build) = if has_http_access {
(
-match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bitwarden_rs/releases/latest") {
+match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/vaultwarden/releases/latest") {
Ok(r) => r.tag_name,
_ => "-".to_string(),
},
-match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/bitwarden_rs/commits/master") {
+match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/vaultwarden/commits/main") {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
@@ -519,7 +534,9 @@ fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
if running_within_docker {
"-".to_string()
} else {
-match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
+match get_github_api::<GitRelease>(
+"https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest",
+) {
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string(),
}
+let ip_header_name = match &ip_header.0 {
+Some(h) => h,
+_ => "",
+};
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
-"web_vault_version": web_vault_version.version,
"latest_release": latest_release,
"latest_commit": latest_commit,
+"web_vault_enabled": &CONFIG.web_vault_enabled(),
+"web_vault_version": web_vault_version.version,
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"has_http_access": has_http_access,
+"ip_header_exists": &ip_header.0.is_some(),
+"ip_header_match": ip_header_name == CONFIG.ip_header(),
+"ip_header_name": ip_header_name,
+"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"db_type": *DB_TYPE,
+"db_version": get_sql_server_version(&conn),
"admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
+"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
});
@@ -548,9 +577,9 @@ fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
}
#[get("/diagnostics/config")]
-fn get_diagnostics_config(_token: AdminToken) -> JsonResult {
+fn get_diagnostics_config(_token: AdminToken) -> Json<Value> {
let support_json = CONFIG.get_support_json();
-Ok(Json(support_json))
+Json(support_json)
}
#[post("/config", data = "<data>")]
@@ -565,11 +594,11 @@ fn delete_config(_token: AdminToken) -> EmptyResult {
}
#[post("/config/backup_db")]
-fn backup_db(_token: AdminToken) -> EmptyResult {
+fn backup_db(_token: AdminToken, conn: DbConn) -> EmptyResult {
if *CAN_BACKUP {
-backup_database()
+backup_database(&conn)
} else {
-err!("Can't back up current DB (either it's not SQLite or the 'sqlite' binary is not present)");
+err!("Can't back up current DB (Only SQLite supports this feature)");
}
}
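backup_db() now hands the open connection to backup_database(&conn) and no longer depends on the external sqlite3 binary, which is why the Dockerfiles above drop the sqlite3/sqlite packages. One way such an in-process backup can be done (purely an illustration, not necessarily what vaultwarden's backup_database() does) is SQLite's VACUUM INTO:

// Illustrative only: back up an open SQLite connection with VACUUM INTO (SQLite >= 3.27).
// The function name and path handling are hypothetical.
use diesel::{sql_query, sqlite::SqliteConnection, QueryResult, RunQueryDsl};

fn backup_sqlite(conn: &SqliteConnection, backup_path: &str) -> QueryResult<usize> {
    sql_query(format!("VACUUM INTO '{}'", backup_path)).execute(conn)
}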

View File

@@ -1,5 +1,6 @@
use chrono::Utc;
use rocket_contrib::json::Json;
+use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType},
@@ -94,7 +95,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
None => {
// Order is important here; the invitation check must come first
-// because the bitwarden_rs admin can invite anyone, regardless
+// because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions.
if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) {
User::new(data.Email.clone())
@@ -139,19 +140,17 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
}
user.last_verifying_at = Some(user.created_at);
-} else {
-if let Err(e) = mail::send_welcome(&user.email) {
+} else if let Err(e) = mail::send_welcome(&user.email) {
error!("Error sending welcome email: {:#?}", e);
}
}
-}
user.save(&conn)
}
#[get("/accounts/profile")]
-fn profile(headers: Headers, conn: DbConn) -> JsonResult {
-Ok(Json(headers.user.to_json(&conn)))
+fn profile(headers: Headers, conn: DbConn) -> Json<Value> {
+Json(headers.user.to_json(&conn))
}
#[derive(Deserialize, Debug)]
@@ -321,15 +320,7 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
err!("The cipher is not owned by the user")
}
-update_cipher_from_data(
-&mut saved_cipher,
-cipher_data,
-&headers,
-false,
-&conn,
-&nt,
-UpdateType::CipherUpdate,
-)?
+update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::CipherUpdate)?
}
// Update user data
@@ -612,7 +603,7 @@ struct PreloginData {
}
#[post("/accounts/prelogin", data = "<data>")]
-fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> JsonResult {
+fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
let data: PreloginData = data.into_inner().data;
let (kdf_type, kdf_iter) = match User::find_by_mail(&data.Email, &conn) {
@@ -620,10 +611,10 @@ fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> JsonResult {
None => (User::CLIENT_KDF_TYPE_DEFAULT, User::CLIENT_KDF_ITER_DEFAULT),
};
-Ok(Json(json!({
+Json(json!({
"Kdf": kdf_type,
"KdfIterations": kdf_iter
-})))
+}))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]

View File

@@ -13,7 +13,7 @@ use crate::{
api::{self, EmptyResult, JsonResult, JsonUpcase, Notify, PasswordData, UpdateType},
auth::Headers,
crypto,
-db::{models::*, DbConn},
+db::{models::*, DbConn, DbPool},
CONFIG,
};
@@ -25,7 +25,7 @@ pub fn routes() -> Vec<Route> {
// whether the user is an owner/admin of the relevant org, and if so,
// allows the operation unconditionally.
//
-// bitwarden_rs factors in the org owner/admin status as part of
+// vaultwarden factors in the org owner/admin status as part of
// determining the write accessibility of a cipher, so most
// admin/non-admin implementations can be shared.
routes![
@@ -77,6 +77,15 @@ pub fn routes() -> Vec<Route> {
]
}
+pub fn purge_trashed_ciphers(pool: DbPool) {
+debug!("Purging trashed ciphers");
+if let Ok(conn) = pool.get() {
+Cipher::purge_trash(&conn);
+} else {
+error!("Failed to get DB connection while purging trashed ciphers")
+}
+}
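purge_trashed_ciphers() (and the analogous purge_sends() exported from the sends module) take a DbPool so they can be driven by the new generic job scheduler from the commit list rather than by an HTTP request. As a rough sketch of how such a job could be wired up with the job_scheduler crate; the cron string, function name and module paths are illustrative, and in the real code the schedule is configurable:

// Illustrative wiring only; vaultwarden's actual scheduler setup lives elsewhere
// and reads its cron expressions from the configuration.
// Assumes DbPool is cheaply cloneable (it wraps an r2d2 connection pool).
use job_scheduler::{Job, JobScheduler};
use std::{thread, time::Duration};

fn schedule_purge_jobs(pool: crate::db::DbPool) {
    let mut sched = JobScheduler::new();
    // Second 0, minute 5 of every hour: offset from the top of the hour.
    sched.add(Job::new("0 5 * * * *".parse().unwrap(), || {
        crate::api::purge_trashed_ciphers(pool.clone());
    }));
    loop {
        sched.tick();
        thread::sleep(Duration::from_secs(30));
    }
}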
#[derive(FromForm, Default)]
struct SyncData {
#[form(field = "excludeDomains")]
@@ -84,57 +93,57 @@ struct SyncData {
}
#[get("/sync?<data..>")]
-fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
+fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> Json<Value> {
let user_json = headers.user.to_json(&conn);
let folders = Folder::find_by_user(&headers.user.uuid, &conn);
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
let collections = Collection::find_by_user_uuid(&headers.user.uuid, &conn);
-let collections_json: Vec<Value> = collections.iter()
-.map(|c| c.to_json_details(&headers.user.uuid, &conn))
-.collect();
+let collections_json: Vec<Value> =
+collections.iter().map(|c| c.to_json_details(&headers.user.uuid, &conn)).collect();
let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
-let ciphers_json: Vec<Value> = ciphers
-.iter()
-.map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn))
-.collect();
+let ciphers_json: Vec<Value> =
+ciphers.iter().map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn)).collect();
+let sends = Send::find_by_user(&headers.user.uuid, &conn);
+let sends_json: Vec<Value> = sends.iter().map(|s| s.to_json()).collect();
let domains_json = if data.exclude_domains {
Value::Null
} else {
-api::core::_get_eq_domains(headers, true).unwrap().into_inner()
+api::core::_get_eq_domains(headers, true).into_inner()
};
-Ok(Json(json!({
+Json(json!({
"Profile": user_json,
"Folders": folders_json,
"Collections": collections_json,
"Policies": policies_json,
"Ciphers": ciphers_json,
"Domains": domains_json,
+"Sends": sends_json,
+"unofficialServer": true,
"Object": "sync"
-})))
+}))
}
#[get("/ciphers")]
-fn get_ciphers(headers: Headers, conn: DbConn) -> JsonResult {
+fn get_ciphers(headers: Headers, conn: DbConn) -> Json<Value> {
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
-let ciphers_json: Vec<Value> = ciphers
-.iter()
-.map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn))
-.collect();
+let ciphers_json: Vec<Value> =
+ciphers.iter().map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn)).collect();
-Ok(Json(json!({
+Json(json!({
"Data": ciphers_json,
"Object": "list",
"ContinuationToken": null
-})))
+}))
}
#[get("/ciphers/<uuid>")]
@@ -271,26 +280,12 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
/// allowed to delete or share such ciphers to an org, however.
///
/// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
-fn enforce_personal_ownership_policy(
-data: &CipherData,
-headers: &Headers,
-conn: &DbConn
-) -> EmptyResult {
+fn enforce_personal_ownership_policy(data: &CipherData, headers: &Headers, conn: &DbConn) -> EmptyResult {
if data.OrganizationId.is_none() {
let user_uuid = &headers.user.uuid;
-for policy in OrgPolicy::find_by_user(user_uuid, conn) {
-if policy.enabled && policy.has_type(OrgPolicyType::PersonalOwnership) {
-let org_uuid = &policy.org_uuid;
-match UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
-Some(user) =>
-if user.atype < UserOrgType::Admin &&
-user.has_status(UserOrgStatus::Confirmed) {
-err!("Due to an Enterprise Policy, you are restricted \
-from saving items to your personal vault.")
-},
-None => err!("Error looking up user type"),
-}
-}
+let policy_type = OrgPolicyType::PersonalOwnership;
+if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
+err!("Due to an Enterprise Policy, you are restricted from saving items to your personal vault.")
}
}
Ok(())
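The per-user policy check is now delegated to OrgPolicy::is_applicable_to_user(), whose body is not shown in this diff. A rough reconstruction of what it presumably does, based on the inline logic removed above (the actual implementation may differ):

// Reconstructed from the logic removed above; not copied from the repository.
impl OrgPolicy {
    pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
        for policy in OrgPolicy::find_by_user(user_uuid, conn) {
            if policy.enabled && policy.has_type(policy_type) {
                if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn) {
                    if user.atype < UserOrgType::Admin && user.has_status(UserOrgStatus::Confirmed) {
                        return true;
                    }
                }
            }
        }
        false
    }
}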
@@ -309,11 +304,12 @@ pub fn update_cipher_from_data(
// Check that the client isn't updating an existing cipher with stale data.
if let Some(dt) = data.LastKnownRevisionDate {
-match NaiveDateTime::parse_from_str(&dt, "%+") { // ISO 8601 format
-Err(err) =>
-warn!("Error parsing LastKnownRevisionDate '{}': {}", dt, err),
-Ok(dt) if cipher.updated_at.signed_duration_since(dt).num_seconds() > 1 =>
-err!("The client copy of this cipher is out of date. Resync the client and try again."),
+match NaiveDateTime::parse_from_str(&dt, "%+") {
+// ISO 8601 format
+Err(err) => warn!("Error parsing LastKnownRevisionDate '{}': {}", dt, err),
+Ok(dt) if cipher.updated_at.signed_duration_since(dt).num_seconds() > 1 => {
+err!("The client copy of this cipher is out of date. Resync the client and try again.")
+}
Ok(_) => (),
}
}
@@ -386,10 +382,7 @@ pub fn update_cipher_from_data(
// But, we at least know we do not need to store and return this specific key.
fn _clean_cipher_data(mut json_data: Value) -> Value {
if json_data.is_array() {
-json_data.as_array_mut()
-.unwrap()
-.iter_mut()
-.for_each(|ref mut f| {
+json_data.as_array_mut().unwrap().iter_mut().for_each(|ref mut f| {
f.as_object_mut().unwrap().remove("Response");
});
};
@@ -413,13 +406,13 @@ pub fn update_cipher_from_data(
data["Uris"] = _clean_cipher_data(data["Uris"].clone());
}
data
-},
+}
None => err!("Data missing"),
};
cipher.name = data.Name;
cipher.notes = data.Notes;
-cipher.fields = data.Fields.map(|f| _clean_cipher_data(f).to_string() );
+cipher.fields = data.Fields.map(|f| _clean_cipher_data(f).to_string());
cipher.data = type_data.to_string();
cipher.password_history = data.PasswordHistory.map(|f| f.to_string());
@@ -594,11 +587,8 @@ fn post_collections_admin(
}
let posted_collections: HashSet<String> = data.CollectionIds.iter().cloned().collect();
-let current_collections: HashSet<String> = cipher
-.get_collections(&headers.user.uuid, &conn)
-.iter()
-.cloned()
-.collect();
+let current_collections: HashSet<String> =
+cipher.get_collections(&headers.user.uuid, &conn).iter().cloned().collect();
for collection in posted_collections.symmetric_difference(&current_collections) {
match Collection::find_by_uuid(&collection, &conn) {
@@ -834,7 +824,8 @@ fn post_attachment(
let file_name = HEXLOWER.encode(&crypto::get_random(vec![0; 10]));
let path = base_path.join(&file_name);
-let size = match field.data.save().memory_threshold(0).size_limit(size_limit).with_path(path.clone()) {
+let size =
+match field.data.save().memory_threshold(0).size_limit(size_limit).with_path(path.clone()) {
SaveResult::Full(SavedData::File(_, size)) => size as i32,
SaveResult::Full(other) => {
std::fs::remove_file(path).ok();
@@ -986,12 +977,22 @@ fn delete_cipher_selected_admin(data: JsonUpcase<Value>, headers: Headers, conn:
}
#[post("/ciphers/delete-admin", data = "<data>")]
-fn delete_cipher_selected_post_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
+fn delete_cipher_selected_post_admin(
+data: JsonUpcase<Value>,
+headers: Headers,
+conn: DbConn,
+nt: Notify,
+) -> EmptyResult {
delete_cipher_selected_post(data, headers, conn, nt)
}
#[put("/ciphers/delete-admin", data = "<data>")]
-fn delete_cipher_selected_put_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
+fn delete_cipher_selected_put_admin(
+data: JsonUpcase<Value>,
+headers: Headers,
+conn: DbConn,
+nt: Notify,
+) -> EmptyResult {
delete_cipher_selected_put(data, headers, conn, nt)
}
@@ -1142,7 +1143,13 @@ fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, soft_del
Ok(())
}
-fn _delete_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, soft_delete: bool, nt: Notify) -> EmptyResult {
+fn _delete_multiple_ciphers(
+data: JsonUpcase<Value>,
+headers: Headers,
+conn: DbConn,
+soft_delete: bool,
+nt: Notify,
+) -> EmptyResult {
let data: Value = data.into_inner().data;
let uuids = match data.get("Ids") {
@@ -1194,7 +1201,7 @@ fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: &Headers, conn: &
for uuid in uuids {
match _restore_cipher_by_uuid(uuid, headers, conn, nt) {
Ok(json) => ciphers.push(json.into_inner()),
-err => return err
+err => return err,
}
}

View File

@@ -8,28 +8,20 @@ use crate::{
};
pub fn routes() -> Vec<rocket::Route> {
-routes![
-get_folders,
-get_folder,
-post_folders,
-post_folder,
-put_folder,
-delete_folder_post,
-delete_folder,
-]
+routes![get_folders, get_folder, post_folders, post_folder, put_folder, delete_folder_post, delete_folder,]
}
#[get("/folders")]
-fn get_folders(headers: Headers, conn: DbConn) -> JsonResult {
+fn get_folders(headers: Headers, conn: DbConn) -> Json<Value> {
let folders = Folder::find_by_user(&headers.user.uuid, &conn);
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
-Ok(Json(json!({
+Json(json!({
"Data": folders_json,
"Object": "list",
"ContinuationToken": null,
-})))
+}))
}
#[get("/folders/<uuid>")]


@@ -2,17 +2,15 @@ mod accounts;
mod ciphers; mod ciphers;
mod folders; mod folders;
mod organizations; mod organizations;
mod sends;
pub mod two_factor; pub mod two_factor;
pub use ciphers::purge_trashed_ciphers;
pub use sends::purge_sends;
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
let mut mod_routes = routes![ let mut mod_routes =
clear_device_token, routes![clear_device_token, put_device_token, get_eq_domains, post_eq_domains, put_eq_domains, hibp_breach,];
put_device_token,
get_eq_domains,
post_eq_domains,
put_eq_domains,
hibp_breach,
];
let mut routes = Vec::new(); let mut routes = Vec::new();
routes.append(&mut accounts::routes()); routes.append(&mut accounts::routes());
@@ -20,6 +18,7 @@ pub fn routes() -> Vec<Route> {
routes.append(&mut folders::routes()); routes.append(&mut folders::routes());
routes.append(&mut organizations::routes()); routes.append(&mut organizations::routes());
routes.append(&mut two_factor::routes()); routes.append(&mut two_factor::routes());
routes.append(&mut sends::routes());
routes.append(&mut mod_routes); routes.append(&mut mod_routes);
routes routes
@@ -28,19 +27,21 @@ pub fn routes() -> Vec<Route> {
// //
// Move this somewhere else // Move this somewhere else
// //
use rocket::response::Response;
use rocket::Route; use rocket::Route;
use rocket_contrib::json::Json; use rocket_contrib::json::Json;
use serde_json::Value; use serde_json::Value;
use crate::{ use crate::{
api::{EmptyResult, JsonResult, JsonUpcase}, api::{JsonResult, JsonUpcase},
auth::Headers, auth::Headers,
db::DbConn, db::DbConn,
error::Error, error::Error,
util::get_reqwest_client,
}; };
#[put("/devices/identifier/<uuid>/clear-token")] #[put("/devices/identifier/<uuid>/clear-token")]
fn clear_device_token(uuid: String) -> EmptyResult { fn clear_device_token<'a>(uuid: String) -> Response<'a> {
// This endpoint doesn't have auth header // This endpoint doesn't have auth header
let _ = uuid; let _ = uuid;
@@ -49,11 +50,11 @@ fn clear_device_token(uuid: String) -> EmptyResult {
// This only clears push token // This only clears push token
// https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109 // https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109
// https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37 // https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37
Ok(()) Response::new()
} }
#[put("/devices/identifier/<uuid>/token", data = "<data>")] #[put("/devices/identifier/<uuid>/token", data = "<data>")]
fn put_device_token(uuid: String, data: JsonUpcase<Value>, headers: Headers) -> JsonResult { fn put_device_token(uuid: String, data: JsonUpcase<Value>, headers: Headers) -> Json<Value> {
let _data: Value = data.into_inner().data; let _data: Value = data.into_inner().data;
// Data has a single string value "PushToken" // Data has a single string value "PushToken"
let _ = uuid; let _ = uuid;
@@ -61,13 +62,13 @@ fn put_device_token(uuid: String, data: JsonUpcase<Value>, headers: Headers) ->
// TODO: This should save the push token, but we don't have push functionality // TODO: This should save the push token, but we don't have push functionality
Ok(Json(json!({ Json(json!({
"Id": headers.device.uuid, "Id": headers.device.uuid,
"Name": headers.device.name, "Name": headers.device.name,
"Type": headers.device.atype, "Type": headers.device.atype,
"Identifier": headers.device.uuid, "Identifier": headers.device.uuid,
"CreationDate": crate::util::format_date(&headers.device.created_at), "CreationDate": crate::util::format_date(&headers.device.created_at),
}))) }))
} }
#[derive(Serialize, Deserialize, Debug)] #[derive(Serialize, Deserialize, Debug)]
@@ -81,11 +82,11 @@ struct GlobalDomain {
const GLOBAL_DOMAINS: &str = include_str!("../../static/global_domains.json"); const GLOBAL_DOMAINS: &str = include_str!("../../static/global_domains.json");
#[get("/settings/domains")] #[get("/settings/domains")]
fn get_eq_domains(headers: Headers) -> JsonResult { fn get_eq_domains(headers: Headers) -> Json<Value> {
_get_eq_domains(headers, false) _get_eq_domains(headers, false)
} }
fn _get_eq_domains(headers: Headers, no_excluded: bool) -> JsonResult { fn _get_eq_domains(headers: Headers, no_excluded: bool) -> Json<Value> {
let user = headers.user; let user = headers.user;
use serde_json::from_str; use serde_json::from_str;
@@ -102,11 +103,11 @@ fn _get_eq_domains(headers: Headers, no_excluded: bool) -> JsonResult {
globals.retain(|g| !g.Excluded); globals.retain(|g| !g.Excluded);
} }
Ok(Json(json!({ Json(json!({
"EquivalentDomains": equivalent_domains, "EquivalentDomains": equivalent_domains,
"GlobalEquivalentDomains": globals, "GlobalEquivalentDomains": globals,
"Object": "domains", "Object": "domains",
}))) }))
} }
#[derive(Deserialize, Debug)] #[derive(Deserialize, Debug)]
@@ -141,22 +142,15 @@ fn put_eq_domains(data: JsonUpcase<EquivDomainData>, headers: Headers, conn: DbC
#[get("/hibp/breach?<username>")] #[get("/hibp/breach?<username>")]
fn hibp_breach(username: String) -> JsonResult { fn hibp_breach(username: String) -> JsonResult {
let user_agent = "Bitwarden_RS";
let url = format!( let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false&includeUnverified=false", "https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false&includeUnverified=false",
username username
); );
use reqwest::{blocking::Client, header::USER_AGENT};
if let Some(api_key) = crate::CONFIG.hibp_api_key() { if let Some(api_key) = crate::CONFIG.hibp_api_key() {
let hibp_client = Client::builder().build()?; let hibp_client = get_reqwest_client();
let res = hibp_client let res = hibp_client.get(&url).header("hibp-api-key", api_key).send()?;
.get(&url)
.header(USER_AGENT, user_agent)
.header("hibp-api-key", api_key)
.send()?;
// If we get a 404, return a 404, it means no breached accounts // If we get a 404, return a 404, it means no breached accounts
if res.status() == 404 { if res.status() == 404 {


@@ -5,7 +5,7 @@ use serde_json::Value;
use crate::{ use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, Notify, NumberOrString, PasswordData, UpdateType}, api::{EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, Notify, NumberOrString, PasswordData, UpdateType},
auth::{decode_invite, AdminHeaders, Headers, OwnerHeaders, ManagerHeaders, ManagerHeadersLoose}, auth::{decode_invite, AdminHeaders, Headers, ManagerHeaders, ManagerHeadersLoose, OwnerHeaders},
db::{models::*, DbConn}, db::{models::*, DbConn},
mail, CONFIG, mail, CONFIG,
}; };
@@ -192,8 +192,8 @@ fn post_organization(
// GET /api/collections?writeOnly=false // GET /api/collections?writeOnly=false
#[get("/collections")] #[get("/collections")]
fn get_user_collections(headers: Headers, conn: DbConn) -> JsonResult { fn get_user_collections(headers: Headers, conn: DbConn) -> Json<Value> {
Ok(Json(json!({ Json(json!({
"Data": "Data":
Collection::find_by_user_uuid(&headers.user.uuid, &conn) Collection::find_by_user_uuid(&headers.user.uuid, &conn)
.iter() .iter()
@@ -201,12 +201,12 @@ fn get_user_collections(headers: Headers, conn: DbConn) -> JsonResult {
.collect::<Value>(), .collect::<Value>(),
"Object": "list", "Object": "list",
"ContinuationToken": null, "ContinuationToken": null,
}))) }))
} }
#[get("/organizations/<org_id>/collections")] #[get("/organizations/<org_id>/collections")]
fn get_org_collections(org_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult { fn get_org_collections(org_id: String, _headers: AdminHeaders, conn: DbConn) -> Json<Value> {
Ok(Json(json!({ Json(json!({
"Data": "Data":
Collection::find_by_organization(&org_id, &conn) Collection::find_by_organization(&org_id, &conn)
.iter() .iter()
@@ -214,7 +214,7 @@ fn get_org_collections(org_id: String, _headers: AdminHeaders, conn: DbConn) ->
.collect::<Value>(), .collect::<Value>(),
"Object": "list", "Object": "list",
"ContinuationToken": null, "ContinuationToken": null,
}))) }))
} }
#[post("/organizations/<org_id>/collections", data = "<data>")] #[post("/organizations/<org_id>/collections", data = "<data>")]
@@ -333,7 +333,12 @@ fn post_organization_collection_delete_user(
} }
#[delete("/organizations/<org_id>/collections/<col_id>")] #[delete("/organizations/<org_id>/collections/<col_id>")]
fn delete_organization_collection(org_id: String, col_id: String, _headers: ManagerHeaders, conn: DbConn) -> EmptyResult { fn delete_organization_collection(
org_id: String,
col_id: String,
_headers: ManagerHeaders,
conn: DbConn,
) -> EmptyResult {
match Collection::find_by_uuid(&col_id, &conn) { match Collection::find_by_uuid(&col_id, &conn) {
None => err!("Collection not found"), None => err!("Collection not found"),
Some(collection) => { Some(collection) => {
@@ -426,9 +431,7 @@ fn put_collection_users(
continue; continue;
} }
CollectionUser::save(&user.user_uuid, &coll_id, CollectionUser::save(&user.user_uuid, &coll_id, d.ReadOnly, d.HidePasswords, &conn)?;
d.ReadOnly, d.HidePasswords,
&conn)?;
} }
Ok(()) Ok(())
@@ -441,30 +444,28 @@ struct OrgIdData {
} }
#[get("/ciphers/organization-details?<data..>")] #[get("/ciphers/organization-details?<data..>")]
fn get_org_details(data: Form<OrgIdData>, headers: Headers, conn: DbConn) -> JsonResult { fn get_org_details(data: Form<OrgIdData>, headers: Headers, conn: DbConn) -> Json<Value> {
let ciphers = Cipher::find_by_org(&data.organization_id, &conn); let ciphers = Cipher::find_by_org(&data.organization_id, &conn);
let ciphers_json: Vec<Value> = ciphers let ciphers_json: Vec<Value> =
.iter() ciphers.iter().map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn)).collect();
.map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn))
.collect();
Ok(Json(json!({ Json(json!({
"Data": ciphers_json, "Data": ciphers_json,
"Object": "list", "Object": "list",
"ContinuationToken": null, "ContinuationToken": null,
}))) }))
} }
#[get("/organizations/<org_id>/users")] #[get("/organizations/<org_id>/users")]
fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) -> JsonResult { fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) -> Json<Value> {
let users = UserOrganization::find_by_org(&org_id, &conn); let users = UserOrganization::find_by_org(&org_id, &conn);
let users_json: Vec<Value> = users.iter().map(|c| c.to_json_user_details(&conn)).collect(); let users_json: Vec<Value> = users.iter().map(|c| c.to_json_user_details(&conn)).collect();
Ok(Json(json!({ Json(json!({
"Data": users_json, "Data": users_json,
"Object": "list", "Object": "list",
"ContinuationToken": null, "ContinuationToken": null,
}))) }))
} }
#[derive(Deserialize)] #[derive(Deserialize)]
@@ -544,9 +545,7 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
match Collection::find_by_uuid_and_org(&col.Id, &org_id, &conn) { match Collection::find_by_uuid_and_org(&col.Id, &org_id, &conn) {
None => err!("Collection not found in Organization"), None => err!("Collection not found in Organization"),
Some(collection) => { Some(collection) => {
CollectionUser::save(&user.uuid, &collection.uuid, CollectionUser::save(&user.uuid, &collection.uuid, col.ReadOnly, col.HidePasswords, &conn)?;
col.ReadOnly, col.HidePasswords,
&conn)?;
} }
} }
} }
@@ -655,7 +654,7 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
} }
if CONFIG.mail_enabled() { if CONFIG.mail_enabled() {
let mut org_name = String::from("bitwarden_rs"); let mut org_name = CONFIG.invitation_org_name();
if let Some(org_id) = &claims.org_id { if let Some(org_id) = &claims.org_id {
org_name = match Organization::find_by_uuid(&org_id, &conn) { org_name = match Organization::find_by_uuid(&org_id, &conn) {
Some(org) => org.name, Some(org) => org.name,
@@ -801,9 +800,13 @@ fn edit_user(
match Collection::find_by_uuid_and_org(&col.Id, &org_id, &conn) { match Collection::find_by_uuid_and_org(&col.Id, &org_id, &conn) {
None => err!("Collection not found in Organization"), None => err!("Collection not found in Organization"),
Some(collection) => { Some(collection) => {
CollectionUser::save(&user_to_edit.user_uuid, &collection.uuid, CollectionUser::save(
col.ReadOnly, col.HidePasswords, &user_to_edit.user_uuid,
&conn)?; &collection.uuid,
col.ReadOnly,
col.HidePasswords,
&conn,
)?;
} }
} }
} }
@@ -899,15 +902,7 @@ fn post_org_import(
.into_iter() .into_iter()
.map(|cipher_data| { .map(|cipher_data| {
let mut cipher = Cipher::new(cipher_data.Type, cipher_data.Name.clone()); let mut cipher = Cipher::new(cipher_data.Type, cipher_data.Name.clone());
update_cipher_from_data( update_cipher_from_data(&mut cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::CipherCreate)
&mut cipher,
cipher_data,
&headers,
false,
&conn,
&nt,
UpdateType::CipherCreate,
)
.ok(); .ok();
cipher cipher
}) })
@@ -930,15 +925,15 @@ fn post_org_import(
} }
#[get("/organizations/<org_id>/policies")] #[get("/organizations/<org_id>/policies")]
fn list_policies(org_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult { fn list_policies(org_id: String, _headers: AdminHeaders, conn: DbConn) -> Json<Value> {
let policies = OrgPolicy::find_by_org(&org_id, &conn); let policies = OrgPolicy::find_by_org(&org_id, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect(); let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({ Json(json!({
"Data": policies_json, "Data": policies_json,
"Object": "list", "Object": "list",
"ContinuationToken": null "ContinuationToken": null
}))) }))
} }
#[get("/organizations/<org_id>/policies/token?<token>")] #[get("/organizations/<org_id>/policies/token?<token>")]
@@ -989,7 +984,13 @@ struct PolicyData {
} }
#[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")] #[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")]
fn put_policy(org_id: String, pol_type: i32, data: Json<PolicyData>, _headers: AdminHeaders, conn: DbConn) -> JsonResult { fn put_policy(
org_id: String,
pol_type: i32,
data: Json<PolicyData>,
_headers: AdminHeaders,
conn: DbConn,
) -> JsonResult {
let data: PolicyData = data.into_inner(); let data: PolicyData = data.into_inner();
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) { let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
@@ -1017,8 +1018,8 @@ fn get_organization_tax(org_id: String, _headers: Headers, _conn: DbConn) -> Emp
} }
#[get("/plans")] #[get("/plans")]
fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult { fn get_plans(_headers: Headers, _conn: DbConn) -> Json<Value> {
Ok(Json(json!({ Json(json!({
"Object": "list", "Object": "list",
"Data": [ "Data": [
{ {
@@ -1065,17 +1066,17 @@ fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
} }
], ],
"ContinuationToken": null "ContinuationToken": null
}))) }))
} }
#[get("/plans/sales-tax-rates")] #[get("/plans/sales-tax-rates")]
fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> JsonResult { fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> Json<Value> {
// Prevent a 404 error, which also causes Javascript errors. // Prevent a 404 error, which also causes Javascript errors.
Ok(Json(json!({ Json(json!({
"Object": "list", "Object": "list",
"Data": [], "Data": [],
"ContinuationToken": null "ContinuationToken": null
}))) }))
} }
#[derive(Deserialize, Debug)] #[derive(Deserialize, Debug)]
@@ -1127,8 +1128,7 @@ fn import(org_id: String, data: JsonUpcase<OrgImportData>, headers: Headers, con
// If user is not part of the organization, but it exists // If user is not part of the organization, but it exists
} else if UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn).is_none() { } else if UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn).is_none() {
if let Some (user) = User::find_by_mail(&user_data.Email, &conn) { if let Some(user) = User::find_by_mail(&user_data.Email, &conn) {
let user_org_status = if CONFIG.mail_enabled() { let user_org_status = if CONFIG.mail_enabled() {
UserOrgStatus::Invited as i32 UserOrgStatus::Invited as i32
} else { } else {
@@ -1164,7 +1164,7 @@ fn import(org_id: String, data: JsonUpcase<OrgImportData>, headers: Headers, con
// If this flag is enabled, any user that isn't provided in the Users list will be removed (by default they will be kept unless they have Deleted == true) // If this flag is enabled, any user that isn't provided in the Users list will be removed (by default they will be kept unless they have Deleted == true)
if data.OverwriteExisting { if data.OverwriteExisting {
for user_org in UserOrganization::find_by_org_and_type(&org_id, UserOrgType::User as i32, &conn) { for user_org in UserOrganization::find_by_org_and_type(&org_id, UserOrgType::User as i32, &conn) {
if let Some (user_email) = User::find_by_uuid(&user_org.user_uuid, &conn).map(|u| u.email) { if let Some(user_email) = User::find_by_uuid(&user_org.user_uuid, &conn).map(|u| u.email) {
if !data.Users.iter().any(|u| u.Email == user_email) { if !data.Users.iter().any(|u| u.Email == user_email) {
user_org.delete(&conn)?; user_org.delete(&conn)?;
} }

src/api/core/sends.rs (new file, 391 lines)

@@ -0,0 +1,391 @@
use std::{io::Read, path::Path};
use chrono::{DateTime, Duration, Utc};
use multipart::server::{save::SavedData, Multipart, SaveResult};
use rocket::{http::ContentType, Data};
use rocket_contrib::json::Json;
use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
auth::{Headers, Host},
db::{models::*, DbConn, DbPool},
CONFIG,
};
const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";
pub fn routes() -> Vec<rocket::Route> {
routes![post_send, post_send_file, post_access, post_access_file, put_send, delete_send, put_remove_password]
}
pub fn purge_sends(pool: DbPool) {
debug!("Purging sends");
if let Ok(conn) = pool.get() {
Send::purge(&conn);
} else {
error!("Failed to get DB connection while purging sends")
}
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendData {
pub Type: i32,
pub Key: String,
pub Password: Option<String>,
pub MaxAccessCount: Option<i32>,
pub ExpirationDate: Option<DateTime<Utc>>,
pub DeletionDate: DateTime<Utc>,
pub Disabled: bool,
// Data field
pub Name: String,
pub Notes: Option<String>,
pub Text: Option<Value>,
pub File: Option<Value>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
/// an org with this policy enabled isn't allowed to create new Sends or
/// modify existing ones, but is allowed to delete them.
///
/// Ref: https://bitwarden.com/help/article/policies/#disable-send
fn enforce_disable_send_policy(headers: &Headers, conn: &DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let policy_type = OrgPolicyType::DisableSend;
if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
}
Ok(())
}
fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
let data_val = if data.Type == SendType::Text as i32 {
data.Text
} else if data.Type == SendType::File as i32 {
data.File
} else {
err!("Invalid Send type")
};
let data_str = if let Some(mut d) = data_val {
d.as_object_mut().and_then(|o| o.remove("Response"));
serde_json::to_string(&d)?
} else {
err!("Send data not provided");
};
if data.DeletionDate > Utc::now() + Duration::days(31) {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
let mut send = Send::new(data.Type, data.Name, data_str, data.Key, data.DeletionDate.naive_utc());
send.user_uuid = Some(user_uuid);
send.notes = data.Notes;
send.max_access_count = data.MaxAccessCount;
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.disabled = data.Disabled;
send.atype = data.Type;
send.set_password(data.Password.as_deref());
Ok(send)
}
#[post("/sends", data = "<data>")]
fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
let data: SendData = data.into_inner().data;
if data.Type == SendType::File as i32 {
err!("File sends should use /api/sends/file")
}
let mut send = create_send(data, headers.user.uuid.clone())?;
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
Ok(Json(send.to_json()))
}
#[post("/sends/file", format = "multipart/form-data", data = "<data>")]
fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
let boundary = content_type.params().next().expect("No boundary provided").1;
let mut mpart = Multipart::with_body(data.open(), boundary);
// First entry is the SendData JSON
let mut model_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "model" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let mut buf = String::new();
model_entry.data.read_to_string(&mut buf)?;
let data = serde_json::from_str::<crate::util::UpCase<SendData>>(&buf)?;
// Get the file length and add an extra 10% to avoid issues
const SIZE_110_MB: u64 = 115_343_360;
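// 115_343_360 bytes is exactly 110 * 1024 * 1024: presumably the 100 MiB Send limit plus the extra 10% margin mentioned above.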
let size_limit = match CONFIG.user_attachment_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn);
if left <= 0 {
err!("Attachment size limit reached! Delete some files to open space")
}
std::cmp::Ord::max(left as u64, SIZE_110_MB)
}
None => SIZE_110_MB,
};
// Create the Send
let mut send = create_send(data.data, headers.user.uuid.clone())?;
let file_id: String = data_encoding::HEXLOWER.encode(&crate::crypto::get_random(vec![0; 32]));
if send.atype != SendType::File as i32 {
err!("Send content is not a file");
}
let file_path = Path::new(&CONFIG.sends_folder()).join(&send.uuid).join(&file_id);
// Read the data entry and save the file
let mut data_entry = match mpart.read_entry()? {
Some(e) if &*e.headers.name == "data" => e,
Some(_) => err!("Invalid entry name"),
None => err!("No model entry present"),
};
let size = match data_entry.data.save().memory_threshold(0).size_limit(size_limit).with_path(&file_path) {
SaveResult::Full(SavedData::File(_, size)) => size as i32,
SaveResult::Full(other) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment is not a file: {:?}", other));
}
SaveResult::Partial(_, reason) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Attachment size limit exceeded with this file: {:?}", reason));
}
SaveResult::Error(e) => {
std::fs::remove_file(&file_path).ok();
err!(format!("Error: {:?}", e));
}
};
// Set ID and sizes
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id));
o.insert(String::from("Size"), Value::Number(size.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
}
send.data = serde_json::to_string(&data_value)?;
// Save the changes in the database
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
Ok(Json(send.to_json()))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendAccessData {
pub Password: Option<String>,
}
#[post("/sends/access/<access_id>", data = "<data>")]
fn post_access(access_id: String, data: JsonUpcase<SendAccessData>, conn: DbConn) -> JsonResult {
let mut send = match Send::find_by_access_id(&access_id, &conn) {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
if let Some(max_access_count) = send.max_access_count {
if send.access_count >= max_access_count {
err_code!(SEND_INACCESSIBLE_MSG, 404);
}
}
if let Some(expiration) = send.expiration_date {
if Utc::now().naive_utc() >= expiration {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
}
if Utc::now().naive_utc() >= send.deletion_date {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
if send.disabled {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
if send.password_hash.is_some() {
match data.into_inner().data.Password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password."),
None => err_code!("Password not provided", 401),
}
}
// Files are incremented during the download
if send.atype == SendType::Text as i32 {
send.access_count += 1;
}
send.save(&conn)?;
Ok(Json(send.to_json_access()))
}
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
fn post_access_file(
send_id: String,
file_id: String,
data: JsonUpcase<SendAccessData>,
host: Host,
conn: DbConn,
) -> JsonResult {
let mut send = match Send::find_by_uuid(&send_id, &conn) {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
};
if let Some(max_access_count) = send.max_access_count {
if send.access_count >= max_access_count {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
}
if let Some(expiration) = send.expiration_date {
if Utc::now().naive_utc() >= expiration {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
}
if Utc::now().naive_utc() >= send.deletion_date {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
if send.disabled {
err_code!(SEND_INACCESSIBLE_MSG, 404)
}
if send.password_hash.is_some() {
match data.into_inner().data.Password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password."),
None => err_code!("Password not provided", 401),
}
}
send.access_count += 1;
send.save(&conn)?;
Ok(Json(json!({
"Object": "send-fileDownload",
"Id": file_id,
"Url": format!("{}/sends/{}/{}", &host.host, send_id, file_id)
})))
}
#[put("/sends/<id>", data = "<data>")]
fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
let data: SendData = data.into_inner().data;
let mut send = match Send::find_by_uuid(&id, &conn) {
Some(s) => s,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
if send.atype != data.Type {
err!("Sends can't change type")
}
// When updating a file Send, we receive nulls in the File field, as it's immutable,
// so we only need to update the data field in the Text case
if data.Type == SendType::Text as i32 {
let data_str = if let Some(mut d) = data.Text {
d.as_object_mut().and_then(|d| d.remove("Response"));
serde_json::to_string(&d)?
} else {
err!("Send data not provided");
};
send.data = data_str;
}
if data.DeletionDate > Utc::now() + Duration::days(31) {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
send.name = data.Name;
send.akey = data.Key;
send.deletion_date = data.DeletionDate.naive_utc();
send.notes = data.Notes;
send.max_access_count = data.MaxAccessCount;
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.disabled = data.Disabled;
// Only change the value if it's present
if let Some(password) = data.Password {
send.set_password(Some(&password));
}
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
Ok(Json(send.to_json()))
}
#[delete("/sends/<id>")]
fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
let send = match Send::find_by_uuid(&id, &conn) {
Some(s) => s,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.delete(&conn)?;
nt.send_user_update(UpdateType::SyncSendDelete, &headers.user);
Ok(())
}
#[put("/sends/<id>/remove-password")]
fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
enforce_disable_send_policy(&headers, &conn)?;
let mut send = match Send::find_by_uuid(&id, &conn) {
Some(s) => s,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.set_password(None);
send.save(&conn)?;
nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
Ok(Json(send.to_json()))
}


@@ -17,11 +17,7 @@ use crate::{
pub use crate::config::CONFIG; pub use crate::config::CONFIG;
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
routes![ routes![generate_authenticator, activate_authenticator, activate_authenticator_put,]
generate_authenticator,
activate_authenticator,
activate_authenticator_put,
]
} }
#[post("/two-factor/get-authenticator", data = "<data>")] #[post("/two-factor/get-authenticator", data = "<data>")]
@@ -141,7 +137,7 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
// The amount of steps back and forward in time // The amount of steps back and forward in time
// Also check if we need to disable time drifted TOTP codes. // Also check if we need to disable time drifted TOTP codes.
// If that is the case, we set the steps to 0 so only the current TOTP is valid. // If that is the case, we set the steps to 0 so only the current TOTP is valid.
let steps: i64 = if CONFIG.authenticator_disable_time_drift() { 0 } else { 1 }; let steps = !CONFIG.authenticator_disable_time_drift() as i64;
for step in -steps..=steps { for step in -steps..=steps {
let time_step = current_timestamp / 30i64 + step; let time_step = current_timestamp / 30i64 + step;
@@ -163,22 +159,11 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
twofactor.save(&conn)?; twofactor.save(&conn)?;
return Ok(()); return Ok(());
} else if generated == totp_code && time_step <= twofactor.last_used as i64 { } else if generated == totp_code && time_step <= twofactor.last_used as i64 {
warn!( warn!("This or a TOTP code within {} steps back and forward has already been used!", steps);
"This or a TOTP code within {} steps back and forward has already been used!", err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
steps
);
err!(format!(
"Invalid TOTP code! Server time: {} IP: {}",
current_time.format("%F %T UTC"),
ip.ip
));
} }
} }
// Else no valide code received, deny access // Else no valide code received, deny access
err!(format!( err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
"Invalid TOTP code! Server time: {} IP: {}",
current_time.format("%F %T UTC"),
ip.ip
));
} }


@@ -12,6 +12,7 @@ use crate::{
DbConn, DbConn,
}, },
error::MapResult, error::MapResult,
util::get_reqwest_client,
CONFIG, CONFIG,
}; };
@@ -59,7 +60,11 @@ impl DuoData {
ik.replace_range(digits.., replaced); ik.replace_range(digits.., replaced);
sk.replace_range(digits.., replaced); sk.replace_range(digits.., replaced);
Self { host, ik, sk } Self {
host,
ik,
sk,
}
} }
} }
@@ -185,9 +190,7 @@ fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbC
} }
fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult { fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> EmptyResult {
const AGENT: &str = "bitwarden_rs:Duo/1.0 (Rust)"; use reqwest::{header, Method};
use reqwest::{blocking::Client, header::*, Method};
use std::str::FromStr; use std::str::FromStr;
// https://duo.com/docs/authapi#api-details // https://duo.com/docs/authapi#api-details
@@ -199,11 +202,13 @@ fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData) -> Em
let m = Method::from_str(method).unwrap_or_default(); let m = Method::from_str(method).unwrap_or_default();
Client::new() let client = get_reqwest_client();
client
.request(m, &url) .request(m, &url)
.basic_auth(username, Some(password)) .basic_auth(username, Some(password))
.header(USER_AGENT, AGENT) .header(header::USER_AGENT, "vaultwarden:Duo/1.0 (Rust)")
.header(DATE, date) .header(header::DATE, date)
.send()? .send()?
.error_for_status()?; .error_for_status()?;


@@ -125,11 +125,7 @@ fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, conn: DbConn) -
let twofactor_data = EmailTokenData::new(data.Email, generated_token); let twofactor_data = EmailTokenData::new(data.Email, generated_token);
// Uses EmailVerificationChallenge as type to show that it's not verified yet. // Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new( let twofactor = TwoFactor::new(user.uuid, TwoFactorType::EmailVerificationChallenge, twofactor_data.to_json());
user.uuid,
TwoFactorType::EmailVerificationChallenge,
twofactor_data.to_json(),
);
twofactor.save(&conn)?; twofactor.save(&conn)?;
mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?; mail::send_token(&twofactor_data.email, &twofactor_data.last_token.map_res("Token is empty")?)?;
@@ -186,7 +182,8 @@ fn email(data: JsonUpcase<EmailData>, headers: Headers, conn: DbConn) -> JsonRes
/// Validate the email code when used as TwoFactor token mechanism /// Validate the email code when used as TwoFactor token mechanism
pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult { pub fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &DbConn) -> EmptyResult {
let mut email_data = EmailTokenData::from_json(&data)?; let mut email_data = EmailTokenData::from_json(&data)?;
let mut twofactor = TwoFactor::find_by_user_and_type(&user_uuid, TwoFactorType::Email as i32, &conn).map_res("Two factor not found")?; let mut twofactor = TwoFactor::find_by_user_and_type(&user_uuid, TwoFactorType::Email as i32, &conn)
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token { let issued_token = match &email_data.last_token {
Some(t) => t, Some(t) => t,
_ => err!("No token available"), _ => err!("No token available"),


@@ -20,13 +20,7 @@ pub mod u2f;
pub mod yubikey; pub mod yubikey;
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
let mut routes = routes![ let mut routes = routes![get_twofactor, get_recover, recover, disable_twofactor, disable_twofactor_put,];
get_twofactor,
get_recover,
recover,
disable_twofactor,
disable_twofactor_put,
];
routes.append(&mut authenticator::routes()); routes.append(&mut authenticator::routes());
routes.append(&mut duo::routes()); routes.append(&mut duo::routes());
@@ -38,15 +32,15 @@ pub fn routes() -> Vec<Route> {
} }
#[get("/two-factor")] #[get("/two-factor")]
fn get_twofactor(headers: Headers, conn: DbConn) -> JsonResult { fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn); let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn);
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect(); let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Ok(Json(json!({ Json(json!({
"Data": twofactors_json, "Data": twofactors_json,
"Object": "list", "Object": "list",
"ContinuationToken": null, "ContinuationToken": null,
}))) }))
} }
#[post("/two-factor/get-recover", data = "<data>")] #[post("/two-factor/get-recover", data = "<data>")]


@@ -28,13 +28,7 @@ static APP_ID: Lazy<String> = Lazy::new(|| format!("{}/app-id.json", &CONFIG.dom
static U2F: Lazy<U2f> = Lazy::new(|| U2f::new(APP_ID.clone())); static U2F: Lazy<U2f> = Lazy::new(|| U2f::new(APP_ID.clone()));
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
routes![ routes![generate_u2f, generate_u2f_challenge, activate_u2f, activate_u2f_put, delete_u2f,]
generate_u2f,
generate_u2f_challenge,
activate_u2f,
activate_u2f_put,
delete_u2f,
]
} }
#[post("/two-factor/get-u2f", data = "<data>")] #[post("/two-factor/get-u2f", data = "<data>")]
@@ -131,12 +125,12 @@ struct RegisterResponseCopy {
pub error_code: Option<NumberOrString>, pub error_code: Option<NumberOrString>,
} }
impl Into<RegisterResponse> for RegisterResponseCopy { impl From<RegisterResponseCopy> for RegisterResponse {
fn into(self) -> RegisterResponse { fn from(r: RegisterResponseCopy) -> RegisterResponse {
RegisterResponse { RegisterResponse {
registration_data: self.registration_data, registration_data: r.registration_data,
version: self.version, version: r.version,
client_data: self.client_data, client_data: r.client_data,
} }
} }
} }
@@ -161,10 +155,7 @@ fn activate_u2f(data: JsonUpcase<EnableU2FData>, headers: Headers, conn: DbConn)
let response: RegisterResponseCopy = serde_json::from_str(&data.DeviceResponse)?; let response: RegisterResponseCopy = serde_json::from_str(&data.DeviceResponse)?;
let error_code = response let error_code = response.error_code.clone().map_or("0".into(), NumberOrString::into_string);
.error_code
.clone()
.map_or("0".into(), NumberOrString::into_string);
if error_code != "0" { if error_code != "0" {
err!("Error registering U2F token") err!("Error registering U2F token")
@@ -300,20 +291,13 @@ fn _old_parse_registrations(registations: &str) -> Vec<Registration> {
let regs: Vec<Value> = serde_json::from_str(registations).expect("Can't parse Registration data"); let regs: Vec<Value> = serde_json::from_str(registations).expect("Can't parse Registration data");
regs.into_iter() regs.into_iter().map(|r| serde_json::from_value(r).unwrap()).map(|Helper(r)| r).collect()
.map(|r| serde_json::from_value(r).unwrap())
.map(|Helper(r)| r)
.collect()
} }
pub fn generate_u2f_login(user_uuid: &str, conn: &DbConn) -> ApiResult<U2fSignRequest> { pub fn generate_u2f_login(user_uuid: &str, conn: &DbConn) -> ApiResult<U2fSignRequest> {
let challenge = _create_u2f_challenge(user_uuid, TwoFactorType::U2fLoginChallenge, conn); let challenge = _create_u2f_challenge(user_uuid, TwoFactorType::U2fLoginChallenge, conn);
let registrations: Vec<_> = get_u2f_registrations(user_uuid, conn)? let registrations: Vec<_> = get_u2f_registrations(user_uuid, conn)?.1.into_iter().map(|r| r.reg).collect();
.1
.into_iter()
.map(|r| r.reg)
.collect();
if registrations.is_empty() { if registrations.is_empty() {
err!("No U2F devices registered") err!("No U2F devices registered")


@@ -11,16 +11,17 @@ use once_cell::sync::Lazy;
use regex::Regex; use regex::Regex;
use reqwest::{blocking::Client, blocking::Response, header, Url}; use reqwest::{blocking::Client, blocking::Response, header, Url};
use rocket::{http::ContentType, http::Cookie, response::Content, Route}; use rocket::{http::ContentType, http::Cookie, response::Content, Route};
use soup::prelude::*;
use crate::{error::Error, util::Cached, CONFIG}; use crate::{
error::Error,
util::{get_reqwest_client_builder, Cached},
CONFIG,
};
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
routes![icon] routes![icon]
} }
const ALLOWED_CHARS: &str = "_-.";
static CLIENT: Lazy<Client> = Lazy::new(|| { static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers // Generate the default headers
let mut default_headers = header::HeaderMap::new(); let mut default_headers = header::HeaderMap::new();
@@ -28,31 +29,47 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en-US,en;q=0.8")); default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en-US,en;q=0.8"));
default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache")); default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache")); default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::ACCEPT, header::HeaderValue::from_static("text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8")); default_headers.insert(
header::ACCEPT,
header::HeaderValue::from_static(
"text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8",
),
);
// Reuse the client between requests // Reuse the client between requests
Client::builder() get_reqwest_client_builder()
.timeout(Duration::from_secs(CONFIG.icon_download_timeout())) .timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(default_headers) .default_headers(default_headers)
.build() .build()
.unwrap() .expect("Failed to build icon client")
}); });
// Build Regex only once since this takes a lot of time. // Build Regex only once since this takes a lot of time.
static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap()); static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap());
static ICON_REL_BLACKLIST: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)mask-icon").unwrap());
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap()); static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user defined Regex to speedup matching the regex. // Special HashMap which holds the user defined Regex to speedup matching the regex.
static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new())); static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new()));
#[get("/<domain>/icon.png")] #[get("/<domain>/icon.png")]
fn icon(domain: String) -> Option<Cached<Content<Vec<u8>>>> { fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
if !is_valid_domain(&domain) { if !is_valid_domain(&domain) {
warn!("Invalid domain: {}", domain); warn!("Invalid domain: {}", domain);
return None; return Cached::ttl(
Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
);
} }
get_icon(&domain).map(|icon| Cached::long(Content(ContentType::new("image", "x-icon"), icon))) match get_icon(&domain) {
Some((icon, icon_type)) => {
Cached::ttl(Content(ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl())
}
_ => Cached::ttl(Content(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()), CONFIG.icon_cache_negttl()),
}
} }
/// Returns if the domain provided is valid or not. /// Returns if the domain provided is valid or not.
@@ -60,6 +77,8 @@ fn icon(domain: String) -> Option<Cached<Content<Vec<u8>>>> {
/// This does some manual checks and makes use of Url to do some basic checking. /// This does some manual checks and makes use of Url to do some basic checking.
/// domains can't be larger then 63 characters (not counting multiple subdomains) according to the RFC's, but we limit the total size to 255. /// domains can't be larger then 63 characters (not counting multiple subdomains) according to the RFC's, but we limit the total size to 255.
fn is_valid_domain(domain: &str) -> bool { fn is_valid_domain(domain: &str) -> bool {
const ALLOWED_CHARS: &str = "_-.";
// If parsing the domain fails using Url, it will not work with reqwest. // If parsing the domain fails using Url, it will not work with reqwest.
if let Err(parse_error) = Url::parse(format!("https://{}", domain).as_str()) { if let Err(parse_error) = Url::parse(format!("https://{}", domain).as_str()) {
debug!("Domain parse error: '{}' - {:?}", domain, parse_error); debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
@@ -70,7 +89,10 @@ fn is_valid_domain(domain: &str) -> bool {
|| domain.starts_with('-') || domain.starts_with('-')
|| domain.ends_with('-') || domain.ends_with('-')
{ {
debug!("Domain validation error: '{}' is either empty, contains '..', starts with an '.', starts or ends with a '-'", domain); debug!(
"Domain validation error: '{}' is either empty, contains '..', starts with an '.', starts or ends with a '-'",
domain
);
return false; return false;
} else if domain.len() > 255 { } else if domain.len() > 255 {
debug!("Domain validation error: '{}' exceeds 255 characters", domain); debug!("Domain validation error: '{}' exceeds 255 characters", domain);
@@ -239,7 +261,7 @@ fn is_domain_blacklisted(domain: &str) -> bool {
is_blacklisted is_blacklisted
} }
fn get_icon(domain: &str) -> Option<Vec<u8>> { fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain); let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);
// Check for expiration of negatively cached copy // Check for expiration of negatively cached copy
@@ -248,7 +270,11 @@ fn get_icon(domain: &str) -> Option<Vec<u8>> {
} }
if let Some(icon) = get_cached_icon(&path) { if let Some(icon) = get_cached_icon(&path) {
return Some(icon); let icon_type = match get_icon_type(&icon) {
Some(x) => x,
_ => "x-icon",
};
return Some((icon, icon_type.to_string()));
} }
if CONFIG.disable_icon_download() { if CONFIG.disable_icon_download() {
@@ -257,9 +283,9 @@ fn get_icon(domain: &str) -> Option<Vec<u8>> {
// Get the icon, or None in case of error // Get the icon, or None in case of error
match download_icon(&domain) { match download_icon(&domain) {
Ok(icon) => { Ok((icon, icon_type)) => {
save_icon(&path, &icon); save_icon(&path, &icon);
Some(icon) Some((icon, icon_type.unwrap_or("x-icon").to_string()))
} }
Err(e) => { Err(e) => {
error!("Error downloading icon: {:?}", e); error!("Error downloading icon: {:?}", e);
@@ -320,7 +346,6 @@ fn icon_is_expired(path: &str) -> bool {
expired.unwrap_or(true) expired.unwrap_or(true)
} }
#[derive(Debug)]
struct Icon { struct Icon {
priority: u8, priority: u8,
href: String, href: String,
@@ -328,7 +353,54 @@ struct Icon {
impl Icon { impl Icon {
const fn new(priority: u8, href: String) -> Self { const fn new(priority: u8, href: String) -> Self {
Self { href, priority } Self {
href,
priority,
}
}
}
fn get_favicons_node(node: &std::rc::Rc<markup5ever_rcdom::Node>, icons: &mut Vec<Icon>, url: &Url) {
if let markup5ever_rcdom::NodeData::Element {
name,
attrs,
..
} = &node.data
{
if name.local.as_ref() == "link" {
let mut has_rel = false;
let mut href = None;
let mut sizes = None;
let attrs = attrs.borrow();
for attr in attrs.iter() {
let attr_name = attr.name.local.as_ref();
let attr_value = attr.value.as_ref();
if attr_name == "rel" && ICON_REL_REGEX.is_match(attr_value) && !ICON_REL_BLACKLIST.is_match(attr_value)
{
has_rel = true;
} else if attr_name == "href" {
href = Some(attr_value);
} else if attr_name == "sizes" {
sizes = Some(attr_value);
}
}
if has_rel {
if let Some(inner_href) = href {
if let Ok(full_href) = url.join(&inner_href).map(|h| h.into_string()) {
let priority = get_icon_priority(&full_href, sizes);
icons.push(Icon::new(priority, full_href));
}
}
}
}
}
// TODO: Might want to limit the recursion depth?
for child in node.children.borrow().iter() {
get_favicons_node(child, icons, url);
} }
} }
@@ -431,30 +503,14 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// 512KB should be more than enough for the HTML, though as we only really need // 512KB should be more than enough for the HTML, though as we only really need
// the HTML header, it could potentially be reduced even further // the HTML header, it could potentially be reduced even further
let limited_reader = content.take(512 * 1024); let mut limited_reader = content.take(512 * 1024);
let soup = Soup::from_reader(limited_reader)?; use html5ever::tendril::TendrilSink;
// Search for and filter let dom = html5ever::parse_document(markup5ever_rcdom::RcDom::default(), Default::default())
let favicons = soup .from_utf8()
.tag("link") .read_from(&mut limited_reader)?;
.attr("rel", ICON_REL_REGEX.clone()) // Only use icon rels
.attr_name("href") // Make sure there is a href
.find_all();
// Loop through all the found icons and determine it's priority get_favicons_node(&dom.document, &mut iconlist, &url);
for favicon in favicons {
let sizes = favicon.get("sizes");
let href = favicon.get("href").unwrap();
// Skip invalid url's
let full_href = match url.join(&href) {
Ok(h) => h.into_string(),
_ => continue,
};
let priority = get_icon_priority(&full_href, sizes);
iconlist.push(Icon::new(priority, full_href))
}
} else { } else {
// Add the default favicon.ico to the list with just the given domain // Add the default favicon.ico to the list with just the given domain
iconlist.push(Icon::new(35, format!("{}/favicon.ico", ssldomain))); iconlist.push(Icon::new(35, format!("{}/favicon.ico", ssldomain)));
@@ -465,10 +521,10 @@ fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
iconlist.sort_by_key(|x| x.priority); iconlist.sort_by_key(|x| x.priority);
// There always is an icon in the list, so no need to check if it exists, and just return the first one // There always is an icon in the list, so no need to check if it exists, and just return the first one
Ok(IconUrlResult{ Ok(IconUrlResult {
iconlist, iconlist,
cookies: cookie_str, cookies: cookie_str,
referer referer,
}) })
} }
@@ -489,9 +545,7 @@ fn get_page_with_cookies(url: &str, cookie_str: &str, referer: &str) -> Result<R
client = client.header("Referer", referer) client = client.header("Referer", referer)
} }
client.send()? client.send()?.error_for_status().map_err(Into::into)
.error_for_status()
.map_err(Into::into)
} }
/// Returns a Integer with the priority of the type of the icon which to prefer. /// Returns a Integer with the priority of the type of the icon which to prefer.
@@ -506,7 +560,7 @@ fn get_page_with_cookies(url: &str, cookie_str: &str, referer: &str) -> Result<R
/// priority1 = get_icon_priority("http://example.com/path/to/a/favicon.png", "32x32"); /// priority1 = get_icon_priority("http://example.com/path/to/a/favicon.png", "32x32");
/// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", ""); /// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", "");
/// ``` /// ```
fn get_icon_priority(href: &str, sizes: Option<String>) -> u8 { fn get_icon_priority(href: &str, sizes: Option<&str>) -> u8 {
// Check if there is a dimension set // Check if there is a dimension set
let (width, height) = parse_sizes(sizes); let (width, height) = parse_sizes(sizes);
@@ -554,7 +608,7 @@ fn get_icon_priority(href: &str, sizes: Option<String>) -> u8 {
/// let (width, height) = parse_sizes("x128x128"); // (128, 128) /// let (width, height) = parse_sizes("x128x128"); // (128, 128)
/// let (width, height) = parse_sizes("32"); // (0, 0) /// let (width, height) = parse_sizes("32"); // (0, 0)
/// ``` /// ```
fn parse_sizes(sizes: Option<String>) -> (u16, u16) { fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
let mut width: u16 = 0; let mut width: u16 = 0;
let mut height: u16 = 0; let mut height: u16 = 0;
@@ -573,7 +627,7 @@ fn parse_sizes(sizes: Option<String>) -> (u16, u16) {
(width, height) (width, height)
} }
fn download_icon(domain: &str) -> Result<Vec<u8>, Error> { fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
if is_domain_blacklisted(domain) { if is_domain_blacklisted(domain) {
err!("Domain is blacklisted", domain) err!("Domain is blacklisted", domain)
} }
@@ -581,6 +635,7 @@ fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
let icon_result = get_icon_url(&domain)?; let icon_result = get_icon_url(&domain)?;
let mut buffer = Vec::new(); let mut buffer = Vec::new();
let mut icon_type: Option<&str> = None;
use data_url::DataUrl; use data_url::DataUrl;
@@ -592,29 +647,43 @@ fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
Ok((body, _fragment)) => { Ok((body, _fragment)) => {
// Also check if the size is atleast 67 bytes, which seems to be the smallest png i could create // Also check if the size is atleast 67 bytes, which seems to be the smallest png i could create
if body.len() >= 67 { if body.len() >= 67 {
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&body);
if icon_type.is_none() {
debug!("Icon from {} data:image uri, is not a valid image type", domain);
continue;
}
info!("Extracted icon from data:image uri for {}", domain);
buffer = body; buffer = body;
break; break;
} }
} }
_ => warn!("data uri is invalid"), _ => warn!("Extracted icon from data:image uri is invalid"),
}; };
} else { } else {
match get_page_with_cookies(&icon.href, &icon_result.cookies, &icon_result.referer) { match get_page_with_cookies(&icon.href, &icon_result.cookies, &icon_result.referer) {
Ok(mut res) => { Ok(mut res) => {
info!("Downloaded icon from {}", icon.href);
res.copy_to(&mut buffer)?; res.copy_to(&mut buffer)?;
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
continue;
}
info!("Downloaded icon from {}", icon.href);
break; break;
}, }
_ => warn!("Download failed for {}", icon.href), _ => warn!("Download failed for {}", icon.href),
}; };
} }
} }
if buffer.is_empty() { if buffer.is_empty() {
err!("Empty response") err!("Empty response downloading icon")
} }
Ok(buffer) Ok((buffer, icon_type))
} }
fn save_icon(path: &str, icon: &[u8]) { fn save_icon(path: &str, icon: &[u8]) {
@@ -626,7 +695,18 @@ fn save_icon(path: &str, icon: &[u8]) {
create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache"); create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache");
} }
Err(e) => { Err(e) => {
info!("Icon save error: {:?}", e); warn!("Icon save error: {:?}", e);
} }
} }
} }
fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
match bytes {
[137, 80, 78, 71, ..] => Some("png"),
[0, 0, 1, 0, ..] => Some("x-icon"),
[82, 73, 70, 70, ..] => Some("webp"),
[255, 216, 255, ..] => Some("jpeg"),
[66, 77, ..] => Some("bmp"),
_ => None,
}
}
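
As a quick aside (an illustrative sketch, not part of this changeset): the new get_icon_type helper dispatches purely on well-known magic bytes, so the caller only needs the first few bytes of a cached or downloaded icon to pick a sensible Content-Type. A minimal test of that behaviour might look like this:

#[cfg(test)]
mod get_icon_type_sketch {
    use super::get_icon_type;

    #[test]
    fn detects_known_image_signatures() {
        // PNG files start with 0x89 'P' 'N' 'G' (137, 80, 78, 71).
        assert_eq!(get_icon_type(&[137, 80, 78, 71, 13, 10, 26, 10]), Some("png"));
        // Classic .ico files start with 00 00 01 00.
        assert_eq!(get_icon_type(&[0, 0, 1, 0, 1, 0]), Some("x-icon"));
        // Anything without a recognised signature yields None, and the caller skips it.
        assert_eq!(get_icon_type(b"<!DOCTYPE html>"), None);
    }
}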


@@ -72,7 +72,8 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
"Kdf": user.client_kdf_type, "Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter, "KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing "ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": "api offline_access" "scope": "api offline_access",
"unofficialServer": true,
}))) })))
} }
@@ -87,34 +88,28 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
let username = data.username.as_ref().unwrap(); let username = data.username.as_ref().unwrap();
let user = match User::find_by_mail(username, &conn) { let user = match User::find_by_mail(username, &conn) {
Some(user) => user, Some(user) => user,
None => err!( None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
"Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username)
),
}; };
// Check password // Check password
let password = data.password.as_ref().unwrap(); let password = data.password.as_ref().unwrap();
if !user.check_valid_password(password) { if !user.check_valid_password(password) {
err!( err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username))
"Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username)
)
} }
// Check if the user is disabled // Check if the user is disabled
if !user.enabled { if !user.enabled {
err!( err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username)
)
} }
let now = Local::now(); let now = Local::now();
if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() { if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
let now = now.naive_utc(); let now = now.naive_utc();
if user.last_verifying_at.is_none() || now.signed_duration_since(user.last_verifying_at.unwrap()).num_seconds() > CONFIG.signups_verify_resend_time() as i64 { if user.last_verifying_at.is_none()
|| now.signed_duration_since(user.last_verifying_at.unwrap()).num_seconds()
> CONFIG.signups_verify_resend_time() as i64
{
let resend_limit = CONFIG.signups_verify_resend_limit() as i32; let resend_limit = CONFIG.signups_verify_resend_limit() as i32;
if resend_limit == 0 || user.login_verify_count < resend_limit { if resend_limit == 0 || user.login_verify_count < resend_limit {
// We want to send another email verification if we require signups to verify // We want to send another email verification if we require signups to verify
@@ -134,10 +129,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
} }
// We still want the login to fail until they actually verified the email address // We still want the login to fail until they actually verified the email address
err!( err!("Please verify your email before trying again.", format!("IP: {}. Username: {}.", ip.ip, username))
"Please verify your email before trying again.",
format!("IP: {}. Username: {}.", ip.ip, username)
)
} }
let (mut device, new_device) = get_device(&data, &conn, &user); let (mut device, new_device) = get_device(&data, &conn, &user);
@@ -172,7 +164,8 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
"Kdf": user.client_kdf_type, "Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter, "KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false,// TODO: Same as above "ResetMasterPassword": false,// TODO: Same as above
"scope": "api offline_access" "scope": "api offline_access",
"unofficialServer": true,
}); });
if let Some(token) = twofactor_token { if let Some(token) = twofactor_token {
@@ -234,9 +227,7 @@ fn twofactor_auth(
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA token not provided"), None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA token not provided"),
}; };
let selected_twofactor = twofactors let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);
.into_iter()
.find(|tf| tf.atype == selected_id && tf.enabled);
use crate::api::core::two_factor as _tf; use crate::api::core::two_factor as _tf;
use crate::crypto::ct_eq; use crate::crypto::ct_eq;
@@ -245,18 +236,26 @@ fn twofactor_auth(
let mut remember = data.two_factor_remember.unwrap_or(0); let mut remember = data.two_factor_remember.unwrap_or(0);
match TwoFactorType::from_i32(selected_id) { match TwoFactorType::from_i32(selected_id) {
Some(TwoFactorType::Authenticator) => _tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?, Some(TwoFactorType::Authenticator) => {
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn)?
}
Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?, Some(TwoFactorType::U2f) => _tf::u2f::validate_u2f_login(user_uuid, twofactor_code, conn)?,
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?, Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?)?,
Some(TwoFactorType::Duo) => _tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?, Some(TwoFactorType::Duo) => {
Some(TwoFactorType::Email) => _tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn)?, _tf::duo::validate_duo_login(data.username.as_ref().unwrap(), twofactor_code, conn)?
}
Some(TwoFactorType::Email) => {
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn)?
}
Some(TwoFactorType::Remember) => { Some(TwoFactorType::Remember) => {
match device.twofactor_remember { match device.twofactor_remember {
Some(ref code) if !CONFIG.disable_2fa_remember() && ct_eq(code, twofactor_code) => { Some(ref code) if !CONFIG.disable_2fa_remember() && ct_eq(code, twofactor_code) => {
remember = 1; // Make sure we also return the token here, otherwise it will only remember the first time remember = 1; // Make sure we also return the token here, otherwise it will only remember the first time
} }
_ => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA Remember token not provided"), _ => {
err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn)?, "2FA Remember token not provided")
}
} }
} }
_ => err!("Invalid two factor provider"), _ => err!("Invalid two factor provider"),


@@ -10,6 +10,8 @@ use serde_json::Value;
pub use crate::api::{ pub use crate::api::{
admin::routes as admin_routes, admin::routes as admin_routes,
core::purge_sends,
core::purge_trashed_ciphers,
core::routes as core_routes, core::routes as core_routes,
icons::routes as icons_routes, icons::routes as icons_routes,
identity::routes as identity_routes, identity::routes as identity_routes,
@@ -53,9 +55,9 @@ impl NumberOrString {
use std::num::ParseIntError as PIE; use std::num::ParseIntError as PIE;
match self { match self {
NumberOrString::Number(n) => Ok(n), NumberOrString::Number(n) => Ok(n),
NumberOrString::String(s) => s NumberOrString::String(s) => {
.parse() s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
.map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string())), }
} }
} }
} }


@@ -4,12 +4,7 @@ use rocket::Route;
use rocket_contrib::json::Json; use rocket_contrib::json::Json;
use serde_json::Value as JsonValue; use serde_json::Value as JsonValue;
use crate::{ use crate::{api::EmptyResult, auth::Headers, db::DbConn, Error, CONFIG};
api::{EmptyResult, JsonResult},
auth::Headers,
db::DbConn,
Error, CONFIG,
};
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
routes![negotiate, websockets_err] routes![negotiate, websockets_err]
@@ -19,19 +14,23 @@ static SHOW_WEBSOCKETS_MSG: AtomicBool = AtomicBool::new(true);
#[get("/hub")] #[get("/hub")]
fn websockets_err() -> EmptyResult { fn websockets_err() -> EmptyResult {
if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok() { if CONFIG.websocket_enabled()
err!(" && SHOW_WEBSOCKETS_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok()
{
err!(
"
########################################################### ###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work. '/notifications/hub' should be proxied to the websocket server or notifications won't work.
Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false. Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false.
###########################################################################################\n") ###########################################################################################\n"
)
} else { } else {
Err(Error::empty()) Err(Error::empty())
} }
} }
#[post("/hub/negotiate")] #[post("/hub/negotiate")]
fn negotiate(_headers: Headers, _conn: DbConn) -> JsonResult { fn negotiate(_headers: Headers, _conn: DbConn) -> Json<JsonValue> {
use crate::crypto; use crate::crypto;
use data_encoding::BASE64URL; use data_encoding::BASE64URL;
@@ -47,10 +46,10 @@ fn negotiate(_headers: Headers, _conn: DbConn) -> JsonResult {
// Rocket SSE support: https://github.com/SergioBenitez/Rocket/issues/33 // Rocket SSE support: https://github.com/SergioBenitez/Rocket/issues/33
// {"transport":"ServerSentEvents", "transferFormats":["Text"]}, // {"transport":"ServerSentEvents", "transferFormats":["Text"]},
// {"transport":"LongPolling", "transferFormats":["Text","Binary"]} // {"transport":"LongPolling", "transferFormats":["Text","Binary"]}
Ok(Json(json!({ Json(json!({
"connectionId": conn_id, "connectionId": conn_id,
"availableTransports": available_transports "availableTransports": available_transports
}))) }))
} }
// //
@@ -120,7 +119,7 @@ fn convert_option<T: Into<Value>>(option: Option<T>) -> Value {
} }
// Server WebSocket handler // Server WebSocket handler
pub struct WSHandler { pub struct WsHandler {
out: Sender, out: Sender,
user_uuid: Option<String>, user_uuid: Option<String>,
users: WebSocketUsers, users: WebSocketUsers,
@@ -140,7 +139,7 @@ const PING: Token = Token(1);
const ACCESS_TOKEN_KEY: &str = "access_token="; const ACCESS_TOKEN_KEY: &str = "access_token=";
impl WSHandler { impl WsHandler {
fn err(&self, msg: &'static str) -> ws::Result<()> { fn err(&self, msg: &'static str) -> ws::Result<()> {
self.out.close(ws::CloseCode::Invalid)?; self.out.close(ws::CloseCode::Invalid)?;
@@ -166,8 +165,8 @@ impl WSHandler {
if let Some(params) = path.split('?').nth(1) { if let Some(params) = path.split('?').nth(1) {
let params_iter = params.split('&').take(1); let params_iter = params.split('&').take(1);
for val in params_iter { for val in params_iter {
if val.starts_with(ACCESS_TOKEN_KEY) { if let Some(stripped) = val.strip_prefix(ACCESS_TOKEN_KEY) {
return Some(val[ACCESS_TOKEN_KEY.len()..].into()); return Some(stripped.into());
} }
} }
}; };
@@ -176,7 +175,7 @@ impl WSHandler {
} }
} }
impl Handler for WSHandler { impl Handler for WsHandler {
fn on_open(&mut self, hs: Handshake) -> ws::Result<()> { fn on_open(&mut self, hs: Handshake) -> ws::Result<()> {
// Path == "/notifications/hub?id=<id>==&access_token=<access_token>" // Path == "/notifications/hub?id=<id>==&access_token=<access_token>"
// //
@@ -204,9 +203,7 @@ impl Handler for WSHandler {
let handler_insert = self.out.clone(); let handler_insert = self.out.clone();
let handler_update = self.out.clone(); let handler_update = self.out.clone();
self.users self.users.map.upsert(user_uuid, || vec![handler_insert], |ref mut v| v.push(handler_update));
.map
.upsert(user_uuid, || vec![handler_insert], |ref mut v| v.push(handler_update));
// Schedule a ping to keep the connection alive // Schedule a ping to keep the connection alive
self.out.timeout(PING_MS, PING) self.out.timeout(PING_MS, PING)
@@ -216,7 +213,11 @@ impl Handler for WSHandler {
if let Message::Text(text) = msg.clone() { if let Message::Text(text) = msg.clone() {
let json = &text[..text.len() - 1]; // Remove last char let json = &text[..text.len() - 1]; // Remove last char
if let Ok(InitialMessage { protocol, version }) = from_str::<InitialMessage>(json) { if let Ok(InitialMessage {
protocol,
version,
}) = from_str::<InitialMessage>(json)
{
if &protocol == "messagepack" && version == 1 { if &protocol == "messagepack" && version == 1 {
return self.out.send(&INITIAL_RESPONSE[..]); // Respond to initial message return self.out.send(&INITIAL_RESPONSE[..]); // Respond to initial message
} }
@@ -240,13 +241,13 @@ impl Handler for WSHandler {
} }
} }
struct WSFactory { struct WsFactory {
pub users: WebSocketUsers, pub users: WebSocketUsers,
} }
impl WSFactory { impl WsFactory {
pub fn init() -> Self { pub fn init() -> Self {
WSFactory { WsFactory {
users: WebSocketUsers { users: WebSocketUsers {
map: Arc::new(CHashMap::new()), map: Arc::new(CHashMap::new()),
}, },
@@ -254,11 +255,11 @@ impl WSFactory {
} }
} }
impl Factory for WSFactory { impl Factory for WsFactory {
type Handler = WSHandler; type Handler = WsHandler;
fn connection_made(&mut self, out: Sender) -> Self::Handler { fn connection_made(&mut self, out: Sender) -> Self::Handler {
WSHandler { WsHandler {
out, out,
user_uuid: None, user_uuid: None,
users: self.users.clone(), users: self.users.clone(),
@@ -295,10 +296,7 @@ impl WebSocketUsers {
// NOTE: The last modified date needs to be updated before calling these methods // NOTE: The last modified date needs to be updated before calling these methods
pub fn send_user_update(&self, ut: UpdateType, user: &User) { pub fn send_user_update(&self, ut: UpdateType, user: &User) {
let data = create_update( let data = create_update(
vec![ vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
("UserId".into(), user.uuid.clone().into()),
("Date".into(), serialize_date(user.updated_at)),
],
ut, ut,
); );
@@ -394,6 +392,10 @@ pub enum UpdateType {
LogOut = 11, LogOut = 11,
SyncSendCreate = 12,
SyncSendUpdate = 13,
SyncSendDelete = 14,
None = 100, None = 100,
} }
@@ -401,15 +403,17 @@ use rocket::State;
pub type Notify<'a> = State<'a, WebSocketUsers>; pub type Notify<'a> = State<'a, WebSocketUsers>;
pub fn start_notification_server() -> WebSocketUsers { pub fn start_notification_server() -> WebSocketUsers {
let factory = WSFactory::init(); let factory = WsFactory::init();
let users = factory.users.clone(); let users = factory.users.clone();
if CONFIG.websocket_enabled() { if CONFIG.websocket_enabled() {
thread::spawn(move || { thread::spawn(move || {
let mut settings = ws::Settings::default(); let settings = ws::Settings {
settings.max_connections = 500; max_connections: 500,
settings.queue_size = 2; queue_size: 2,
settings.panic_on_internal = false; panic_on_internal: false,
..Default::default()
};
ws::Builder::new() ws::Builder::new()
.with_settings(settings) .with_settings(settings)


@@ -10,7 +10,7 @@ pub fn routes() -> Vec<Route> {
// If addding more routes here, consider also adding them to // If addding more routes here, consider also adding them to
// crate::utils::LOGGED_ROUTES to make sure they appear in the log // crate::utils::LOGGED_ROUTES to make sure they appear in the log
if CONFIG.web_vault_enabled() { if CONFIG.web_vault_enabled() {
routes![web_index, app_id, web_files, attachments, alive, static_files] routes![web_index, app_id, web_files, attachments, sends, alive, static_files]
} else { } else {
routes![attachments, alive, static_files] routes![attachments, alive, static_files]
} }
@@ -60,6 +60,11 @@ fn attachments(uuid: String, file: PathBuf) -> Option<NamedFile> {
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file)).ok() NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file)).ok()
} }
#[get("/sends/<send_id>/<file_id>")]
fn sends(send_id: String, file_id: String) -> Option<NamedFile> {
NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok()
}
#[get("/alive")] #[get("/alive")]
fn alive() -> Json<String> { fn alive() -> Json<String> {
use crate::util::format_date; use crate::util::format_date;
@@ -78,12 +83,15 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
"hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))), "hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"bootstrap.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))), "bootstrap.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js"))), "bootstrap-native.js" => {
"md5.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/md5.js"))), Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js")))
}
"identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))), "identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))), "datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))), "datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.5.1.slim.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.5.1.slim.js"))), "jquery-3.5.1.slim.js" => {
Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.5.1.slim.js")))
}
_ => err!(format!("Static file not found: {}", filename)), _ => err!(format!("Static file not found: {}", filename)),
} }
} }


@@ -58,28 +58,28 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
.map_res("Error decoding JWT") .map_res("Error decoding JWT")
} }
pub fn decode_login(token: &str) -> Result<LoginJWTClaims, Error> { pub fn decode_login(token: &str) -> Result<LoginJwtClaims, Error> {
decode_jwt(token, JWT_LOGIN_ISSUER.to_string()) decode_jwt(token, JWT_LOGIN_ISSUER.to_string())
} }
pub fn decode_invite(token: &str) -> Result<InviteJWTClaims, Error> { pub fn decode_invite(token: &str) -> Result<InviteJwtClaims, Error> {
decode_jwt(token, JWT_INVITE_ISSUER.to_string()) decode_jwt(token, JWT_INVITE_ISSUER.to_string())
} }
pub fn decode_delete(token: &str) -> Result<DeleteJWTClaims, Error> { pub fn decode_delete(token: &str) -> Result<DeleteJwtClaims, Error> {
decode_jwt(token, JWT_DELETE_ISSUER.to_string()) decode_jwt(token, JWT_DELETE_ISSUER.to_string())
} }
pub fn decode_verify_email(token: &str) -> Result<VerifyEmailJWTClaims, Error> { pub fn decode_verify_email(token: &str) -> Result<VerifyEmailJwtClaims, Error> {
decode_jwt(token, JWT_VERIFYEMAIL_ISSUER.to_string()) decode_jwt(token, JWT_VERIFYEMAIL_ISSUER.to_string())
} }
pub fn decode_admin(token: &str) -> Result<AdminJWTClaims, Error> { pub fn decode_admin(token: &str) -> Result<AdminJwtClaims, Error> {
decode_jwt(token, JWT_ADMIN_ISSUER.to_string()) decode_jwt(token, JWT_ADMIN_ISSUER.to_string())
} }
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct LoginJWTClaims { pub struct LoginJwtClaims {
// Not before // Not before
pub nbf: i64, pub nbf: i64,
// Expiration time // Expiration time
@@ -110,7 +110,7 @@ pub struct LoginJWTClaims {
} }
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct InviteJWTClaims { pub struct InviteJwtClaims {
// Not before // Not before
pub nbf: i64, pub nbf: i64,
// Expiration time // Expiration time
@@ -132,9 +132,9 @@ pub fn generate_invite_claims(
org_id: Option<String>, org_id: Option<String>,
user_org_id: Option<String>, user_org_id: Option<String>,
invited_by_email: Option<String>, invited_by_email: Option<String>,
) -> InviteJWTClaims { ) -> InviteJwtClaims {
let time_now = Utc::now().naive_utc(); let time_now = Utc::now().naive_utc();
InviteJWTClaims { InviteJwtClaims {
nbf: time_now.timestamp(), nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(), exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_INVITE_ISSUER.to_string(), iss: JWT_INVITE_ISSUER.to_string(),
@@ -147,7 +147,7 @@ pub fn generate_invite_claims(
} }
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct DeleteJWTClaims { pub struct DeleteJwtClaims {
// Not before // Not before
pub nbf: i64, pub nbf: i64,
// Expiration time // Expiration time
@@ -158,9 +158,9 @@ pub struct DeleteJWTClaims {
pub sub: String, pub sub: String,
} }
pub fn generate_delete_claims(uuid: String) -> DeleteJWTClaims { pub fn generate_delete_claims(uuid: String) -> DeleteJwtClaims {
let time_now = Utc::now().naive_utc(); let time_now = Utc::now().naive_utc();
DeleteJWTClaims { DeleteJwtClaims {
nbf: time_now.timestamp(), nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(), exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_DELETE_ISSUER.to_string(), iss: JWT_DELETE_ISSUER.to_string(),
@@ -169,7 +169,7 @@ pub fn generate_delete_claims(uuid: String) -> DeleteJWTClaims {
} }
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct VerifyEmailJWTClaims { pub struct VerifyEmailJwtClaims {
// Not before // Not before
pub nbf: i64, pub nbf: i64,
// Expiration time // Expiration time
@@ -180,9 +180,9 @@ pub struct VerifyEmailJWTClaims {
pub sub: String, pub sub: String,
} }
pub fn generate_verify_email_claims(uuid: String) -> DeleteJWTClaims { pub fn generate_verify_email_claims(uuid: String) -> DeleteJwtClaims {
let time_now = Utc::now().naive_utc(); let time_now = Utc::now().naive_utc();
DeleteJWTClaims { DeleteJwtClaims {
nbf: time_now.timestamp(), nbf: time_now.timestamp(),
exp: (time_now + Duration::days(5)).timestamp(), exp: (time_now + Duration::days(5)).timestamp(),
iss: JWT_VERIFYEMAIL_ISSUER.to_string(), iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
@@ -191,7 +191,7 @@ pub fn generate_verify_email_claims(uuid: String) -> DeleteJWTClaims {
} }
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct AdminJWTClaims { pub struct AdminJwtClaims {
// Not before // Not before
pub nbf: i64, pub nbf: i64,
// Expiration time // Expiration time
@@ -202,9 +202,9 @@ pub struct AdminJWTClaims {
pub sub: String, pub sub: String,
} }
pub fn generate_admin_claims() -> AdminJWTClaims { pub fn generate_admin_claims() -> AdminJwtClaims {
let time_now = Utc::now().naive_utc(); let time_now = Utc::now().naive_utc();
AdminJWTClaims { AdminJwtClaims {
nbf: time_now.timestamp(), nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(20)).timestamp(), exp: (time_now + Duration::minutes(20)).timestamp(),
iss: JWT_ADMIN_ISSUER.to_string(), iss: JWT_ADMIN_ISSUER.to_string(),
@@ -222,13 +222,11 @@ use crate::db::{
DbConn, DbConn,
}; };
pub struct Headers { pub struct Host {
pub host: String, pub host: String,
pub device: Device,
pub user: User,
} }
impl<'a, 'r> FromRequest<'a, 'r> for Headers { impl<'a, 'r> FromRequest<'a, 'r> for Host {
type Error = &'static str; type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> { fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
@@ -262,6 +260,30 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
format!("{}://{}", protocol, host) format!("{}://{}", protocol, host)
}; };
Outcome::Success(Host {
host,
})
}
}
pub struct Headers {
pub host: String,
pub device: Device,
pub user: User,
}
impl<'a, 'r> FromRequest<'a, 'r> for Headers {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
let host = match Host::from_request(request) {
Outcome::Forward(_) => return Outcome::Forward(()),
Outcome::Failure(f) => return Outcome::Failure(f),
Outcome::Success(host) => host.host,
};
// Get access_token // Get access_token
let access_token: &str = match headers.get_one("Authorization") { let access_token: &str = match headers.get_one("Authorization") {
Some(a) => match a.rsplit("Bearer ").next() { Some(a) => match a.rsplit("Bearer ").next() {
@@ -296,10 +318,8 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
}; };
if user.security_stamp != claims.sstamp { if user.security_stamp != claims.sstamp {
if let Some(stamp_exception) = user if let Some(stamp_exception) =
.stamp_exception user.stamp_exception.as_deref().and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
.as_deref()
.and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
{ {
let current_route = match request.route().and_then(|r| r.name) { let current_route = match request.route().and_then(|r| r.name) {
Some(name) => name, Some(name) => name,
@@ -317,7 +337,11 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
} }
} }
Outcome::Success(Headers { host, device, user }) Outcome::Success(Headers {
host,
device,
user,
})
} }
} }
@@ -429,12 +453,12 @@ impl<'a, 'r> FromRequest<'a, 'r> for AdminHeaders {
} }
} }
impl Into<Headers> for AdminHeaders { impl From<AdminHeaders> for Headers {
fn into(self) -> Headers { fn from(h: AdminHeaders) -> Headers {
Headers { Headers {
host: self.host, host: h.host,
device: self.device, device: h.device,
user: self.user, user: h.user,
} }
} }
} }
@@ -485,7 +509,11 @@ impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
}; };
if !headers.org_user.has_full_access() { if !headers.org_user.has_full_access() {
match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn) { match CollectionUser::find_by_collection_and_user(
&col_id,
&headers.org_user.user_uuid,
&conn,
) {
Some(_) => (), Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"), None => err_handler!("The current user isn't a manager for this collection"),
} }
@@ -508,12 +536,12 @@ impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
} }
} }
impl Into<Headers> for ManagerHeaders { impl From<ManagerHeaders> for Headers {
fn into(self) -> Headers { fn from(h: ManagerHeaders) -> Headers {
Headers { Headers {
host: self.host, host: h.host,
device: self.device, device: h.device,
user: self.user, user: h.user,
} }
} }
} }
@@ -550,12 +578,12 @@ impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeadersLoose {
} }
} }
impl Into<Headers> for ManagerHeadersLoose { impl From<ManagerHeadersLoose> for Headers {
fn into(self) -> Headers { fn from(h: ManagerHeadersLoose) -> Headers {
Headers { Headers {
host: self.host, host: h.host,
device: self.device, device: h.device,
user: self.user, user: h.user,
} }
} }
} }
@@ -615,10 +643,10 @@ impl<'a, 'r> FromRequest<'a, 'r> for ClientIp {
None None
}; };
let ip = ip let ip = ip.or_else(|| req.remote().map(|r| r.ip())).unwrap_or_else(|| "0.0.0.0".parse().unwrap());
.or_else(|| req.remote().map(|r| r.ip()))
.unwrap_or_else(|| "0.0.0.0".parse().unwrap());
Outcome::Success(ClientIp { ip }) Outcome::Success(ClientIp {
ip,
})
} }
} }


@@ -299,6 +299,8 @@ make_config! {
icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache"); icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
/// Attachments folder /// Attachments folder
attachments_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "attachments"); attachments_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "attachments");
/// Sends folder
sends_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "sends");
/// Templates folder /// Templates folder
templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates"); templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates");
/// Session JWT key /// Session JWT key
@@ -314,6 +316,17 @@ make_config! {
/// Websocket port /// Websocket port
websocket_port: u16, false, def, 3012; websocket_port: u16, false, def, 3012;
}, },
jobs {
/// Job scheduler poll interval |> How often the job scheduler thread checks for jobs to run.
/// Set to 0 to globally disable scheduled jobs.
job_poll_interval_ms: u64, false, def, 30_000;
/// Send purge schedule |> Cron schedule of the job that checks for Sends past their deletion date.
/// Defaults to hourly. Set blank to disable this job.
send_purge_schedule: String, false, def, "0 5 * * * *".to_string();
/// Trash purge schedule |> Cron schedule of the job that checks for trashed items to delete permanently.
/// Defaults to daily. Set blank to disable this job.
trash_purge_schedule: String, false, def, "0 5 0 * * *".to_string();
},
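The two *_SCHEDULE values use the six-field cron syntax (seconds first) of the cron/job_scheduler crates: "0 5 * * * *" fires at five minutes past every hour, and "0 5 0 * * *" fires daily at 00:05. A hedged sketch of the kind of scheduler loop these settings drive, assuming the job_scheduler crate; everything except the schedule strings and the 30 000 ms poll interval is illustrative:
use job_scheduler::{Job, JobScheduler};
use std::{thread, time::Duration};

fn run_scheduled_jobs() {
    let mut sched = JobScheduler::new();

    // SEND_PURGE_SCHEDULE: sec min hour day month weekday -> 5 minutes past every hour
    sched.add(Job::new("0 5 * * * *".parse().unwrap(), || {
        println!("purging Sends past their deletion date");
    }));

    // TRASH_PURGE_SCHEDULE -> daily at 00:05
    sched.add(Job::new("0 5 0 * * *".parse().unwrap(), || {
        println!("purging trashed ciphers older than TRASH_AUTO_DELETE_DAYS");
    }));

    loop {
        sched.tick();
        // JOB_POLL_INTERVAL_MS
        thread::sleep(Duration::from_millis(30_000));
    }
}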
/// General settings /// General settings
settings { settings {
@@ -337,11 +350,16 @@ make_config! {
/// Per-organization attachment limit (KB) |> Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more /// Per-organization attachment limit (KB) |> Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
org_attachment_limit: i64, true, option; org_attachment_limit: i64, true, option;
/// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item.
/// If unset, trashed items are not auto-deleted. This setting applies globally, so make
/// sure to inform all users of any changes to this setting.
trash_auto_delete_days: i64, true, option;
/// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from /// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from
/// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0, /// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
/// otherwise it will delete them and they won't be downloaded again. /// otherwise it will delete them and they won't be downloaded again.
disable_icon_download: bool, true, def, false; disable_icon_download: bool, true, def, false;
/// Allow new signups |> Controls whether new users can register. Users can be invited by the bitwarden_rs admin even if this is disabled /// Allow new signups |> Controls whether new users can register. Users can be invited by the vaultwarden admin even if this is disabled
signups_allowed: bool, true, def, true; signups_allowed: bool, true, def, true;
/// Require email verification on signups. This will prevent logins from succeeding until the address has been verified /// Require email verification on signups. This will prevent logins from succeeding until the address has been verified
signups_verify: bool, true, def, false; signups_verify: bool, true, def, false;
@@ -367,7 +385,7 @@ make_config! {
admin_token: Pass, true, option; admin_token: Pass, true, option;
/// Invitation organization name |> Name shown in the invitation emails that don't come from a specific organization /// Invitation organization name |> Name shown in the invitation emails that don't come from a specific organization
invitation_org_name: String, true, def, "Bitwarden_RS".to_string(); invitation_org_name: String, true, def, "Vaultwarden".to_string();
}, },
/// Advanced settings /// Advanced settings
@@ -416,7 +434,7 @@ make_config! {
/// Log level /// Log level
log_level: String, false, def, "Info".to_string(); log_level: String, false, def, "Info".to_string();
/// Enable DB WAL |> Turning this off might lead to worse performance, but might help if using bitwarden_rs on some exotic filesystems, /// Enable DB WAL |> Turning this off might lead to worse performance, but might help if using vaultwarden on some exotic filesystems,
/// that do not support WAL. Please make sure you read project wiki on the topic before changing this setting. /// that do not support WAL. Please make sure you read project wiki on the topic before changing this setting.
enable_db_wal: bool, false, def, true; enable_db_wal: bool, false, def, true;
@@ -471,7 +489,7 @@ make_config! {
/// From Address /// From Address
smtp_from: String, true, def, String::new(); smtp_from: String, true, def, String::new();
/// From Name /// From Name
smtp_from_name: String, true, def, "Bitwarden_RS".to_string(); smtp_from_name: String, true, def, "Vaultwarden".to_string();
/// Username /// Username
smtp_username: String, true, option; smtp_username: String, true, option;
/// Password /// Password
@@ -509,10 +527,7 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
let limit = 256; let limit = 256;
if cfg.database_max_conns < 1 || cfg.database_max_conns > limit { if cfg.database_max_conns < 1 || cfg.database_max_conns > limit {
err!(format!( err!(format!("`DATABASE_MAX_CONNS` contains an invalid value. Ensure it is between 1 and {}.", limit,));
"`DATABASE_MAX_CONNS` contains an invalid value. Ensure it is between 1 and {}.",
limit,
));
} }
let dom = cfg.domain.to_lowercase(); let dom = cfg.domain.to_lowercase();
@@ -853,9 +868,7 @@ fn case_helper<'reg, 'rc>(
rc: &mut RenderContext<'reg, 'rc>, rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output, out: &mut dyn Output,
) -> HelperResult { ) -> HelperResult {
let param = h let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
.param(0)
.ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
let value = param.value().clone(); let value = param.value().clone();
if h.params().iter().skip(1).any(|x| x.value() == &value) { if h.params().iter().skip(1).any(|x| x.value() == &value) {
@@ -872,21 +885,15 @@ fn js_escape_helper<'reg, 'rc>(
_rc: &mut RenderContext<'reg, 'rc>, _rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output, out: &mut dyn Output,
) -> HelperResult { ) -> HelperResult {
let param = h let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;
.param(0)
.ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;
let no_quote = h let no_quote = h.param(1).is_some();
.param(1)
.is_some();
let value = param let value =
.value() param.value().as_str().ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;
.as_str()
.ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;
let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27"); let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
if ! no_quote { if !no_quote {
escaped_value = format!("&quot;{}&quot;", escaped_value); escaped_value = format!("&quot;{}&quot;", escaped_value);
} }


@@ -47,9 +47,7 @@ pub fn get_random_64() -> Vec<u8> {
pub fn get_random(mut array: Vec<u8>) -> Vec<u8> { pub fn get_random(mut array: Vec<u8>) -> Vec<u8> {
use ring::rand::{SecureRandom, SystemRandom}; use ring::rand::{SecureRandom, SystemRandom};
SystemRandom::new() SystemRandom::new().fill(&mut array).expect("Error generating random values");
.fill(&mut array)
.expect("Error generating random values");
array array
} }


@@ -1,6 +1,3 @@
use std::process::Command;
use chrono::prelude::*;
use diesel::r2d2::{ConnectionManager, Pool, PooledConnection}; use diesel::r2d2::{ConnectionManager, Pool, PooledConnection};
use rocket::{ use rocket::{
http::Status, http::Status,
@@ -25,7 +22,6 @@ pub mod __mysql_schema;
#[path = "schemas/postgresql/schema.rs"] #[path = "schemas/postgresql/schema.rs"]
pub mod __postgresql_schema; pub mod __postgresql_schema;
// This is used to generate the main DbConn and DbPool enums, which contain one variant for each database supported // This is used to generate the main DbConn and DbPool enums, which contain one variant for each database supported
macro_rules! generate_connections { macro_rules! generate_connections {
( $( $name:ident: $ty:ty ),+ ) => { ( $( $name:ident: $ty:ty ),+ ) => {
@@ -37,6 +33,7 @@ macro_rules! generate_connections {
pub enum DbConn { $( #[cfg($name)] $name(PooledConnection<ConnectionManager< $ty >>), )+ } pub enum DbConn { $( #[cfg($name)] $name(PooledConnection<ConnectionManager< $ty >>), )+ }
#[allow(non_camel_case_types)] #[allow(non_camel_case_types)]
#[derive(Clone)]
pub enum DbPool { $( #[cfg($name)] $name(Pool<ConnectionManager< $ty >>), )+ } pub enum DbPool { $( #[cfg($name)] $name(Pool<ConnectionManager< $ty >>), )+ }
impl DbPool { impl DbPool {
@@ -109,7 +106,6 @@ impl DbConnType {
} }
} }
#[macro_export] #[macro_export]
macro_rules! db_run { macro_rules! db_run {
// Same for all dbs // Same for all dbs
@@ -134,8 +130,26 @@ macro_rules! db_run {
)+)+ )+)+
} }
}; };
}
// Same for all dbs
( @raw $conn:ident: $body:block ) => {
db_run! { @raw $conn: sqlite, mysql, postgresql $body }
};
// Different code for each db
( @raw $conn:ident: $( $($db:ident),+ $body:block )+ ) => {
#[allow(unused)] use diesel::prelude::*;
#[allow(unused_variables)]
match $conn {
$($(
#[cfg($db)]
crate::db::DbConn::$db(ref $conn) => {
$body
},
)+)+
}
};
}
pub trait FromDb { pub trait FromDb {
type Output; type Output;
@@ -202,23 +216,36 @@ macro_rules! db_object {
// Reexport the models, needs to be after the macros are defined so it can access them // Reexport the models, needs to be after the macros are defined so it can access them
pub mod models; pub mod models;
/// Creates a back-up of the database using sqlite3 /// Creates a back-up of the sqlite database
pub fn backup_database() -> Result<(), Error> { /// MySQL/MariaDB and PostgreSQL are not supported.
pub fn backup_database(conn: &DbConn) -> Result<(), Error> {
db_run! {@raw conn:
postgresql, mysql {
err!("PostgreSQL and MySQL/MariaDB do not support this backup feature");
}
sqlite {
use std::path::Path; use std::path::Path;
let db_url = CONFIG.database_url(); let db_url = CONFIG.database_url();
let db_path = Path::new(&db_url).parent().unwrap(); let db_path = Path::new(&db_url).parent().unwrap().to_string_lossy();
let file_date = chrono::Utc::now().format("%Y%m%d_%H%M%S").to_string();
let now: DateTime<Utc> = Utc::now(); diesel::sql_query(format!("VACUUM INTO '{}/db_{}.sqlite3'", db_path, file_date)).execute(conn)?;
let file_date = now.format("%Y%m%d").to_string();
let backup_command: String = format!("{}{}{}", ".backup 'db_", file_date, ".sqlite3'");
Command::new("sqlite3")
.current_dir(db_path)
.args(&["db.sqlite3", &backup_command])
.output()
.expect("Can't open database, sqlite3 is not available, make sure it's installed and available on the PATH");
Ok(()) Ok(())
}
}
}
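With this change the SQLite backup runs in-process via VACUUM INTO instead of shelling out to the sqlite3 binary, and it writes a timestamped copy next to the live database. For a database at /data/db.sqlite3 (path illustrative) the generated statement would be, for example, VACUUM INTO '/data/db_20210429_160104.sqlite3'. Callers on MySQL/MariaDB or PostgreSQL now receive an explicit error, since this backup path only applies to SQLite.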
/// Get the SQL Server version
pub fn get_sql_server_version(conn: &DbConn) -> String {
db_run! {@raw conn:
postgresql, mysql {
no_arg_sql_function!(version, diesel::sql_types::Text);
diesel::select(version).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
}
sqlite {
no_arg_sql_function!(sqlite_version, diesel::sql_types::Text);
diesel::select(sqlite_version).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
}
}
} }
/// Attempts to retrieve a single connection from the managed database pool. If /// Attempts to retrieve a single connection from the managed database pool. If
@@ -259,8 +286,7 @@ mod sqlite_migrations {
use diesel::{Connection, RunQueryDsl}; use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations) // Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = let connection = diesel::sqlite::SqliteConnection::establish(&crate::CONFIG.database_url())?;
diesel::sqlite::SqliteConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration // Disable Foreign Key Checks during migration
// Scoped to a connection. // Scoped to a connection.
@@ -270,9 +296,7 @@ mod sqlite_migrations {
// Turn on WAL in SQLite // Turn on WAL in SQLite
if crate::CONFIG.enable_db_wal() { if crate::CONFIG.enable_db_wal() {
diesel::sql_query("PRAGMA journal_mode=wal") diesel::sql_query("PRAGMA journal_mode=wal").execute(&connection).expect("Failed to turn on WAL");
.execute(&connection)
.expect("Failed to turn on WAL");
} }
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?; embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
@@ -288,8 +312,7 @@ mod mysql_migrations {
pub fn run_migrations() -> Result<(), super::Error> { pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl}; use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations) // Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = let connection = diesel::mysql::MysqlConnection::establish(&crate::CONFIG.database_url())?;
diesel::mysql::MysqlConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration // Disable Foreign Key Checks during migration
// Scoped to a connection/session. // Scoped to a connection/session.
@@ -310,8 +333,7 @@ mod postgresql_migrations {
pub fn run_migrations() -> Result<(), super::Error> { pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl}; use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations) // Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = let connection = diesel::pg::PgConnection::establish(&crate::CONFIG.database_url())?;
diesel::pg::PgConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration // Disable Foreign Key Checks during migration
// FIXME: Per https://www.postgresql.org/docs/12/sql-set-constraints.html, // FIXME: Per https://www.postgresql.org/docs/12/sql-set-constraints.html,


@@ -4,7 +4,7 @@ use super::Cipher;
use crate::CONFIG; use crate::CONFIG;
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "attachments"] #[table_name = "attachments"]
#[changeset_options(treat_none_as_null="true")] #[changeset_options(treat_none_as_null="true")]
#[belongs_to(super::Cipher, foreign_key = "cipher_uuid")] #[belongs_to(super::Cipher, foreign_key = "cipher_uuid")]
@@ -59,7 +59,6 @@ use crate::error::MapResult;
/// Database methods /// Database methods
impl Attachment { impl Attachment {
pub fn save(&self, conn: &DbConn) -> EmptyResult { pub fn save(&self, conn: &DbConn) -> EmptyResult {
db_run! { conn: db_run! { conn:
sqlite, mysql { sqlite, mysql {


@@ -1,20 +1,15 @@
use chrono::{NaiveDateTime, Utc}; use chrono::{Duration, NaiveDateTime, Utc};
use serde_json::Value; use serde_json::Value;
use crate::CONFIG;
use super::{ use super::{
Attachment, Attachment, CollectionCipher, Favorite, FolderCipher, Organization, User, UserOrgStatus, UserOrgType,
CollectionCipher,
Favorite,
FolderCipher,
Organization,
User,
UserOrgStatus,
UserOrgType,
UserOrganization, UserOrganization,
}; };
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "ciphers"] #[table_name = "ciphers"]
#[changeset_options(treat_none_as_null="true")] #[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
@@ -91,20 +86,20 @@ impl Cipher {
}; };
let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null); let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let password_history_json = self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null); let password_history_json =
self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let (read_only, hide_passwords) = let (read_only, hide_passwords) = match self.get_access_restrictions(&user_uuid, conn) {
match self.get_access_restrictions(&user_uuid, conn) {
Some((ro, hp)) => (ro, hp), Some((ro, hp)) => (ro, hp),
None => { None => {
error!("Cipher ownership assertion failure"); error!("Cipher ownership assertion failure");
(true, true) (true, true)
}, }
}; };
// Get the type_data or a default to an empty json object '{}'. // Get the type_data or a default to an empty json object '{}'.
// If not passing an empty object, mobile clients will crash. // If not passing an empty object, mobile clients will crash.
let mut type_data_json: Value = serde_json::from_str(&self.data).unwrap_or(json!({})); let mut type_data_json: Value = serde_json::from_str(&self.data).unwrap_or_else(|_| json!({}));
// NOTE: This was marked as *Backwards Compatibilty Code*, but as of January 2021 this is still being used by upstream // NOTE: This was marked as *Backwards Compatibilty Code*, but as of January 2021 this is still being used by upstream
// Set the first element of the Uris array as Uri, this is needed several (mobile) clients. // Set the first element of the Uris array as Uri, this is needed several (mobile) clients.
@@ -130,7 +125,7 @@ impl Cipher {
// There are three types of cipher response models in upstream // There are three types of cipher response models in upstream
// Bitwarden: "cipherMini", "cipher", and "cipherDetails" (in order // Bitwarden: "cipherMini", "cipher", and "cipherDetails" (in order
// of increasing level of detail). bitwarden_rs currently only // of increasing level of detail). vaultwarden currently only
// supports the "cipherDetails" type, though it seems like the // supports the "cipherDetails" type, though it seems like the
// Bitwarden clients will ignore extra fields. // Bitwarden clients will ignore extra fields.
// //
@@ -195,9 +190,7 @@ impl Cipher {
None => { None => {
// Belongs to Organization, need to update affected users // Belongs to Organization, need to update affected users
if let Some(ref org_uuid) = self.organization_uuid { if let Some(ref org_uuid) = self.organization_uuid {
UserOrganization::find_by_cipher_and_org(&self.uuid, &org_uuid, conn) UserOrganization::find_by_cipher_and_org(&self.uuid, &org_uuid, conn).iter().for_each(|user_org| {
.iter()
.for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn); User::update_uuid_revision(&user_org.user_uuid, conn);
user_uuids.push(user_org.user_uuid.clone()) user_uuids.push(user_org.user_uuid.clone())
}); });
@@ -271,6 +264,17 @@ impl Cipher {
Ok(()) Ok(())
} }
/// Purge all ciphers that are old enough to be auto-deleted.
pub fn purge_trash(conn: &DbConn) {
if let Some(auto_delete_days) = CONFIG.trash_auto_delete_days() {
let now = Utc::now().naive_utc();
let dt = now - Duration::days(auto_delete_days);
for cipher in Self::find_deleted_before(&dt, conn) {
cipher.delete(&conn).ok();
}
}
}
pub fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult { pub fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn); User::update_uuid_revision(user_uuid, conn);
@@ -511,6 +515,15 @@ impl Cipher {
}} }}
} }
/// Find all ciphers that were deleted before the specified datetime.
pub fn find_deleted_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::deleted_at.lt(dt))
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
pub fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> { pub fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> {
db_run! {conn: { db_run! {conn: {
ciphers_collections::table ciphers_collections::table

View File

@@ -1,9 +1,9 @@
use serde_json::Value; use serde_json::Value;
use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization, User, Cipher}; use super::{Cipher, Organization, User, UserOrgStatus, UserOrgType, UserOrganization};
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "collections"] #[table_name = "collections"]
#[belongs_to(Organization, foreign_key = "org_uuid")] #[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)] #[primary_key(uuid)]
@@ -13,7 +13,7 @@ db_object! {
pub name: String, pub name: String,
} }
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)] #[derive(Identifiable, Queryable, Insertable, Associations)]
#[table_name = "users_collections"] #[table_name = "users_collections"]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")] #[belongs_to(Collection, foreign_key = "collection_uuid")]
@@ -25,7 +25,7 @@ db_object! {
pub hide_passwords: bool, pub hide_passwords: bool,
} }
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)] #[derive(Identifiable, Queryable, Insertable, Associations)]
#[table_name = "ciphers_collections"] #[table_name = "ciphers_collections"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")] #[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")] #[belongs_to(Collection, foreign_key = "collection_uuid")]
@@ -127,9 +127,7 @@ impl Collection {
} }
pub fn update_users_revision(&self, conn: &DbConn) { pub fn update_users_revision(&self, conn: &DbConn) {
UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn) UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).iter().for_each(|user_org| {
.iter()
.for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn); User::update_uuid_revision(&user_org.user_uuid, conn);
}); });
} }
@@ -170,10 +168,7 @@ impl Collection {
} }
pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> { pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid, conn) Self::find_by_user_uuid(user_uuid, conn).into_iter().filter(|c| c.org_uuid == org_uuid).collect()
.into_iter()
.filter(|c| c.org_uuid == org_uuid)
.collect()
} }
pub fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> { pub fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
@@ -284,7 +279,13 @@ impl CollectionUser {
}} }}
} }
pub fn save(user_uuid: &str, collection_uuid: &str, read_only: bool, hide_passwords: bool, conn: &DbConn) -> EmptyResult { pub fn save(
user_uuid: &str,
collection_uuid: &str,
read_only: bool,
hide_passwords: bool,
conn: &DbConn,
) -> EmptyResult {
User::update_uuid_revision(&user_uuid, conn); User::update_uuid_revision(&user_uuid, conn);
db_run! { conn: db_run! { conn:
@@ -374,9 +375,7 @@ impl CollectionUser {
} }
pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult { pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
CollectionUser::find_by_collection(&collection_uuid, conn) CollectionUser::find_by_collection(&collection_uuid, conn).iter().for_each(|collection| {
.iter()
.for_each(|collection| {
User::update_uuid_revision(&collection.user_uuid, conn); User::update_uuid_revision(&collection.user_uuid, conn);
}); });


@@ -4,7 +4,7 @@ use super::User;
use crate::CONFIG; use crate::CONFIG;
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "devices"] #[table_name = "devices"]
#[changeset_options(treat_none_as_null="true")] #[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
@@ -80,8 +80,8 @@ impl Device {
let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect(); let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client // Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJWTClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER}; use crate::auth::{encode_jwt, LoginJwtClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
let claims = LoginJWTClaims { let claims = LoginJwtClaims {
nbf: time_now.timestamp(), nbf: time_now.timestamp(),
exp: (time_now + *DEFAULT_VALIDITY).timestamp(), exp: (time_now + *DEFAULT_VALIDITY).timestamp(),
iss: JWT_LOGIN_ISSUER.to_string(), iss: JWT_LOGIN_ISSUER.to_string(),


@@ -1,7 +1,7 @@
use super::{Cipher, User}; use super::{Cipher, User};
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)] #[derive(Identifiable, Queryable, Insertable, Associations)]
#[table_name = "favorites"] #[table_name = "favorites"]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")] #[belongs_to(Cipher, foreign_key = "cipher_uuid")]
@@ -20,7 +20,7 @@ use crate::error::MapResult;
impl Favorite { impl Favorite {
// Returns whether the specified cipher is a favorite of the specified user. // Returns whether the specified cipher is a favorite of the specified user.
pub fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool { pub fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool {
db_run!{ conn: { db_run! { conn: {
let query = favorites::table let query = favorites::table
.filter(favorites::cipher_uuid.eq(cipher_uuid)) .filter(favorites::cipher_uuid.eq(cipher_uuid))
.filter(favorites::user_uuid.eq(user_uuid)) .filter(favorites::user_uuid.eq(user_uuid))
@@ -36,7 +36,7 @@ impl Favorite {
match (old, new) { match (old, new) {
(false, true) => { (false, true) => {
User::update_uuid_revision(user_uuid, &conn); User::update_uuid_revision(user_uuid, &conn);
db_run!{ conn: { db_run! { conn: {
diesel::insert_into(favorites::table) diesel::insert_into(favorites::table)
.values(( .values((
favorites::user_uuid.eq(user_uuid), favorites::user_uuid.eq(user_uuid),
@@ -48,7 +48,7 @@ impl Favorite {
} }
(true, false) => { (true, false) => {
User::update_uuid_revision(user_uuid, &conn); User::update_uuid_revision(user_uuid, &conn);
db_run!{ conn: { db_run! { conn: {
diesel::delete( diesel::delete(
favorites::table favorites::table
.filter(favorites::user_uuid.eq(user_uuid)) .filter(favorites::user_uuid.eq(user_uuid))
@@ -59,7 +59,7 @@ impl Favorite {
}} }}
} }
// Otherwise, the favorite status is already what it should be. // Otherwise, the favorite status is already what it should be.
_ => Ok(()) _ => Ok(()),
} }
} }


@@ -4,7 +4,7 @@ use serde_json::Value;
use super::{Cipher, User}; use super::{Cipher, User};
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "folders"] #[table_name = "folders"]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)] #[primary_key(uuid)]
@@ -16,7 +16,7 @@ db_object! {
pub name: String, pub name: String,
} }
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)] #[derive(Identifiable, Queryable, Insertable, Associations)]
#[table_name = "folders_ciphers"] #[table_name = "folders_ciphers"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")] #[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Folder, foreign_key = "folder_uuid")] #[belongs_to(Folder, foreign_key = "folder_uuid")]
@@ -109,7 +109,6 @@ impl Folder {
User::update_uuid_revision(&self.user_uuid, conn); User::update_uuid_revision(&self.user_uuid, conn);
FolderCipher::delete_all_by_folder(&self.uuid, &conn)?; FolderCipher::delete_all_by_folder(&self.uuid, &conn)?;
db_run! { conn: { db_run! { conn: {
diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid))) diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid)))
.execute(conn) .execute(conn)


@@ -6,6 +6,7 @@ mod favorite;
mod folder; mod folder;
mod org_policy; mod org_policy;
mod organization; mod organization;
mod send;
mod two_factor; mod two_factor;
mod user; mod user;
@@ -17,5 +18,6 @@ pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher}; pub use self::folder::{Folder, FolderCipher};
pub use self::org_policy::{OrgPolicy, OrgPolicyType}; pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization}; pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::send::{Send, SendType};
pub use self::two_factor::{TwoFactor, TwoFactorType}; pub use self::two_factor::{TwoFactor, TwoFactorType};
pub use self::user::{Invitation, User, UserStampException}; pub use self::user::{Invitation, User, UserStampException};


@@ -4,10 +4,10 @@ use crate::api::EmptyResult;
use crate::db::DbConn; use crate::db::DbConn;
use crate::error::MapResult; use crate::error::MapResult;
use super::{Organization, UserOrgStatus}; use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "org_policies"] #[table_name = "org_policies"]
#[belongs_to(Organization, foreign_key = "org_uuid")] #[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)] #[primary_key(uuid)]
@@ -20,8 +20,7 @@ db_object! {
} }
} }
#[allow(dead_code)] #[derive(Copy, Clone, num_derive::FromPrimitive)]
#[derive(num_derive::FromPrimitive)]
pub enum OrgPolicyType { pub enum OrgPolicyType {
TwoFactorAuthentication = 0, TwoFactorAuthentication = 0,
MasterPassword = 1, MasterPassword = 1,
@@ -29,6 +28,7 @@ pub enum OrgPolicyType {
// SingleOrg = 3, // Not currently supported. // SingleOrg = 3, // Not currently supported.
// RequireSso = 4, // Not currently supported. // RequireSso = 4, // Not currently supported.
PersonalOwnership = 5, PersonalOwnership = 5,
DisableSend = 6,
} }
/// Local methods /// Local methods
@@ -170,6 +170,24 @@ impl OrgPolicy {
}} }}
} }
/// Returns true if the user belongs to an org that has enabled the specified policy type,
/// and the user is not an owner or admin of that org. This is only useful for checking
/// applicability of policy types that have these particular semantics.
pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
// Returns confirmed users only.
for policy in OrgPolicy::find_by_user(user_uuid, conn) {
if policy.enabled && policy.has_type(policy_type) {
let org_uuid = &policy.org_uuid;
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
if user.atype < UserOrgType::Admin {
return true;
}
}
}
}
false
}
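For context, a minimal sketch (not part of this diff) of how a request handler could consume this helper to enforce the new DisableSend policy; the function name and error message below are illustrative assumptions:
    // Illustrative only: reject Send creation/edits when a DisableSend org policy applies to the user.
    fn enforce_disable_send_policy(user_uuid: &str, conn: &DbConn) -> EmptyResult {
        if OrgPolicy::is_applicable_to_user(user_uuid, OrgPolicyType::DisableSend, conn) {
            err!("Due to an Enterprise Policy, you are restricted from creating or editing Sends.")
        }
        Ok(())
    }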
/*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult { /*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: { db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid))) diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))

View File

@@ -1,11 +1,11 @@
use num_traits::FromPrimitive;
use serde_json::Value; use serde_json::Value;
use std::cmp::Ordering; use std::cmp::Ordering;
use num_traits::FromPrimitive;
use super::{CollectionUser, User, OrgPolicy}; use super::{CollectionUser, OrgPolicy, User};
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "organizations"] #[table_name = "organizations"]
#[primary_key(uuid)] #[primary_key(uuid)]
pub struct Organization { pub struct Organization {
@@ -14,7 +14,7 @@ db_object! {
pub billing_email: String, pub billing_email: String,
} }
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users_organizations"] #[table_name = "users_organizations"]
#[primary_key(uuid)] #[primary_key(uuid)]
pub struct UserOrganization { pub struct UserOrganization {
@@ -35,8 +35,7 @@ pub enum UserOrgStatus {
Confirmed = 2, Confirmed = 2,
} }
#[derive(Copy, Clone, PartialEq, Eq)] #[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
#[derive(num_derive::FromPrimitive)]
pub enum UserOrgType { pub enum UserOrgType {
Owner = 0, Owner = 0,
Admin = 1, Admin = 1,
@@ -90,17 +89,11 @@ impl PartialOrd<i32> for UserOrgType {
} }
fn gt(&self, other: &i32) -> bool { fn gt(&self, other: &i32) -> bool {
match self.partial_cmp(other) { matches!(self.partial_cmp(other), Some(Ordering::Greater))
Some(Ordering::Less) | Some(Ordering::Equal) => false,
_ => true,
}
} }
fn ge(&self, other: &i32) -> bool { fn ge(&self, other: &i32) -> bool {
match self.partial_cmp(other) { matches!(self.partial_cmp(other), Some(Ordering::Greater) | Some(Ordering::Equal))
Some(Ordering::Less) => false,
_ => true,
}
} }
} }
@@ -119,17 +112,11 @@ impl PartialOrd<UserOrgType> for i32 {
} }
fn lt(&self, other: &UserOrgType) -> bool { fn lt(&self, other: &UserOrgType) -> bool {
match self.partial_cmp(other) { matches!(self.partial_cmp(other), Some(Ordering::Less) | None)
Some(Ordering::Less) | None => true,
_ => false,
}
} }
fn le(&self, other: &UserOrgType) -> bool { fn le(&self, other: &UserOrgType) -> bool {
match self.partial_cmp(other) { matches!(self.partial_cmp(other), Some(Ordering::Less) | Some(Ordering::Equal) | None)
Some(Ordering::Less) | Some(Ordering::Equal) | None => true,
_ => false,
}
} }
} }
@@ -202,9 +189,7 @@ use crate::error::MapResult;
/// Database methods /// Database methods
impl Organization { impl Organization {
pub fn save(&self, conn: &DbConn) -> EmptyResult { pub fn save(&self, conn: &DbConn) -> EmptyResult {
UserOrganization::find_by_org(&self.uuid, conn) UserOrganization::find_by_org(&self.uuid, conn).iter().for_each(|user_org| {
.iter()
.for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn); User::update_uuid_revision(&user_org.user_uuid, conn);
}); });
@@ -248,7 +233,6 @@ impl Organization {
UserOrganization::delete_all_by_organization(&self.uuid, &conn)?; UserOrganization::delete_all_by_organization(&self.uuid, &conn)?;
OrgPolicy::delete_all_by_organization(&self.uuid, &conn)?; OrgPolicy::delete_all_by_organization(&self.uuid, &conn)?;
db_run! { conn: { db_run! { conn: {
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid))) diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
.execute(conn) .execute(conn)
@@ -359,11 +343,13 @@ impl UserOrganization {
let collections = CollectionUser::find_by_organization_and_user_uuid(&self.org_uuid, &self.user_uuid, conn); let collections = CollectionUser::find_by_organization_and_user_uuid(&self.org_uuid, &self.user_uuid, conn);
collections collections
.iter() .iter()
.map(|c| json!({ .map(|c| {
json!({
"Id": c.collection_uuid, "Id": c.collection_uuid,
"ReadOnly": c.read_only, "ReadOnly": c.read_only,
"HidePasswords": c.hide_passwords, "HidePasswords": c.hide_passwords,
})) })
})
.collect() .collect()
}; };
@@ -458,8 +444,7 @@ impl UserOrganization {
} }
pub fn has_full_access(&self) -> bool { pub fn has_full_access(&self) -> bool {
(self.access_all || self.atype >= UserOrgType::Admin) && (self.access_all || self.atype >= UserOrgType::Admin) && self.has_status(UserOrgStatus::Confirmed)
self.has_status(UserOrgStatus::Confirmed)
} }
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> { pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {

284
src/db/models/send.rs (new file, 284 lines)
View File

@@ -0,0 +1,284 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::{Organization, User};
db_object! {
#[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "sends"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
pub struct Send {
pub uuid: String,
pub user_uuid: Option<String>,
pub organization_uuid: Option<String>,
pub name: String,
pub notes: Option<String>,
pub atype: i32,
pub data: String,
pub akey: String,
pub password_hash: Option<Vec<u8>>,
password_salt: Option<Vec<u8>>,
password_iter: Option<i32>,
pub max_access_count: Option<i32>,
pub access_count: i32,
pub creation_date: NaiveDateTime,
pub revision_date: NaiveDateTime,
pub expiration_date: Option<NaiveDateTime>,
pub deletion_date: NaiveDateTime,
pub disabled: bool,
}
}
#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
pub enum SendType {
Text = 0,
File = 1,
}
impl Send {
pub fn new(atype: i32, name: String, data: String, akey: String, deletion_date: NaiveDateTime) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
user_uuid: None,
organization_uuid: None,
name,
notes: None,
atype,
data,
akey,
password_hash: None,
password_salt: None,
password_iter: None,
max_access_count: None,
access_count: 0,
creation_date: now,
revision_date: now,
expiration_date: None,
deletion_date,
disabled: false,
}
}
pub fn set_password(&mut self, password: Option<&str>) {
const PASSWORD_ITER: i32 = 100_000;
if let Some(password) = password {
self.password_iter = Some(PASSWORD_ITER);
let salt = crate::crypto::get_random_64();
let hash = crate::crypto::hash_password(password.as_bytes(), &salt, PASSWORD_ITER as u32);
self.password_salt = Some(salt);
self.password_hash = Some(hash);
} else {
self.password_iter = None;
self.password_salt = None;
self.password_hash = None;
}
}
pub fn check_password(&self, password: &str) -> bool {
match (&self.password_hash, &self.password_salt, self.password_iter) {
(Some(hash), Some(salt), Some(iter)) => {
crate::crypto::verify_password_hash(password.as_bytes(), salt, hash, iter as u32)
}
_ => false,
}
}
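A hedged usage sketch (assumes the surrounding crate context; not part of this diff) showing how the three password fields interact:
    // Illustrative: protect a send with a password, then verify it.
    let deletion_date = Utc::now().naive_utc() + chrono::Duration::days(7);
    let mut send = Send::new(SendType::Text as i32, "note".into(), r#"{"Text":"hi"}"#.into(), "<encrypted key>".into(), deletion_date);
    send.set_password(Some("hunter2"));      // stores a random salt, a 100_000-iteration hash, and the iteration count
    assert!(send.check_password("hunter2")); // re-hashes with the stored salt/iterations and compares
    assert!(!send.check_password("wrong"));
    send.set_password(None);                 // clears all three fields; check_password now always returns false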
pub fn to_json(&self) -> Value {
use crate::util::format_date;
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
json!({
"Id": self.uuid,
"AccessId": BASE64URL_NOPAD.encode(Uuid::parse_str(&self.uuid).unwrap_or_default().as_bytes()),
"Type": self.atype,
"Name": self.name,
"Notes": self.notes,
"Text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"Key": self.akey,
"MaxAccessCount": self.max_access_count,
"AccessCount": self.access_count,
"Password": self.password_hash.as_deref().map(|h| BASE64URL_NOPAD.encode(h)),
"Disabled": self.disabled,
"RevisionDate": format_date(&self.revision_date),
"ExpirationDate": self.expiration_date.as_ref().map(format_date),
"DeletionDate": format_date(&self.deletion_date),
"Object": "send",
})
}
pub fn to_json_access(&self) -> Value {
use crate::util::format_date;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
json!({
"Id": self.uuid,
"Type": self.atype,
"Name": self.name,
"Text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"ExpirationDate": self.expiration_date.as_ref().map(format_date),
"Object": "send-access",
})
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
impl Send {
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
self.revision_date = Utc::now().naive_utc();
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(sends::table)
.values(SendDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(sends::table)
.filter(sends::uuid.eq(&self.uuid))
.set(SendDb::to_db(self))
.execute(conn)
.map_res("Error saving send")
}
Err(e) => Err(e.into()),
}.map_res("Error saving send")
}
postgresql {
let value = SendDb::to_db(self);
diesel::insert_into(sends::table)
.values(&value)
.on_conflict(sends::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving send")
}
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
if self.atype == SendType::File as i32 {
std::fs::remove_dir_all(std::path::Path::new(&crate::CONFIG.sends_folder()).join(&self.uuid)).ok();
}
db_run! { conn: {
diesel::delete(sends::table.filter(sends::uuid.eq(&self.uuid)))
.execute(conn)
.map_res("Error deleting send")
}}
}
/// Purge all sends that are past their deletion date.
pub fn purge(conn: &DbConn) {
for send in Self::find_by_past_deletion_date(&conn) {
send.delete(&conn).ok();
}
}
pub fn update_users_revision(&self, conn: &DbConn) {
match &self.user_uuid {
Some(user_uuid) => {
User::update_uuid_revision(&user_uuid, conn);
}
None => {
// Belongs to Organization, not implemented
}
}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
for send in Self::find_by_user(user_uuid, &conn) {
send.delete(&conn)?;
}
Ok(())
}
pub fn find_by_access_id(access_id: &str, conn: &DbConn) -> Option<Self> {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
let uuid_vec = match BASE64URL_NOPAD.decode(access_id.as_bytes()) {
Ok(v) => v,
Err(_) => return None,
};
let uuid = match Uuid::from_slice(&uuid_vec) {
Ok(u) => u.to_string(),
Err(_) => return None,
};
Self::find_by_uuid(&uuid, conn)
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
.first::<SendDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::user_uuid.eq(user_uuid))
.load::<SendDb>(conn).expect("Error loading sends").from_db()
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::organization_uuid.eq(org_uuid))
.load::<SendDb>(conn).expect("Error loading sends").from_db()
}}
}
pub fn find_by_past_deletion_date(conn: &DbConn) -> Vec<Self> {
let now = Utc::now().naive_utc();
db_run! {conn: {
sends::table
.filter(sends::deletion_date.lt(now))
.load::<SendDb>(conn).expect("Error loading sends").from_db()
}}
}
}
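As a side note, the "AccessId" published by to_json and the lookup in find_by_access_id are inverses of each other. A small standalone sketch of that round trip (an assumption-laden illustration using the same uuid and data_encoding crates, not code from this diff):
    // The public access id is the unpadded base64url encoding of the send's UUID bytes.
    use data_encoding::BASE64URL_NOPAD;
    use uuid::Uuid;

    fn access_id_round_trip() {
        let send_uuid = Uuid::new_v4();
        let access_id = BASE64URL_NOPAD.encode(send_uuid.as_bytes());     // what to_json exposes as "AccessId"
        let decoded = BASE64URL_NOPAD.decode(access_id.as_bytes()).unwrap();
        let recovered = Uuid::from_slice(&decoded).unwrap();              // what find_by_access_id reverses
        assert_eq!(send_uuid, recovered);
    }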

View File

@@ -7,7 +7,7 @@ use crate::error::MapResult;
use super::User; use super::User;
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "twofactor"] #[table_name = "twofactor"]
#[belongs_to(User, foreign_key = "user_uuid")] #[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)] #[primary_key(uuid)]

View File

@@ -5,7 +5,7 @@ use crate::crypto;
use crate::CONFIG; use crate::CONFIG;
db_object! { db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users"] #[table_name = "users"]
#[changeset_options(treat_none_as_null="true")] #[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)] #[primary_key(uuid)]
@@ -47,7 +47,7 @@ db_object! {
} }
#[derive(Debug, Identifiable, Queryable, Insertable)] #[derive(Identifiable, Queryable, Insertable)]
#[table_name = "invitations"] #[table_name = "invitations"]
#[primary_key(email)] #[primary_key(email)]
pub struct Invitation { pub struct Invitation {
@@ -64,7 +64,7 @@ enum UserStatus {
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
pub struct UserStampException { pub struct UserStampException {
pub route: String, pub route: String,
pub security_stamp: String pub security_stamp: String,
} }
/// Local methods /// Local methods
@@ -162,7 +162,7 @@ impl User {
pub fn set_stamp_exception(&mut self, route_exception: &str) { pub fn set_stamp_exception(&mut self, route_exception: &str) {
let stamp_exception = UserStampException { let stamp_exception = UserStampException {
route: route_exception.to_string(), route: route_exception.to_string(),
security_stamp: self.security_stamp.to_string() security_stamp: self.security_stamp.to_string(),
}; };
self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default()); self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
} }
@@ -177,7 +177,7 @@ impl User {
} }
} }
use super::{Cipher, Device, Favorite, Folder, TwoFactor, UserOrgType, UserOrganization}; use super::{Cipher, Device, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
use crate::db::DbConn; use crate::db::DbConn;
use crate::api::EmptyResult; use crate::api::EmptyResult;
@@ -263,6 +263,7 @@ impl User {
} }
} }
Send::delete_all_by_user(&self.uuid, conn)?;
UserOrganization::delete_all_by_user(&self.uuid, conn)?; UserOrganization::delete_all_by_user(&self.uuid, conn)?;
Cipher::delete_all_by_user(&self.uuid, conn)?; Cipher::delete_all_by_user(&self.uuid, conn)?;
Favorite::delete_all_by_user(&self.uuid, conn)?; Favorite::delete_all_by_user(&self.uuid, conn)?;
@@ -340,14 +341,16 @@ impl User {
pub fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> { pub fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
match Device::find_latest_active_by_user(&self.uuid, conn) { match Device::find_latest_active_by_user(&self.uuid, conn) {
Some(device) => Some(device.updated_at), Some(device) => Some(device.updated_at),
None => None None => None,
} }
} }
} }
impl Invitation { impl Invitation {
pub const fn new(email: String) -> Self { pub const fn new(email: String) -> Self {
Self { email } Self {
email,
}
} }
pub fn save(&self, conn: &DbConn) -> EmptyResult { pub fn save(&self, conn: &DbConn) -> EmptyResult {

View File

@@ -102,6 +102,29 @@ table! {
} }
} }
table! {
sends (uuid) {
uuid -> Text,
user_uuid -> Nullable<Text>,
organization_uuid -> Nullable<Text>,
name -> Text,
notes -> Nullable<Text>,
atype -> Integer,
data -> Text,
akey -> Text,
password_hash -> Nullable<Binary>,
password_salt -> Nullable<Binary>,
password_iter -> Nullable<Integer>,
max_access_count -> Nullable<Integer>,
access_count -> Integer,
creation_date -> Datetime,
revision_date -> Datetime,
expiration_date -> Nullable<Datetime>,
deletion_date -> Datetime,
disabled -> Bool,
}
}
table! { table! {
twofactor (uuid) { twofactor (uuid) {
uuid -> Text, uuid -> Text,
@@ -176,6 +199,8 @@ joinable!(folders -> users (user_uuid));
joinable!(folders_ciphers -> ciphers (cipher_uuid)); joinable!(folders_ciphers -> ciphers (cipher_uuid));
joinable!(folders_ciphers -> folders (folder_uuid)); joinable!(folders_ciphers -> folders (folder_uuid));
joinable!(org_policies -> organizations (org_uuid)); joinable!(org_policies -> organizations (org_uuid));
joinable!(sends -> organizations (organization_uuid));
joinable!(sends -> users (user_uuid));
joinable!(twofactor -> users (user_uuid)); joinable!(twofactor -> users (user_uuid));
joinable!(users_collections -> collections (collection_uuid)); joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid)); joinable!(users_collections -> users (user_uuid));
@@ -193,6 +218,7 @@ allow_tables_to_appear_in_same_query!(
invitations, invitations,
org_policies, org_policies,
organizations, organizations,
sends,
twofactor, twofactor,
users, users,
users_collections, users_collections,

View File

@@ -102,6 +102,29 @@ table! {
} }
} }
table! {
sends (uuid) {
uuid -> Text,
user_uuid -> Nullable<Text>,
organization_uuid -> Nullable<Text>,
name -> Text,
notes -> Nullable<Text>,
atype -> Integer,
data -> Text,
akey -> Text,
password_hash -> Nullable<Binary>,
password_salt -> Nullable<Binary>,
password_iter -> Nullable<Integer>,
max_access_count -> Nullable<Integer>,
access_count -> Integer,
creation_date -> Timestamp,
revision_date -> Timestamp,
expiration_date -> Nullable<Timestamp>,
deletion_date -> Timestamp,
disabled -> Bool,
}
}
table! { table! {
twofactor (uuid) { twofactor (uuid) {
uuid -> Text, uuid -> Text,
@@ -176,6 +199,8 @@ joinable!(folders -> users (user_uuid));
joinable!(folders_ciphers -> ciphers (cipher_uuid)); joinable!(folders_ciphers -> ciphers (cipher_uuid));
joinable!(folders_ciphers -> folders (folder_uuid)); joinable!(folders_ciphers -> folders (folder_uuid));
joinable!(org_policies -> organizations (org_uuid)); joinable!(org_policies -> organizations (org_uuid));
joinable!(sends -> organizations (organization_uuid));
joinable!(sends -> users (user_uuid));
joinable!(twofactor -> users (user_uuid)); joinable!(twofactor -> users (user_uuid));
joinable!(users_collections -> collections (collection_uuid)); joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid)); joinable!(users_collections -> users (user_uuid));
@@ -193,6 +218,7 @@ allow_tables_to_appear_in_same_query!(
invitations, invitations,
org_policies, org_policies,
organizations, organizations,
sends,
twofactor, twofactor,
users, users,
users_collections, users_collections,

View File

@@ -102,6 +102,29 @@ table! {
} }
} }
table! {
sends (uuid) {
uuid -> Text,
user_uuid -> Nullable<Text>,
organization_uuid -> Nullable<Text>,
name -> Text,
notes -> Nullable<Text>,
atype -> Integer,
data -> Text,
akey -> Text,
password_hash -> Nullable<Binary>,
password_salt -> Nullable<Binary>,
password_iter -> Nullable<Integer>,
max_access_count -> Nullable<Integer>,
access_count -> Integer,
creation_date -> Timestamp,
revision_date -> Timestamp,
expiration_date -> Nullable<Timestamp>,
deletion_date -> Timestamp,
disabled -> Bool,
}
}
table! { table! {
twofactor (uuid) { twofactor (uuid) {
uuid -> Text, uuid -> Text,
@@ -176,6 +199,8 @@ joinable!(folders -> users (user_uuid));
joinable!(folders_ciphers -> ciphers (cipher_uuid)); joinable!(folders_ciphers -> ciphers (cipher_uuid));
joinable!(folders_ciphers -> folders (folder_uuid)); joinable!(folders_ciphers -> folders (folder_uuid));
joinable!(org_policies -> organizations (org_uuid)); joinable!(org_policies -> organizations (org_uuid));
joinable!(sends -> organizations (organization_uuid));
joinable!(sends -> users (user_uuid));
joinable!(twofactor -> users (user_uuid)); joinable!(twofactor -> users (user_uuid));
joinable!(users_collections -> collections (collection_uuid)); joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid)); joinable!(users_collections -> users (user_uuid));
@@ -193,6 +218,7 @@ allow_tables_to_appear_in_same_query!(
invitations, invitations,
org_policies, org_policies,
organizations, organizations,
sends,
twofactor, twofactor,
users, users,
users_collections, users_collections,

View File

@@ -33,16 +33,16 @@ macro_rules! make_error {
}; };
} }
use diesel::r2d2::PoolError as R2d2Err;
use diesel::result::Error as DieselErr; use diesel::result::Error as DieselErr;
use diesel::ConnectionError as DieselConErr; use diesel::ConnectionError as DieselConErr;
use diesel_migrations::RunMigrationsError as DieselMigErr; use diesel_migrations::RunMigrationsError as DieselMigErr;
use diesel::r2d2::PoolError as R2d2Err;
use handlebars::RenderError as HbErr; use handlebars::RenderError as HbErr;
use jsonwebtoken::errors::Error as JWTErr; use jsonwebtoken::errors::Error as JwtErr;
use regex::Error as RegexErr; use regex::Error as RegexErr;
use reqwest::Error as ReqErr; use reqwest::Error as ReqErr;
use serde_json::{Error as SerdeErr, Value}; use serde_json::{Error as SerdeErr, Value};
use std::io::Error as IOErr; use std::io::Error as IoErr;
use std::time::SystemTimeError as TimeErr; use std::time::SystemTimeError as TimeErr;
use u2f::u2ferror::U2fError as U2fErr; use u2f::u2ferror::U2fError as U2fErr;
@@ -72,10 +72,10 @@ make_error! {
R2d2Error(R2d2Err): _has_source, _api_error, R2d2Error(R2d2Err): _has_source, _api_error,
U2fError(U2fErr): _has_source, _api_error, U2fError(U2fErr): _has_source, _api_error,
SerdeError(SerdeErr): _has_source, _api_error, SerdeError(SerdeErr): _has_source, _api_error,
JWTError(JWTErr): _has_source, _api_error, JWtError(JwtErr): _has_source, _api_error,
TemplError(HbErr): _has_source, _api_error, TemplError(HbErr): _has_source, _api_error,
//WsError(ws::Error): _has_source, _api_error, //WsError(ws::Error): _has_source, _api_error,
IOError(IOErr): _has_source, _api_error, IoError(IoErr): _has_source, _api_error,
TimeError(TimeErr): _has_source, _api_error, TimeError(TimeErr): _has_source, _api_error,
ReqError(ReqErr): _has_source, _api_error, ReqError(ReqErr): _has_source, _api_error,
RegexError(RegexErr): _has_source, _api_error, RegexError(RegexErr): _has_source, _api_error,
@@ -152,6 +152,7 @@ impl<S> MapResult<S> for Option<S> {
} }
} }
#[allow(clippy::unnecessary_wraps)]
const fn _has_source<T>(e: T) -> Option<T> { const fn _has_source<T>(e: T) -> Option<T> {
Some(e) Some(e)
} }
@@ -197,11 +198,7 @@ impl<'r> Responder<'r> for Error {
let code = Status::from_code(self.error_code).unwrap_or(Status::BadRequest); let code = Status::from_code(self.error_code).unwrap_or(Status::BadRequest);
Response::build() Response::build().status(code).header(ContentType::JSON).sized_body(Cursor::new(format!("{}", self))).ok()
.status(code)
.header(ContentType::JSON)
.sized_body(Cursor::new(format!("{}", self)))
.ok()
} }
} }
@@ -220,6 +217,18 @@ macro_rules! err {
}}; }};
} }
#[macro_export]
macro_rules! err_code {
($msg:expr, $err_code: literal) => {{
error!("{}", $msg);
return Err(crate::error::Error::new($msg, $msg).with_code($err_code));
}};
($usr_msg:expr, $log_value:expr, $err_code: literal) => {{
error!("{}. {}", $usr_msg, $log_value);
return Err(crate::error::Error::new($usr_msg, $log_value).with_code($err_code));
}};
}
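A hedged example of the new macro in use (the message and status code below are made up, not taken from this diff):
    // Illustrative: return a specific HTTP status code from a handler that yields EmptyResult.
    fn demo(found: bool) -> EmptyResult {
        if !found {
            err_code!("Record not found", 404); // logs the message and returns Error::new(..).with_code(404)
        }
        Ok(())
    }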
#[macro_export] #[macro_export]
macro_rules! err_discard { macro_rules! err_discard {
($msg:expr, $data:expr) => {{ ($msg:expr, $data:expr) => {{

View File

@@ -1,4 +1,4 @@
use std::{str::FromStr}; use std::str::FromStr;
use chrono::{DateTime, Local}; use chrono::{DateTime, Local};
use percent_encoding::{percent_encode, NON_ALPHANUMERIC}; use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
@@ -30,10 +30,10 @@ fn mailer() -> SmtpTransport {
let smtp_client = if CONFIG.smtp_ssl() { let smtp_client = if CONFIG.smtp_ssl() {
let mut tls_parameters = TlsParameters::builder(host); let mut tls_parameters = TlsParameters::builder(host);
if CONFIG.smtp_accept_invalid_hostnames() { if CONFIG.smtp_accept_invalid_hostnames() {
tls_parameters.dangerous_accept_invalid_hostnames(true); tls_parameters = tls_parameters.dangerous_accept_invalid_hostnames(true);
} }
if CONFIG.smtp_accept_invalid_certs() { if CONFIG.smtp_accept_invalid_certs() {
tls_parameters.dangerous_accept_invalid_certs(true); tls_parameters = tls_parameters.dangerous_accept_invalid_certs(true);
} }
let tls_parameters = tls_parameters.build().unwrap(); let tls_parameters = tls_parameters.build().unwrap();
@@ -58,15 +58,17 @@ fn mailer() -> SmtpTransport {
let smtp_client = match CONFIG.smtp_auth_mechanism() { let smtp_client = match CONFIG.smtp_auth_mechanism() {
Some(mechanism) => { Some(mechanism) => {
let allowed_mechanisms = vec![SmtpAuthMechanism::Plain, SmtpAuthMechanism::Login, SmtpAuthMechanism::Xoauth2]; let allowed_mechanisms = [SmtpAuthMechanism::Plain, SmtpAuthMechanism::Login, SmtpAuthMechanism::Xoauth2];
let mut selected_mechanisms = vec![]; let mut selected_mechanisms = vec![];
for wanted_mechanism in mechanism.split(',') { for wanted_mechanism in mechanism.split(',') {
for m in &allowed_mechanisms { for m in &allowed_mechanisms {
if m.to_string().to_lowercase() == wanted_mechanism.trim_matches(|c| c == '"' || c == '\'' || c == ' ').to_lowercase() { if m.to_string().to_lowercase()
== wanted_mechanism.trim_matches(|c| c == '"' || c == '\'' || c == ' ').to_lowercase()
{
selected_mechanisms.push(*m); selected_mechanisms.push(*m);
} }
} }
}; }
if !selected_mechanisms.is_empty() { if !selected_mechanisms.is_empty() {
smtp_client.authentication(selected_mechanisms) smtp_client.authentication(selected_mechanisms)
@@ -115,7 +117,7 @@ pub fn send_password_hint(address: &str, hint: Option<String>) -> EmptyResult {
let (subject, body_html, body_text) = get_text(template_name, json!({ "hint": hint, "url": CONFIG.domain() }))?; let (subject, body_html, body_text) = get_text(template_name, json!({ "hint": hint, "url": CONFIG.domain() }))?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_delete_account(address: &str, uuid: &str) -> EmptyResult { pub fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
@@ -132,7 +134,7 @@ pub fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_verify_email(address: &str, uuid: &str) -> EmptyResult { pub fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
@@ -149,7 +151,7 @@ pub fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_welcome(address: &str) -> EmptyResult { pub fn send_welcome(address: &str) -> EmptyResult {
@@ -160,7 +162,7 @@ pub fn send_welcome(address: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult { pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
@@ -176,7 +178,7 @@ pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_invite( pub fn send_invite(
@@ -200,15 +202,15 @@ pub fn send_invite(
"email/send_org_invite", "email/send_org_invite",
json!({ json!({
"url": CONFIG.domain(), "url": CONFIG.domain(),
"org_id": org_id.unwrap_or_else(|| "_".to_string()), "org_id": org_id.as_deref().unwrap_or("_"),
"org_user_id": org_user_id.unwrap_or_else(|| "_".to_string()), "org_user_id": org_user_id.as_deref().unwrap_or("_"),
"email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(), "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
"org_name": org_name, "org_name": org_name,
"token": invite_token, "token": invite_token,
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult { pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
@@ -221,7 +223,7 @@ pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str)
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult { pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
@@ -233,7 +235,7 @@ pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>, device: &str) -> EmptyResult { pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>, device: &str) -> EmptyResult {
@@ -251,7 +253,7 @@ pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>,
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_token(address: &str, token: &str) -> EmptyResult { pub fn send_token(address: &str, token: &str) -> EmptyResult {
@@ -263,7 +265,7 @@ pub fn send_token(address: &str, token: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_change_email(address: &str, token: &str) -> EmptyResult { pub fn send_change_email(address: &str, token: &str) -> EmptyResult {
@@ -275,7 +277,7 @@ pub fn send_change_email(address: &str, token: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
pub fn send_test(address: &str) -> EmptyResult { pub fn send_test(address: &str) -> EmptyResult {
@@ -286,10 +288,10 @@ pub fn send_test(address: &str) -> EmptyResult {
}), }),
)?; )?;
send_email(address, &subject, &body_html, &body_text) send_email(address, &subject, body_html, body_text)
} }
fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) -> EmptyResult { fn send_email(address: &str, subject: &str, body_html: String, body_text: String) -> EmptyResult {
let address_split: Vec<&str> = address.rsplitn(2, '@').collect(); let address_split: Vec<&str> = address.rsplitn(2, '@').collect();
if address_split.len() != 2 { if address_split.len() != 2 {
err!("Invalid email address (no @)"); err!("Invalid email address (no @)");
@@ -306,47 +308,37 @@ fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) ->
// We force Base64 encoding because in the past we had issues with different encodings. // We force Base64 encoding because in the past we had issues with different encodings.
.header(header::ContentTransferEncoding::Base64) .header(header::ContentTransferEncoding::Base64)
.header(header::ContentType("text/html; charset=utf-8".parse()?)) .header(header::ContentType("text/html; charset=utf-8".parse()?))
.body(String::from(body_html)); .body(body_html);
let text = SinglePart::builder() let text = SinglePart::builder()
// We force Base64 encoding because in the past we had issues with different encodings. // We force Base64 encoding because in the past we had issues with different encodings.
.header(header::ContentTransferEncoding::Base64) .header(header::ContentTransferEncoding::Base64)
.header(header::ContentType("text/plain; charset=utf-8".parse()?)) .header(header::ContentType("text/plain; charset=utf-8".parse()?))
.body(String::from(body_text)); .body(body_text);
let smtp_from = &CONFIG.smtp_from(); let smtp_from = &CONFIG.smtp_from();
let email = Message::builder() let email = Message::builder()
.message_id(Some(format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1] ))) .message_id(Some(format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1])))
.to(Mailbox::new(None, Address::from_str(&address)?)) .to(Mailbox::new(None, Address::from_str(&address)?))
.from(Mailbox::new( .from(Mailbox::new(Some(CONFIG.smtp_from_name()), Address::from_str(smtp_from)?))
Some(CONFIG.smtp_from_name()),
Address::from_str(smtp_from)?,
))
.subject(subject) .subject(subject)
.multipart( .multipart(MultiPart::alternative().singlepart(text).singlepart(html))?;
MultiPart::alternative()
.singlepart(text)
.singlepart(html)
)?;
match mailer().send(&email) { match mailer().send(&email) {
Ok(_) => Ok(()), Ok(_) => Ok(()),
// Match some common errors and make them more user friendly // Match some common errors and make them more user friendly
Err(e) => match e { Err(e) => {
lettre::transport::smtp::Error::Client(x) => { if e.is_client() {
err!(format!("SMTP Client error: {}", x)); err!(format!("SMTP Client error: {}", e));
}, } else if e.is_transient() {
lettre::transport::smtp::Error::Transient(x) => { err!(format!("SMTP 4xx error: {:?}", e));
err!(format!("SMTP 4xx error: {:?}", x.message)); } else if e.is_permanent() {
}, err!(format!("SMTP 5xx error: {:?}", e));
lettre::transport::smtp::Error::Permanent(x) => { } else if e.is_timeout() {
err!(format!("SMTP 5xx error: {:?}", x.message)); err!(format!("SMTP timeout error: {:?}", e));
}, } else {
lettre::transport::smtp::Error::Io(x) => { Err(e.into())
err!(format!("SMTP IO error: {}", x)); }
},
// Fallback for all other errors
_ => Err(e.into())
} }
} }
} }

View File

@@ -16,6 +16,7 @@ extern crate diesel;
#[macro_use] #[macro_use]
extern crate diesel_migrations; extern crate diesel_migrations;
use job_scheduler::{Job, JobScheduler};
use std::{ use std::{
fs::create_dir_all, fs::create_dir_all,
panic, panic,
@@ -23,10 +24,9 @@ use std::{
process::{exit, Command}, process::{exit, Command},
str::FromStr, str::FromStr,
thread, thread,
time::Duration,
}; };
use structopt::StructOpt;
#[macro_use] #[macro_use]
mod error; mod error;
mod api; mod api;
@@ -40,14 +40,7 @@ mod util;
pub use config::CONFIG; pub use config::CONFIG;
pub use error::{Error, MapResult}; pub use error::{Error, MapResult};
pub use util::is_running_in_docker;
#[derive(Debug, StructOpt)]
#[structopt(name = "bitwarden_rs", about = "A Bitwarden API server written in Rust")]
struct Opt {
/// Prints the app version
#[structopt(short, long)]
version: bool,
}
fn main() { fn main() {
parse_args(); parse_args();
@@ -57,34 +50,47 @@ fn main() {
let level = LF::from_str(&CONFIG.log_level()).expect("Valid log level"); let level = LF::from_str(&CONFIG.log_level()).expect("Valid log level");
init_logging(level).ok(); init_logging(level).ok();
let extra_debug = match level { let extra_debug = matches!(level, LF::Trace | LF::Debug);
LF::Trace | LF::Debug => true,
_ => false,
};
check_data_folder();
check_rsa_keys(); check_rsa_keys();
check_web_vault(); check_web_vault();
create_icon_cache_folder(); create_icon_cache_folder();
launch_rocket(extra_debug); let pool = create_db_pool();
schedule_jobs(pool.clone());
launch_rocket(pool, extra_debug); // Blocks until program termination.
} }
const HELP: &str = "\
Alternative implementation of the Bitwarden server API written in Rust
USAGE:
vaultwarden
FLAGS:
-h, --help Prints help information
-v, --version Prints the app version
";
fn parse_args() { fn parse_args() {
let opt = Opt::from_args(); const NO_VERSION: &str = "(Version info from Git not present)";
if opt.version { let mut pargs = pico_args::Arguments::from_env();
if let Some(version) = option_env!("BWRS_VERSION") {
println!("bitwarden_rs {}", version); if pargs.contains(["-h", "--help"]) {
} else { println!("vaultwarden {}", option_env!("BWRS_VERSION").unwrap_or(NO_VERSION));
println!("bitwarden_rs (Version info from Git not present)"); print!("{}", HELP);
} exit(0);
} else if pargs.contains(["-v", "--version"]) {
println!("vaultwarden {}", option_env!("BWRS_VERSION").unwrap_or(NO_VERSION));
exit(0); exit(0);
} }
} }
fn launch_info() { fn launch_info() {
println!("/--------------------------------------------------------------------\\"); println!("/--------------------------------------------------------------------\\");
println!("| Starting Bitwarden_RS |"); println!("| Starting Vaultwarden |");
if let Some(version) = option_env!("BWRS_VERSION") { if let Some(version) = option_env!("BWRS_VERSION") {
println!("|{:^68}|", format!("Version {}", version)); println!("|{:^68}|", format!("Version {}", version));
@@ -96,7 +102,7 @@ fn launch_info() {
println!("| Send usage/configuration questions or feature requests to: |"); println!("| Send usage/configuration questions or feature requests to: |");
println!("| https://bitwardenrs.discourse.group/ |"); println!("| https://bitwardenrs.discourse.group/ |");
println!("| Report suspected bugs/issues in the software itself at: |"); println!("| Report suspected bugs/issues in the software itself at: |");
println!("| https://github.com/dani-garcia/bitwarden_rs/issues/new |"); println!("| https://github.com/dani-garcia/vaultwarden/issues/new |");
println!("\\--------------------------------------------------------------------/\n"); println!("\\--------------------------------------------------------------------/\n");
} }
@@ -121,7 +127,9 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
// Enable smtp debug logging only specifically for smtp when need. // Enable smtp debug logging only specifically for smtp when need.
// This can contain sensitive information we do not want in the default debug/trace logging. // This can contain sensitive information we do not want in the default debug/trace logging.
if CONFIG.smtp_debug() { if CONFIG.smtp_debug() {
println!("[WARNING] SMTP Debugging is enabled (SMTP_DEBUG=true). Sensitive information could be disclosed via logs!"); println!(
"[WARNING] SMTP Debugging is enabled (SMTP_DEBUG=true). Sensitive information could be disclosed via logs!"
);
println!("[WARNING] Only enable SMTP_DEBUG during troubleshooting!\n"); println!("[WARNING] Only enable SMTP_DEBUG during troubleshooting!\n");
logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Debug) logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Debug)
} else { } else {
@@ -199,7 +207,7 @@ fn chain_syslog(logger: fern::Dispatch) -> fern::Dispatch {
let syslog_fmt = syslog::Formatter3164 { let syslog_fmt = syslog::Formatter3164 {
facility: syslog::Facility::LOG_USER, facility: syslog::Facility::LOG_USER,
hostname: None, hostname: None,
process: "bitwarden_rs".into(), process: "vaultwarden".into(),
pid: 0, pid: 0,
}; };
@@ -212,9 +220,28 @@ fn chain_syslog(logger: fern::Dispatch) -> fern::Dispatch {
} }
} }
fn create_dir(path: &str, description: &str) {
// Try to create the specified dir, if it doesn't already exist.
let err_msg = format!("Error creating {} directory '{}'", description, path);
create_dir_all(path).expect(&err_msg);
}
fn create_icon_cache_folder() { fn create_icon_cache_folder() {
// Try to create the icon cache folder, and generate an error if it could not. create_dir(&CONFIG.icon_cache_folder(), "icon cache");
create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache directory"); }
fn check_data_folder() {
let data_folder = &CONFIG.data_folder();
let path = Path::new(data_folder);
if !path.exists() {
error!("Data folder '{}' doesn't exist.", data_folder);
if is_running_in_docker() {
error!("Verify that your data volume is mounted at the correct location.");
} else {
error!("Create the data folder and try again.");
}
exit(1);
}
} }
fn check_rsa_keys() { fn check_rsa_keys() {
@@ -273,22 +300,27 @@ fn check_web_vault() {
let index_path = Path::new(&CONFIG.web_vault_folder()).join("index.html"); let index_path = Path::new(&CONFIG.web_vault_folder()).join("index.html");
if !index_path.exists() { if !index_path.exists() {
error!("Web vault is not found at '{}'. To install it, please follow the steps in: ", CONFIG.web_vault_folder()); error!(
error!("https://github.com/dani-garcia/bitwarden_rs/wiki/Building-binary#install-the-web-vault"); "Web vault is not found at '{}'. To install it, please follow the steps in: ",
CONFIG.web_vault_folder()
);
error!("https://github.com/dani-garcia/vaultwarden/wiki/Building-binary#install-the-web-vault");
error!("You can also set the environment variable 'WEB_VAULT_ENABLED=false' to disable it"); error!("You can also set the environment variable 'WEB_VAULT_ENABLED=false' to disable it");
exit(1); exit(1);
} }
} }
fn launch_rocket(extra_debug: bool) { fn create_db_pool() -> db::DbPool {
let pool = match util::retry_db(db::DbPool::from_config, CONFIG.db_connection_retries()) { match util::retry_db(db::DbPool::from_config, CONFIG.db_connection_retries()) {
Ok(p) => p, Ok(p) => p,
Err(e) => { Err(e) => {
error!("Error creating database pool: {:?}", e); error!("Error creating database pool: {:?}", e);
exit(1); exit(1);
} }
}; }
}
fn launch_rocket(pool: db::DbPool, extra_debug: bool) {
let basepath = &CONFIG.domain_path(); let basepath = &CONFIG.domain_path();
// If adding more paths here, consider also adding them to // If adding more paths here, consider also adding them to
@@ -303,7 +335,7 @@ fn launch_rocket(extra_debug: bool) {
.manage(pool) .manage(pool)
.manage(api::start_notification_server()) .manage(api::start_notification_server())
.attach(util::AppHeaders()) .attach(util::AppHeaders())
.attach(util::CORS()) .attach(util::Cors())
.attach(util::BetterLogging(extra_debug)) .attach(util::BetterLogging(extra_debug))
.launch(); .launch();
@@ -311,3 +343,40 @@ fn launch_rocket(extra_debug: bool) {
// The launch will restore the original logging level // The launch will restore the original logging level
error!("Launch error {:#?}", result); error!("Launch error {:#?}", result);
} }
fn schedule_jobs(pool: db::DbPool) {
if CONFIG.job_poll_interval_ms() == 0 {
info!("Job scheduler disabled.");
return;
}
thread::Builder::new()
.name("job-scheduler".to_string())
.spawn(move || {
let mut sched = JobScheduler::new();
// Purge sends that are past their deletion date.
if !CONFIG.send_purge_schedule().is_empty() {
sched.add(Job::new(CONFIG.send_purge_schedule().parse().unwrap(), || {
api::purge_sends(pool.clone());
}));
}
// Purge trashed items that are old enough to be auto-deleted.
if !CONFIG.trash_purge_schedule().is_empty() {
sched.add(Job::new(CONFIG.trash_purge_schedule().parse().unwrap(), || {
api::purge_trashed_ciphers(pool.clone());
}));
}
// Periodically check for jobs to run. We probably won't need any
// jobs that run more often than once a minute, so a default poll
// interval of 30 seconds should be sufficient. Users who want to
// schedule jobs to run more frequently for some reason can reduce
// the poll interval accordingly.
loop {
sched.tick();
thread::sleep(Duration::from_millis(CONFIG.job_poll_interval_ms()));
}
})
.expect("Error spawning job scheduler thread");
}
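For readers unfamiliar with the job_scheduler crate, a minimal standalone sketch of the pattern used above (the cron string and poll interval are illustrative; note the extra seconds field in these cron expressions):
    use job_scheduler::{Job, JobScheduler};
    use std::{thread, time::Duration};

    fn run_scheduler() {
        let mut sched = JobScheduler::new();
        // "0 5 * * * *" fires at 5 minutes past every hour (sec min hour day month weekday).
        sched.add(Job::new("0 5 * * * *".parse().unwrap(), || {
            println!("purging expired sends");
        }));
        loop {
            sched.tick();                                 // runs any jobs that are due
            thread::sleep(Duration::from_millis(30_000)); // poll interval; 30s is ample for minute-level jobs
        }
    }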

View File

@@ -772,7 +772,8 @@
"stackoverflow.com", "stackoverflow.com",
"serverfault.com", "serverfault.com",
"mathoverflow.net", "mathoverflow.net",
"askubuntu.com" "askubuntu.com",
"stackapps.com"
], ],
"Excluded": false "Excluded": false
}, },
@@ -894,5 +895,13 @@
"sony.com" "sony.com"
], ],
"Excluded": false "Excluded": false
},
{
"Type": 85,
"Domains": [
"protonmail.com",
"protonvpn.com"
],
"Excluded": false
} }
] ]

Binary file not shown (image, 331 B).

View File

@@ -1,402 +0,0 @@
/*
* JavaScript MD5
* https://github.com/blueimp/JavaScript-MD5
*
* Copyright 2011, Sebastian Tschan
* https://blueimp.net
*
* Licensed under the MIT license:
* https://opensource.org/licenses/MIT
*
* Based on
* A JavaScript implementation of the RSA Data Security, Inc. MD5 Message
* Digest Algorithm, as defined in RFC 1321.
* Version 2.2 Copyright (C) Paul Johnston 1999 - 2009
* Other contributors: Greg Holt, Andrew Kepert, Ydnar, Lostinet
* Distributed under the BSD License
* See http://pajhome.org.uk/crypt/md5 for more info.
*/
/* global define */
/* eslint-disable strict */
;(function($) {
'use strict'
/**
* Add integers, wrapping at 2^32.
* This uses 16-bit operations internally to work around bugs in interpreters.
*
* @param {number} x First integer
* @param {number} y Second integer
* @returns {number} Sum
*/
function safeAdd(x, y) {
var lsw = (x & 0xffff) + (y & 0xffff)
var msw = (x >> 16) + (y >> 16) + (lsw >> 16)
return (msw << 16) | (lsw & 0xffff)
}
/**
* Bitwise rotate a 32-bit number to the left.
*
* @param {number} num 32-bit number
* @param {number} cnt Rotation count
* @returns {number} Rotated number
*/
function bitRotateLeft(num, cnt) {
return (num << cnt) | (num >>> (32 - cnt))
}
/**
* Basic operation the algorithm uses.
*
* @param {number} q q
* @param {number} a a
* @param {number} b b
* @param {number} x x
* @param {number} s s
* @param {number} t t
* @returns {number} Result
*/
function md5cmn(q, a, b, x, s, t) {
return safeAdd(bitRotateLeft(safeAdd(safeAdd(a, q), safeAdd(x, t)), s), b)
}
/**
* Basic operation the algorithm uses.
*
* @param {number} a a
* @param {number} b b
* @param {number} c c
* @param {number} d d
* @param {number} x x
* @param {number} s s
* @param {number} t t
* @returns {number} Result
*/
function md5ff(a, b, c, d, x, s, t) {
return md5cmn((b & c) | (~b & d), a, b, x, s, t)
}
/**
* Basic operation the algorithm uses.
*
* @param {number} a a
* @param {number} b b
* @param {number} c c
* @param {number} d d
* @param {number} x x
* @param {number} s s
* @param {number} t t
* @returns {number} Result
*/
function md5gg(a, b, c, d, x, s, t) {
return md5cmn((b & d) | (c & ~d), a, b, x, s, t)
}
/**
* Basic operation the algorithm uses.
*
* @param {number} a a
* @param {number} b b
* @param {number} c c
* @param {number} d d
* @param {number} x x
* @param {number} s s
* @param {number} t t
* @returns {number} Result
*/
function md5hh(a, b, c, d, x, s, t) {
return md5cmn(b ^ c ^ d, a, b, x, s, t)
}
/**
* Basic operation the algorithm uses.
*
* @param {number} a a
* @param {number} b b
* @param {number} c c
* @param {number} d d
* @param {number} x x
* @param {number} s s
* @param {number} t t
* @returns {number} Result
*/
function md5ii(a, b, c, d, x, s, t) {
return md5cmn(c ^ (b | ~d), a, b, x, s, t)
}
/**
* Calculate the MD5 of an array of little-endian words, and a bit length.
*
* @param {Array} x Array of little-endian words
* @param {number} len Bit length
* @returns {Array<number>} MD5 Array
*/
function binlMD5(x, len) {
/* append padding */
x[len >> 5] |= 0x80 << len % 32
x[(((len + 64) >>> 9) << 4) + 14] = len
var i
var olda
var oldb
var oldc
var oldd
var a = 1732584193
var b = -271733879
var c = -1732584194
var d = 271733878
for (i = 0; i < x.length; i += 16) {
olda = a
oldb = b
oldc = c
oldd = d
a = md5ff(a, b, c, d, x[i], 7, -680876936)
d = md5ff(d, a, b, c, x[i + 1], 12, -389564586)
c = md5ff(c, d, a, b, x[i + 2], 17, 606105819)
b = md5ff(b, c, d, a, x[i + 3], 22, -1044525330)
a = md5ff(a, b, c, d, x[i + 4], 7, -176418897)
d = md5ff(d, a, b, c, x[i + 5], 12, 1200080426)
c = md5ff(c, d, a, b, x[i + 6], 17, -1473231341)
b = md5ff(b, c, d, a, x[i + 7], 22, -45705983)
a = md5ff(a, b, c, d, x[i + 8], 7, 1770035416)
d = md5ff(d, a, b, c, x[i + 9], 12, -1958414417)
c = md5ff(c, d, a, b, x[i + 10], 17, -42063)
b = md5ff(b, c, d, a, x[i + 11], 22, -1990404162)
a = md5ff(a, b, c, d, x[i + 12], 7, 1804603682)
d = md5ff(d, a, b, c, x[i + 13], 12, -40341101)
c = md5ff(c, d, a, b, x[i + 14], 17, -1502002290)
b = md5ff(b, c, d, a, x[i + 15], 22, 1236535329)
a = md5gg(a, b, c, d, x[i + 1], 5, -165796510)
d = md5gg(d, a, b, c, x[i + 6], 9, -1069501632)
c = md5gg(c, d, a, b, x[i + 11], 14, 643717713)
b = md5gg(b, c, d, a, x[i], 20, -373897302)
a = md5gg(a, b, c, d, x[i + 5], 5, -701558691)
d = md5gg(d, a, b, c, x[i + 10], 9, 38016083)
c = md5gg(c, d, a, b, x[i + 15], 14, -660478335)
b = md5gg(b, c, d, a, x[i + 4], 20, -405537848)
a = md5gg(a, b, c, d, x[i + 9], 5, 568446438)
d = md5gg(d, a, b, c, x[i + 14], 9, -1019803690)
c = md5gg(c, d, a, b, x[i + 3], 14, -187363961)
b = md5gg(b, c, d, a, x[i + 8], 20, 1163531501)
a = md5gg(a, b, c, d, x[i + 13], 5, -1444681467)
d = md5gg(d, a, b, c, x[i + 2], 9, -51403784)
c = md5gg(c, d, a, b, x[i + 7], 14, 1735328473)
b = md5gg(b, c, d, a, x[i + 12], 20, -1926607734)
a = md5hh(a, b, c, d, x[i + 5], 4, -378558)
d = md5hh(d, a, b, c, x[i + 8], 11, -2022574463)
c = md5hh(c, d, a, b, x[i + 11], 16, 1839030562)
b = md5hh(b, c, d, a, x[i + 14], 23, -35309556)
a = md5hh(a, b, c, d, x[i + 1], 4, -1530992060)
d = md5hh(d, a, b, c, x[i + 4], 11, 1272893353)
c = md5hh(c, d, a, b, x[i + 7], 16, -155497632)
b = md5hh(b, c, d, a, x[i + 10], 23, -1094730640)
a = md5hh(a, b, c, d, x[i + 13], 4, 681279174)
d = md5hh(d, a, b, c, x[i], 11, -358537222)
c = md5hh(c, d, a, b, x[i + 3], 16, -722521979)
b = md5hh(b, c, d, a, x[i + 6], 23, 76029189)
a = md5hh(a, b, c, d, x[i + 9], 4, -640364487)
d = md5hh(d, a, b, c, x[i + 12], 11, -421815835)
c = md5hh(c, d, a, b, x[i + 15], 16, 530742520)
b = md5hh(b, c, d, a, x[i + 2], 23, -995338651)
a = md5ii(a, b, c, d, x[i], 6, -198630844)
d = md5ii(d, a, b, c, x[i + 7], 10, 1126891415)
c = md5ii(c, d, a, b, x[i + 14], 15, -1416354905)
b = md5ii(b, c, d, a, x[i + 5], 21, -57434055)
a = md5ii(a, b, c, d, x[i + 12], 6, 1700485571)
d = md5ii(d, a, b, c, x[i + 3], 10, -1894986606)
c = md5ii(c, d, a, b, x[i + 10], 15, -1051523)
b = md5ii(b, c, d, a, x[i + 1], 21, -2054922799)
a = md5ii(a, b, c, d, x[i + 8], 6, 1873313359)
d = md5ii(d, a, b, c, x[i + 15], 10, -30611744)
c = md5ii(c, d, a, b, x[i + 6], 15, -1560198380)
b = md5ii(b, c, d, a, x[i + 13], 21, 1309151649)
a = md5ii(a, b, c, d, x[i + 4], 6, -145523070)
d = md5ii(d, a, b, c, x[i + 11], 10, -1120210379)
c = md5ii(c, d, a, b, x[i + 2], 15, 718787259)
b = md5ii(b, c, d, a, x[i + 9], 21, -343485551)
a = safeAdd(a, olda)
b = safeAdd(b, oldb)
c = safeAdd(c, oldc)
d = safeAdd(d, oldd)
}
return [a, b, c, d]
}
/**
* Convert an array of little-endian words to a string
*
* @param {Array<number>} input MD5 Array
* @returns {string} MD5 string
*/
function binl2rstr(input) {
var i
var output = ''
var length32 = input.length * 32
for (i = 0; i < length32; i += 8) {
output += String.fromCharCode((input[i >> 5] >>> i % 32) & 0xff)
}
return output
}
/**
* Convert a raw string to an array of little-endian words
* Characters >255 have their high-byte silently ignored.
*
* @param {string} input Raw input string
* @returns {Array<number>} Array of little-endian words
*/
function rstr2binl(input) {
var i
var output = []
output[(input.length >> 2) - 1] = undefined
for (i = 0; i < output.length; i += 1) {
output[i] = 0
}
var length8 = input.length * 8
for (i = 0; i < length8; i += 8) {
output[i >> 5] |= (input.charCodeAt(i / 8) & 0xff) << i % 32
}
return output
}
/**
* Calculate the MD5 of a raw string
*
* @param {string} s Input string
* @returns {string} Raw MD5 string
*/
function rstrMD5(s) {
return binl2rstr(binlMD5(rstr2binl(s), s.length * 8))
}
/**
* Calculates the HMAC-MD5 of a key and some data (raw strings)
*
* @param {string} key HMAC key
* @param {string} data Raw input string
* @returns {string} Raw MD5 string
*/
function rstrHMACMD5(key, data) {
var i
var bkey = rstr2binl(key)
var ipad = []
var opad = []
var hash
ipad[15] = opad[15] = undefined
if (bkey.length > 16) {
bkey = binlMD5(bkey, key.length * 8)
}
for (i = 0; i < 16; i += 1) {
ipad[i] = bkey[i] ^ 0x36363636
opad[i] = bkey[i] ^ 0x5c5c5c5c
}
hash = binlMD5(ipad.concat(rstr2binl(data)), 512 + data.length * 8)
return binl2rstr(binlMD5(opad.concat(hash), 512 + 128))
}
/**
* Convert a raw string to a hex string
*
* @param {string} input Raw input string
* @returns {string} Hex encoded string
*/
function rstr2hex(input) {
var hexTab = '0123456789abcdef'
var output = ''
var x
var i
for (i = 0; i < input.length; i += 1) {
x = input.charCodeAt(i)
output += hexTab.charAt((x >>> 4) & 0x0f) + hexTab.charAt(x & 0x0f)
}
return output
}
/**
* Encode a string as UTF-8
*
* @param {string} input Input string
* @returns {string} UTF8 string
*/
function str2rstrUTF8(input) {
return unescape(encodeURIComponent(input))
}
/**
* Encodes input string as raw MD5 string
*
* @param {string} s Input string
* @returns {string} Raw MD5 string
*/
function rawMD5(s) {
return rstrMD5(str2rstrUTF8(s))
}
/**
* Encodes input string as Hex encoded string
*
* @param {string} s Input string
* @returns {string} Hex encoded string
*/
function hexMD5(s) {
return rstr2hex(rawMD5(s))
}
/**
* Calculates the raw HMAC-MD5 for the given key and data
*
* @param {string} k HMAC key
* @param {string} d Input string
* @returns {string} Raw MD5 string
*/
function rawHMACMD5(k, d) {
return rstrHMACMD5(str2rstrUTF8(k), str2rstrUTF8(d))
}
/**
* Calculates the Hex encoded HMAC-MD5 for the given key and data
*
* @param {string} k HMAC key
* @param {string} d Input string
* @returns {string} Raw MD5 string
*/
function hexHMACMD5(k, d) {
return rstr2hex(rawHMACMD5(k, d))
}
/**
* Calculates MD5 value for a given string.
* If a key is provided, calculates the HMAC-MD5 value.
* Returns a Hex encoded string unless the raw argument is given.
*
* @param {string} string Input string
* @param {string} [key] HMAC key
* @param {boolean} [raw] Raw output switch
* @returns {string} MD5 output
*/
function md5(string, key, raw) {
if (!key) {
if (!raw) {
return hexMD5(string)
}
return rawMD5(string)
}
if (!raw) {
return hexHMACMD5(key, string)
}
return rawHMACMD5(key, string)
}
if (typeof define === 'function' && define.amd) {
define(function() {
return md5
})
} else if (typeof module === 'object' && module.exports) {
module.exports = md5
} else {
$.md5 = md5
}
})(this)

View File

@@ -5,7 +5,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="robots" content="noindex,nofollow" /> <meta name="robots" content="noindex,nofollow" />
<link rel="icon" type="image/png" href="{{urlpath}}/bwrs_static/shield-white.png"> <link rel="icon" type="image/png" href="{{urlpath}}/bwrs_static/shield-white.png">
<title>Bitwarden_rs Admin Panel</title> <title>Vaultwarden Admin Panel</title>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" /> <link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" />
<style> <style>
body { body {
@@ -20,7 +20,6 @@
width: auto; width: auto;
} }
</style> </style>
<script src="{{urlpath}}/bwrs_static/md5.js"></script>
<script src="{{urlpath}}/bwrs_static/identicon.js"></script> <script src="{{urlpath}}/bwrs_static/identicon.js"></script>
<script> <script>
function reload() { window.location.reload(); } function reload() { window.location.reload(); }
@@ -28,8 +27,17 @@
text && alert(text); text && alert(text);
reload_page && reload(); reload_page && reload();
} }
function identicon(email) { async function sha256(message) {
const data = new Identicon(md5(email), { size: 48, format: 'svg' }); // https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest
const msgUint8 = new TextEncoder().encode(message);
const hashBuffer = await crypto.subtle.digest('SHA-256', msgUint8);
const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
return hashHex;
}
async function identicon(email) {
const hash = await sha256(email);
const data = new Identicon(hash, { size: 48, format: 'svg' });
return "data:image/svg+xml;base64," + data.toString(); return "data:image/svg+xml;base64," + data.toString();
} }
function toggleVis(input_id) { function toggleVis(input_id) {
@@ -75,7 +83,7 @@
<body class="bg-light"> <body class="bg-light">
<nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top"> <nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top">
<div class="container-xl"> <div class="container-xl">
<a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Bitwarden_rs Admin</a> <a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Vaultwarden Admin</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse" <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse"
aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation"> aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span> <span class="navbar-toggler-icon"></span>

View File

@@ -2,7 +2,7 @@
<div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow"> <div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-2">Diagnostics</h6> <h6 class="border-bottom pb-2 mb-2">Diagnostics</h6>
<h3>Version</h3> <h3>Versions</h3>
<div class="row"> <div class="row">
<div class="col-md"> <div class="col-md">
<dl class="row"> <dl class="row">
@@ -20,6 +20,7 @@
<dd class="col-sm-7"> <dd class="col-sm-7">
<span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span> <span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span>
</dd> </dd>
{{#if diagnostics.web_vault_enabled}}
<dt class="col-sm-5">Web Installed <dt class="col-sm-5">Web Installed
<span class="badge badge-success d-none" id="web-success" title="Latest version is installed.">Ok</span> <span class="badge badge-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge badge-warning d-none" id="web-warning" title="There seems to be an update available.">Update</span> <span class="badge badge-warning d-none" id="web-warning" title="There seems to be an update available.">Update</span>
@@ -35,6 +36,17 @@
<span id="web-latest">{{diagnostics.latest_web_build}}</span> <span id="web-latest">{{diagnostics.latest_web_build}}</span>
</dd> </dd>
{{/unless}} {{/unless}}
{{/if}}
{{#unless diagnostics.web_vault_enabled}}
<dt class="col-sm-5">Web Installed</dt>
<dd class="col-sm-7">
<span id="web-installed">Web Vault is disabled</span>
</dd>
{{/unless}}
<dt class="col-sm-5">Database</dt>
<dd class="col-sm-7">
<span><b>{{diagnostics.db_type}}:</b> {{diagnostics.db_version}}</span>
</dd>
</dl> </dl>
</div> </div>
</div> </div>
@@ -46,35 +58,65 @@
<dt class="col-sm-5">Running within Docker</dt> <dt class="col-sm-5">Running within Docker</dt>
<dd class="col-sm-7"> <dd class="col-sm-7">
{{#if diagnostics.running_within_docker}} {{#if diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>Yes</b></span> <span class="d-block"><b>Yes</b></span>
{{/if}} {{/if}}
{{#unless diagnostics.running_within_docker}} {{#unless diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>No</b></span> <span class="d-block"><b>No</b></span>
{{/unless}} {{/unless}}
</dd> </dd>
<dt class="col-sm-5">Uses a proxy</dt> <dt class="col-sm-5">Uses a reverse proxy</dt>
<dd class="col-sm-7"> <dd class="col-sm-7">
{{#if diagnostics.uses_proxy}} {{#if diagnostics.ip_header_exists}}
<span id="running-docker" class="d-block"><b>Yes</b></span> <span class="d-block" title="IP Header found."><b>Yes</b></span>
{{/if}} {{/if}}
{{#unless diagnostics.uses_proxy}} {{#unless diagnostics.ip_header_exists}}
<span id="running-docker" class="d-block"><b>No</b></span> <span class="d-block" title="No IP Header found."><b>No</b></span>
{{/unless}} {{/unless}}
</dd> </dd>
{{!-- Only show this if the IP Header Exists --}}
{{#if diagnostics.ip_header_exists}}
<dt class="col-sm-5">IP header
{{#if diagnostics.ip_header_match}}
<span class="badge badge-success" title="IP_HEADER config seems to be valid.">Match</span>
{{/if}}
{{#unless diagnostics.ip_header_match}}
<span class="badge badge-danger" title="IP_HEADER config seems to be invalid. IP's in the log could be invalid. Please fix.">No Match</span>
{{/unless}}
</dt>
<dd class="col-sm-7">
{{#if diagnostics.ip_header_match}}
<span class="d-block"><b>Config/Server:</b> {{ diagnostics.ip_header_name }}</span>
{{/if}}
{{#unless diagnostics.ip_header_match}}
<span class="d-block"><b>Config:</b> {{ diagnostics.ip_header_config }}</span>
<span class="d-block"><b>Server:</b> {{ diagnostics.ip_header_name }}</span>
{{/unless}}
</dd>
{{/if}}
{{!-- End if IP Header Exists --}}
<dt class="col-sm-5">Internet access <dt class="col-sm-5">Internet access
{{#if diagnostics.has_http_access}} {{#if diagnostics.has_http_access}}
<span class="badge badge-success" id="internet-success" title="We have internet access!">Ok</span> <span class="badge badge-success" title="We have internet access!">Ok</span>
{{/if}} {{/if}}
{{#unless diagnostics.has_http_access}} {{#unless diagnostics.has_http_access}}
<span class="badge badge-danger" id="internet-warning" title="There seems to be no internet access. Please fix.">Error</span> <span class="badge badge-danger" title="There seems to be no internet access. Please fix.">Error</span>
{{/unless}} {{/unless}}
</dt> </dt>
<dd class="col-sm-7"> <dd class="col-sm-7">
{{#if diagnostics.has_http_access}} {{#if diagnostics.has_http_access}}
<span id="running-docker" class="d-block"><b>Yes</b></span> <span class="d-block"><b>Yes</b></span>
{{/if}} {{/if}}
{{#unless diagnostics.has_http_access}} {{#unless diagnostics.has_http_access}}
<span id="running-docker" class="d-block"><b>No</b></span> <span class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Internet access via a proxy</dt>
<dd class="col-sm-7">
{{#if diagnostics.uses_proxy}}
<span class="d-block" title="Internet access goes via a proxy (HTTPS_PROXY or HTTP_PROXY is configured)."><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.uses_proxy}}
<span class="d-block" title="We have direct internet access, no outgoing proxy configured."><b>No</b></span>
{{/unless}} {{/unless}}
</dd> </dd>
<dt class="col-sm-5">DNS (github.com) <dt class="col-sm-5">DNS (github.com)
@@ -84,7 +126,10 @@
<dd class="col-sm-7"> <dd class="col-sm-7">
<span id="dns-resolved">{{diagnostics.dns_resolved}}</span> <span id="dns-resolved">{{diagnostics.dns_resolved}}</span>
</dd> </dd>
<dt class="col-sm-5">Date & Time (Local)</dt>
<dd class="col-sm-7">
<span><b>Server:</b> {{diagnostics.server_time_local}}</span>
</dd>
<dt class="col-sm-5">Date & Time (UTC) <dt class="col-sm-5">Date & Time (UTC)
<span class="badge badge-success d-none" id="time-success" title="Time offsets seem to be correct.">Ok</span> <span class="badge badge-success d-none" id="time-success" title="Time offsets seem to be correct.">Ok</span>
<span class="badge badge-danger d-none" id="time-warning" title="Time offsets are too mouch at drift.">Error</span> <span class="badge badge-danger d-none" id="time-warning" title="Time offsets are too mouch at drift.">Error</span>
@@ -114,8 +159,8 @@
<dl class="row"> <dl class="row">
<dd class="col-sm-12"> <dd class="col-sm-12">
If you need support please check the following links first before you create a new issue: If you need support please check the following links first before you create a new issue:
<a href="https://bitwardenrs.discourse.group/" target="_blank" rel="noreferrer">Bitwarden_RS Forum</a> <a href="https://bitwardenrs.discourse.group/" target="_blank" rel="noreferrer">Vaultwarden Forum</a>
| <a href="https://github.com/dani-garcia/bitwarden_rs/discussions" target="_blank" rel="noreferrer">Github Discussions</a> | <a href="https://github.com/dani-garcia/vaultwarden/discussions" target="_blank" rel="noreferrer">Github Discussions</a>
</dd> </dd>
</dl> </dl>
<dl class="row"> <dl class="row">
@@ -177,7 +222,7 @@
} }
// ================================ // ================================
// Version check for both bitwarden_rs and web-vault // Version check for both vaultwarden and web-vault
let serverInstalled = document.getElementById('server-installed').innerText; let serverInstalled = document.getElementById('server-installed').innerText;
let serverLatest = document.getElementById('server-latest').innerText; let serverLatest = document.getElementById('server-latest').innerText;
let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', ''); let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', '');
@@ -260,19 +305,21 @@
async function generateSupportString() { async function generateSupportString() {
supportString = "### Your environment (Generated via diagnostics page)\n"; supportString = "### Your environment (Generated via diagnostics page)\n";
supportString += "* Bitwarden_rs version: v{{ version }}\n"; supportString += "* Vaultwarden version: v{{ version }}\n";
supportString += "* Web-vault version: v{{ diagnostics.web_vault_version }}\n"; supportString += "* Web-vault version: v{{ diagnostics.web_vault_version }}\n";
supportString += "* Running within Docker: {{ diagnostics.running_within_docker }}\n"; supportString += "* Running within Docker: {{ diagnostics.running_within_docker }}\n";
supportString += "* Uses a reverse proxy: {{ diagnostics.ip_header_exists }}\n";
{{#if diagnostics.ip_header_exists}}
supportString += "* IP Header check: {{ diagnostics.ip_header_match }} ({{ diagnostics.ip_header_name }})\n";
{{/if}}
supportString += "* Internet access: {{ diagnostics.has_http_access }}\n"; supportString += "* Internet access: {{ diagnostics.has_http_access }}\n";
supportString += "* Uses a proxy: {{ diagnostics.uses_proxy }}\n"; supportString += "* Internet access via a proxy: {{ diagnostics.uses_proxy }}\n";
supportString += "* DNS Check: " + dnsCheck + "\n"; supportString += "* DNS Check: " + dnsCheck + "\n";
supportString += "* Time Check: " + timeCheck + "\n"; supportString += "* Time Check: " + timeCheck + "\n";
supportString += "* Domain Configuration Check: " + domainCheck + "\n"; supportString += "* Domain Configuration Check: " + domainCheck + "\n";
supportString += "* HTTPS Check: " + httpsCheck + "\n"; supportString += "* HTTPS Check: " + httpsCheck + "\n";
supportString += "* Database type: {{ diagnostics.db_type }}\n"; supportString += "* Database type: {{ diagnostics.db_type }}\n";
{{#case diagnostics.db_type "MySQL" "PostgreSQL"}} supportString += "* Database version: {{ diagnostics.db_version }}\n";
supportString += "* Database version: [PLEASE PROVIDE DATABASE VERSION]\n";
{{/case}}
supportString += "* Clients used: \n"; supportString += "* Clients used: \n";
supportString += "* Reverse proxy and version: \n"; supportString += "* Reverse proxy and version: \n";
supportString += "* Other relevant information: \n"; supportString += "* Other relevant information: \n";

View File

@@ -73,9 +73,11 @@
return false; return false;
} }
document.querySelectorAll("img.identicon").forEach(function (e, i) { (async () => {
e.src = identicon(e.dataset.src); for (let e of document.querySelectorAll("img.identicon")) {
}); e.src = await identicon(e.dataset.src);
}
})();
document.addEventListener("DOMContentLoaded", function(event) { document.addEventListener("DOMContentLoaded", function(event) {
$('#orgs-table').DataTable({ $('#orgs-table').DataTable({

View File

@@ -116,7 +116,11 @@
data-target="#g_database">Backup Database</button></div> data-target="#g_database">Backup Database</button></div>
<div id="g_database" class="card-body collapse" data-parent="#config-form"> <div id="g_database" class="card-body collapse" data-parent="#config-form">
<div class="small mb-3"> <div class="small mb-3">
NOTE: A local installation of sqlite3 is required for this section to work. WARNING: This function only creates a backup copy of the SQLite database.
This does not include any configuration or file attachment data that may
also be needed to fully restore a vaultwarden instance. For details on
how to perform complete backups, refer to the wiki page on
<a href="https://github.com/dani-garcia/vaultwarden/wiki/Backing-up-your-vault">backups</a>.
</div> </div>
<button type="button" class="btn btn-primary" onclick="backupDatabase();">Backup Database</button> <button type="button" class="btn btn-primary" onclick="backupDatabase();">Backup Database</button>
</div> </div>

View File

@@ -206,9 +206,11 @@
"3": { "name": "Manager", "color": "green" }, "3": { "name": "Manager", "color": "green" },
}; };
document.querySelectorAll("img.identicon").forEach(function (e, i) { (async () => {
e.src = identicon(e.dataset.src); for (let e of document.querySelectorAll("img.identicon")) {
}); e.src = await identicon(e.dataset.src);
}
})();
document.querySelectorAll("[data-orgtype]").forEach(function (e, i) { document.querySelectorAll("[data-orgtype]").forEach(function (e, i) {
let orgtype = OrgTypes[e.dataset.orgtype]; let orgtype = OrgTypes[e.dataset.orgtype];

View File

@@ -5,4 +5,4 @@ To finalize changing your email address enter the following code in web vault: {
If you did not try to change an email address, you can safely ignore this email. If you did not try to change an email address, you can safely ignore this email.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ Your Email Change
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -113,7 +113,7 @@ Your Email Change
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -7,4 +7,4 @@ Delete Your Account: {{url}}/#/verify-recover-delete?userId={{user_id}}&token={{
If you did not request this email to delete your account, you can safely ignore this email. If you did not request this email to delete your account, you can safely ignore this email.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ Delete Your Account
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -121,7 +121,7 @@ Delete Your Account
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -1,7 +1,7 @@
Invitation to {{{org_name}}} accepted Invitation to {{{org_name}}} accepted
<!----------------> <!---------------->
Your invitation for *{{email}}* to join *{{org_name}}* was accepted. Your invitation for *{{email}}* to join *{{org_name}}* was accepted.
Please log in via {{url}} to the bitwarden_rs server and confirm them from the organization management page. Please log in via {{url}} to the vaultwarden server and confirm them from the organization management page.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ Invitation to {{{org_name}}} accepted
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -101,7 +101,7 @@ Invitation to {{{org_name}}} accepted
</tr> </tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;"> <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top"> <td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Please <a href="{{url}}/">log in</a> to the bitwarden_rs server and confirm them from the organization management page. Please <a href="{{url}}/">log in</a> to the vaultwarden server and confirm them from the organization management page.
</td> </td>
</tr> </tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;"> <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
@@ -118,7 +118,7 @@ Invitation to {{{org_name}}} accepted
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -4,4 +4,4 @@ Your invitation to join *{{org_name}}* was confirmed.
It will now appear under the Organizations the next time you log in to the web vault at {{url}}. It will now appear under the Organizations the next time you log in to the web vault at {{url}}.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ Invitation to {{{org_name}}} confirmed
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -114,7 +114,7 @@ Invitation to {{{org_name}}} confirmed
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -9,4 +9,4 @@ Your account was just logged into from a new device.
You can deauthorize all devices that have access to your account from the web vault ( {{url}} ) under Settings > My Account > Deauthorize Sessions. You can deauthorize all devices that have access to your account from the web vault ( {{url}} ) under Settings > My Account > Deauthorize Sessions.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ New Device Logged In From {{{device}}}
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -128,7 +128,7 @@ New Device Logged In From {{{device}}}
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -7,4 +7,4 @@ If you cannot remember your master password, there is no way to recover your dat
If you did not request your master password hint you can safely ignore this email. If you did not request your master password hint you can safely ignore this email.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

View File

@@ -4,7 +4,7 @@ Sorry, you have no password hint...
<head> <head>
<meta name="viewport" content="width=device-width" /> <meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Bitwarden_rs</title> <title>Vaultwarden</title>
</head> </head>
<body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6"> <body style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; height: 100%; line-height: 25px; width: 100% !important;" bgcolor="#f6f6f6">
<style type="text/css"> <style type="text/css">
@@ -118,7 +118,7 @@ Sorry, you have no password hint...
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top"> <td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;"> <table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;"> <tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/bitwarden_rs" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td> <td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr> </tr>
</table> </table>
</td> </td>

View File

@@ -10,4 +10,4 @@ If you cannot remember your master password, there is no way to recover your dat
If you did not request your master password hint you can safely ignore this email. If you did not request your master password hint you can safely ignore this email.
=== ===
Github: https://github.com/dani-garcia/bitwarden_rs Github: https://github.com/dani-garcia/vaultwarden

Some files were not shown because too many files have changed in this diff.