Mirror of https://github.com/dani-garcia/vaultwarden.git (synced 2025-09-10 02:35:58 +03:00)

Compare commits (138 commits)
Commits included (SHA1):

9e4d372213, d0bf0ab237, e327583aa5, ead2f02cbd, c453528dc1, 6ae48aa8c2, 88643fd9d5, 73e0002219,
c49ee47de0, 14408396bb, 6cbb724069, a2316ca091, c476e19796, 9f393cfd9d, 450c4d4d97, 75e62abed0,
97f9eb1320, 53cc8a65af, f94ac6ca61, cee3fd5ba2, 016fe2269e, 03c0a5e405, cbbed79036, 4af81ec50e,
a5ba67fef2, 4cebe1fff4, a984dbbdf3, 881524bd54, 44da9e6ca7, 4c0c8f7432, f67854c59c, a1c1b9ab3b,
395979e834, fce6cb5865, 338756550a, d014eede9a, 9930a0d752, 9928a5404b, a6e0ddcdf1, acab70ed89,
c0d149060f, 344f00d9c9, b26afb970a, 34ed5ce4b3, 9375d5b8c2, e3678b4b56, b4c95fb4ac, 0bb33e04bb,
4d33e24099, 2cdce04662, 756d108f6a, ca20b3d80c, 4ab9362971, 4e8828e41a, f8d1cfad2a, b0a411b733,
81741647f3, f36bd72a7f, 8c10de3edd, 0ab10a7c43, a1a5e00ff5, 8af4b593fa, 9bef2c120c, f7d99c43b5,
ca0fd7a31b, 9e1550af8e, a99c9715f6, 1a888b5355, 10d5c7738a, 80f23e6d78, d5ed2ce6df, 5e649f0d0d,
612c0e9478, 0d2b3bfb99, c934838ace, 4350e9d241, 0cdc0cb147, 20535065d7, a23f4a704b, 93f2f74767,
37ca202247, 37525b1e7e, d594b5a266, 41add45e67, 08b168a0a1, 978ef2bc8b, 881d1f4334, 56b4f46d7d,
f6bd8b3462, 1f0f64d961, 42ba817a4c, dd98fe860b, 1fe9f101be, c68fbb41d2, 91e80657e4, 2db30f918e,
cfceac3909, 58b046fd10, 227779256c, 89b5f7c98d, c666497130, 2620a1ac8c, ffdcafa044, 56ffec40f4,
96c2416903, 340d42a1ca, e19420160f, 1741316f42, 4f08167d6f, fef76e2f6f, f16d56cb27, 120b286f2b,
7f437b6947, 8d6e62e18b, d0ec410b73, c546a59c38, e5ec245626, 6ea95d1ede, 88bea44dd8, 8ee5d51bd4,
c640abbcd7, 13598c098f, a622b4d2fb, 403f35b571, 3968bc8016, ff66368cb6, 3fb419e704, 832f838ddd,
18703bf195, ff8e88a5df, 39167d333a, f707f86c8e, cc021a4784, e3c4609c2a, 89a68741d6, 2421d49d9a,
1db37bf3d0, d75a80bd2d
@@ -61,6 +61,10 @@
 ## To control this on a per-org basis instead, use the "Disable Send" org policy.
 # SENDS_ALLOWED=true
 
+## Controls whether users can enable emergency access to their accounts.
+## This setting applies globally to all users.
+# EMERGENCY_ACCESS_ALLOWED=true
+
 ## Job scheduler settings
 ##
 ## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
@@ -77,6 +81,18 @@
 ## Cron schedule of the job that checks for trashed items to delete permanently.
 ## Defaults to daily (5 minutes after midnight). Set blank to disable this job.
 # TRASH_PURGE_SCHEDULE="0 5 0 * * *"
+##
+## Cron schedule of the job that checks for incomplete 2FA logins.
+## Defaults to once every minute. Set blank to disable this job.
+# INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
+##
+## Cron schedule of the job that sends expiration reminders to emergency access grantors.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
+##
+## Cron schedule of the job that grants emergency access requests that have met the required wait time.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"
 
 ## Enable extended logging, which shows timestamps and targets in the logs
 # EXTENDED_LOGGING=true
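These schedule strings use the six-field syntax of the `cron` crate linked above: second, minute, hour, day of month, month, day of week. A minimal sketch of checking a schedule with that crate (assuming `cron` and `chrono` as dependencies):

```rust
use std::str::FromStr;

use chrono::Utc;
use cron::Schedule; // the parser referenced in the comments above

fn main() {
    // "0 5 0 * * *" = second 0, minute 5, hour 0, every day of every month:
    // 00:05:00 daily, i.e. the default TRASH_PURGE_SCHEDULE.
    let schedule = Schedule::from_str("0 5 0 * * *").expect("valid cron expression");
    for datetime in schedule.upcoming(Utc).take(3) {
        println!("next run: {}", datetime);
    }
}
```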
@@ -194,11 +210,13 @@
 ## Name shown in the invitation emails that don't come from a specific organization
 # INVITATION_ORG_NAME=Vaultwarden
 
-## Per-organization attachment limit (KB)
-## Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
+## Per-organization attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per organization.
+## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
 # ORG_ATTACHMENT_LIMIT=
-## Per-user attachment limit (KB).
-## Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more
+## Per-user attachment storage limit (KB)
+## Max kilobytes of attachment storage allowed per user.
+## When this limit is reached, the user will not be allowed to upload further attachments.
 # USER_ATTACHMENT_LIMIT=
 
 ## Number of days to wait before auto-deleting a trashed item.
@@ -206,12 +224,21 @@
 ## This setting applies globally, so make sure to inform all users of any changes to this setting.
 # TRASH_AUTO_DELETE_DAYS=
 
+## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
+## resulting in an email notification. An incomplete 2FA login is one where the correct
+## master password was provided but the required 2FA step was not completed, which
+## potentially indicates a master password compromise. Set to 0 to disable this check.
+## This setting applies globally to all users.
+# INCOMPLETE_2FA_TIME_LIMIT=3
+
 ## Controls the PBBKDF password iterations to apply on the server
 ## The change only applies when the password is changed
 # PASSWORD_ITERATIONS=100000
 
-## Whether password hint should be sent into the error response when the client request it
-# SHOW_PASSWORD_HINT=true
+## Controls whether a password hint should be shown directly in the web page if
+## SMTP service is not configured. Not recommended for publicly-accessible instances
+## as this provides unauthenticated access to potentially sensitive data.
+# SHOW_PASSWORD_HINT=false
 
 ## Domain settings
 ## The domain must match the address from where you access the server
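PASSWORD_ITERATIONS sets how many PBKDF2 rounds the server applies when it re-hashes the (already client-side-hashed) master password, and it only takes effect the next time a password is changed. A hedged sketch of such a server-side derivation, assuming the `ring` crate; the salt and input bytes are placeholders, not vaultwarden's exact code path:

```rust
use std::num::NonZeroU32;

use ring::pbkdf2;

/// Derive a 256-bit verifier from the client-supplied hash (illustrative only).
fn server_side_hash(client_hash: &[u8], salt: &[u8], iterations: u32) -> [u8; 32] {
    let mut out = [0u8; 32];
    pbkdf2::derive(
        pbkdf2::PBKDF2_HMAC_SHA256,
        NonZeroU32::new(iterations).expect("PASSWORD_ITERATIONS must be non-zero"),
        salt,
        client_hash,
        &mut out,
    );
    out
}

fn main() {
    // 100_000 matches the PASSWORD_ITERATIONS default above.
    let verifier = server_side_hash(b"client-side hash", b"per-user salt", 100_000);
    println!("{:02x?}", &verifier[..8]);
}
```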
.github/workflows/build.yml (vendored, 90 lines changed)

@@ -2,36 +2,23 @@ name: Build
 
 on:
   push:
-    paths-ignore:
-      - "*.md"
-      - "*.txt"
-      - ".dockerignore"
-      - ".env.template"
-      - ".gitattributes"
-      - ".gitignore"
-      - "azure-pipelines.yml"
-      - "docker/**"
-      - "hooks/**"
-      - "tools/**"
-      - ".github/FUNDING.yml"
-      - ".github/ISSUE_TEMPLATE/**"
-      - ".github/security-contact.gif"
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
   pull_request:
-    # Ignore when there are only changes done too one of these paths
-    paths-ignore:
-      - "*.md"
-      - "*.txt"
-      - ".dockerignore"
-      - ".env.template"
-      - ".gitattributes"
-      - ".gitignore"
-      - "azure-pipelines.yml"
-      - "docker/**"
-      - "hooks/**"
-      - "tools/**"
-      - ".github/FUNDING.yml"
-      - ".github/ISSUE_TEMPLATE/**"
-      - ".github/security-contact.gif"
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
 
 jobs:
   build:
@@ -44,30 +31,22 @@ jobs:
       matrix:
         channel:
           - nightly
-          # - stable
         target-triple:
           - x86_64-unknown-linux-gnu
-          # - x86_64-unknown-linux-musl
         include:
           - target-triple: x86_64-unknown-linux-gnu
             host-triple: x86_64-unknown-linux-gnu
             features: [sqlite,mysql,postgresql] # Remember to update the `cargo test` to match the amount of features
             channel: nightly
-            os: ubuntu-18.04
+            os: ubuntu-20.04
             ext: ""
-          # - target-triple: x86_64-unknown-linux-gnu
-          #   host-triple: x86_64-unknown-linux-gnu
-          #   features: "sqlite,mysql,postgresql"
-          #   channel: stable
-          #   os: ubuntu-18.04
-          #   ext: ""
 
     name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
    runs-on: ${{ matrix.os }}
    steps:
      # Checkout the repo
      - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
      # End Checkout the repo
@@ -86,13 +65,13 @@ jobs:
 
 
      # Enable Rust Caching
-      - uses: Swatinem/rust-cache@v1
+      - uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
      # End Enable Rust Caching
 
 
      # Uses the rust-toolchain file to determine version
      - name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
-        uses: actions-rs/toolchain@v1
+        uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
        with:
          profile: minimal
          target: ${{ matrix.target-triple }}
@@ -103,28 +82,28 @@ jobs:
      # Run cargo tests (In release mode to speed up future builds)
      # First test all features together, afterwards test them separately.
      - name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: test
          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
      # Test single features
      # 0: sqlite
      - name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: test
          args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
        if: ${{ matrix.features[0] != '' }}
      # 1: mysql
      - name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: test
          args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
        if: ${{ matrix.features[1] != '' }}
      # 2: postgresql
      - name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: test
          args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
@@ -134,7 +113,7 @@ jobs:
 
      # Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
      - name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: clippy
          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
@@ -143,7 +122,7 @@ jobs:
 
      # Run cargo fmt
      - name: '`cargo fmt`'
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: fmt
          args: --all -- --check
@@ -152,7 +131,7 @@ jobs:
 
      # Build the binary
      - name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
        with:
          command: build
          args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
@@ -161,21 +140,8 @@ jobs:
 
      # Upload artifact to Github Actions
      - name: Upload artifact
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@27121b0bdffd731efa15d66772be8dc71245d074 # v2.2.4
        with:
          name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
          path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
      # End Upload artifact to Github Actions
-
-
-      ## This is not used at the moment
-      ## We could start using this when we can build static binaries
-      # Upload to github actions release
-      # - name: Release
-      #   uses: Shopify/upload-to-release@1
-      #   if: startsWith(github.ref, 'refs/tags/')
-      #   with:
-      #     name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
-      #     path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
-      #     repo-token: ${{ secrets.GITHUB_TOKEN }}
-      # End Upload to github actions release
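The matrix tests all database features together and then each one separately, because Rust code behind one Cargo feature gate is simply not compiled when only another feature is enabled; a regression in feature-specific code can only surface in a build that enables that feature. A hypothetical illustration of that shape (not vaultwarden's actual source):

```rust
// Each backend module only exists when its feature is enabled, so a run of
// `cargo test --features sqlite` never even type-checks the mysql module.
#[cfg(feature = "sqlite")]
mod sqlite_backend {
    pub fn scheme() -> &'static str {
        "sqlite"
    }
}

#[cfg(feature = "mysql")]
mod mysql_backend {
    pub fn scheme() -> &'static str {
        "mysql"
    }
}

#[cfg(feature = "postgresql")]
mod postgresql_backend {
    pub fn scheme() -> &'static str {
        "postgresql"
    }
}
```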
.github/workflows/hadolint.yml (vendored, 7 lines changed)

@@ -2,11 +2,10 @@ name: Hadolint
 
 on:
   push:
-    # Ignore when there are only changes done too one of these paths
     paths:
       - "docker/**"
+
   pull_request:
-    # Ignore when there are only changes done too one of these paths
     paths:
       - "docker/**"
 
@@ -17,7 +16,7 @@ jobs:
    steps:
      # Checkout the repo
      - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
      # End Checkout the repo
 
 
@@ -28,7 +27,7 @@ jobs:
          sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
          sudo chmod +x /usr/local/bin/hadolint
        env:
-          HADOLINT_VERSION: 2.5.0
+          HADOLINT_VERSION: 2.7.0
      # End Download hadolint
 
      # Test Dockerfiles
.github/workflows/release.yml (vendored, new file, 119 lines)

@@ -0,0 +1,119 @@
name: Release

on:
  push:
    paths:
      - ".github/workflows/release.yml"
      - "src/**"
      - "migrations/**"
      - "hooks/**"
      - "docker/**"
      - "Cargo.*"
      - "build.rs"
      - "diesel.toml"
      - "rust-toolchain"

    branches: # Only on paths above
      - main

    tags: # Always, regardless of paths above
      - '*'

jobs:
  # https://github.com/marketplace/actions/skip-duplicate-actions
  # Some checks to determine if we need to continue with building a new docker.
  # We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
  skip_check:
    runs-on: ubuntu-latest
    if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
    outputs:
      should_skip: ${{ steps.skip_check.outputs.should_skip }}
    steps:
      - name: Skip Duplicates Actions
        id: skip_check
        uses: fkirc/skip-duplicate-actions@f75dd6564bb646f95277dc8c3b80612e46a4a1ea # v3.4.1
        with:
          cancel_others: 'true'
        # Only run this when not creating a tag
        if: ${{ startsWith(github.ref, 'refs/heads/') }}

  docker-build:
    runs-on: ubuntu-latest
    needs: skip_check
    # Start a local docker registry to be used to generate multi-arch images.
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    env:
      DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speedup building!
      # DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
      DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
      SOURCE_COMMIT: ${{ github.sha }}
      SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
    if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
    strategy:
      matrix:
        base_image: ["debian","alpine"]

    steps:
      # Checkout the repo
      - name: Checkout
        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
        with:
          fetch-depth: 0

      # Login to Docker Hub
      - name: Login to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # v1.10.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Determine Docker Tag
      - name: Init Variables
        id: vars
        shell: bash
        run: |
          # Check which main tag we are going to build determined by github.ref
          if [[ "${{ github.ref }}" == refs/tags/* ]]; then
            echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
            echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
          elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
            echo "set-output name=DOCKER_TAG::testing"
            echo "::set-output name=DOCKER_TAG::testing"
          fi
      # End Determine Docker Tag

      - name: Build Debian based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
        run: |
          ./hooks/build
        if: ${{ matrix.base_image == 'debian' }}

      - name: Push Debian based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
        run: |
          ./hooks/push
        if: ${{ matrix.base_image == 'debian' }}

      - name: Build Alpine based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
        run: |
          ./hooks/build
        if: ${{ matrix.base_image == 'alpine' }}

      - name: Push Alpine based images
        shell: bash
        env:
          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
        run: |
          ./hooks/push
        if: ${{ matrix.base_image == 'alpine' }}
.pre-commit-config.yaml (new file, 38 lines)

@@ -0,0 +1,38 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1
    hooks:
      - id: check-yaml
      - id: check-json
      - id: check-toml
      - id: end-of-file-fixer
        exclude: "(.*js$|.*css$)"
      - id: check-case-conflict
      - id: check-merge-conflict
      - id: detect-private-key
  - repo: local
    hooks:
      - id: fmt
        name: fmt
        description: Format files with cargo fmt.
        entry: cargo fmt
        language: system
        types: [rust]
        args: ["--", "--check"]
      - id: cargo-test
        name: cargo test
        description: Test the package for errors.
        entry: cargo test
        language: system
        args: ["--features", "sqlite,mysql,postgresql", "--"]
        types: [rust]
        pass_filenames: false
      - id: cargo-clippy
        name: cargo clippy
        description: Lint Rust sources
        entry: cargo clippy
        language: system
        args: ["--features", "sqlite,mysql,postgresql", "--", "-D", "warnings"]
        types: [rust]
        pass_filenames: false
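The cargo-clippy hook passes `-D warnings`, which turns every lint into a hard error, so a commit is rejected unless the tree is lint-clean. A tiny illustration of code that compiles fine under rustc but fails such a hook:

```rust
fn main() {
    let values = vec![10, 20, 30];

    // Compiles, but clippy's `needless_range_loop` lint fires here, and with
    // `-D warnings` the pre-commit hook treats the lint as an error.
    for i in 0..values.len() {
        println!("{}", values[i]);
    }

    // The idiomatic form clippy suggests instead:
    for value in &values {
        println!("{}", value);
    }
}
```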
Cargo.lock (generated, 1059 lines changed): file diff suppressed because it is too large.
Cargo.toml (50 lines changed)

@@ -2,7 +2,9 @@
 name = "vaultwarden"
 version = "1.0.0"
 authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
-edition = "2018"
+edition = "2021"
+rust-version = "1.57"
+resolver = "2"
 
 repository = "https://github.com/dani-garcia/vaultwarden"
 readme = "README.md"
@@ -32,36 +34,36 @@ rocket = { version = "=0.5.0-dev", features = ["tls"], default-features = false
 rocket_contrib = "=0.5.0-dev"
 
 # HTTP client
-reqwest = { version = "0.11.4", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies"] }
+reqwest = { version = "0.11.7", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
 
 # Used for custom short lived cookie jar
-cookie = "0.15.0"
-cookie_store = "0.15.0"
-bytes = "1.0.1"
+cookie = "0.15.1"
+cookie_store = "0.15.1"
+bytes = "1.1.0"
 url = "2.2.2"
 
 # multipart/form-data support
 multipart = { version = "0.18.0", features = ["server"], default-features = false }
 
 # WebSockets library
-ws = { version = "0.10.0", package = "parity-ws" }
+ws = { version = "0.11.1", package = "parity-ws" }
 
 # MessagePack library
-rmpv = "0.4.7"
+rmpv = "1.0.0"
 
 # Concurrent hashmap implementation
 chashmap = "2.2.2"
 
 # A generic serialization/deserialization framework
-serde = { version = "1.0.126", features = ["derive"] }
-serde_json = "1.0.64"
+serde = { version = "1.0.130", features = ["derive"] }
+serde_json = "1.0.72"
 
 # Logging
 log = "0.4.14"
 fern = { version = "0.6.0", features = ["syslog-4"] }
 
 # A safe, extensible ORM and Query builder
-diesel = { version = "1.4.7", features = [ "chrono", "r2d2"] }
+diesel = { version = "1.4.8", features = [ "chrono", "r2d2"] }
 diesel_migrations = "1.4.0"
 
 # Bundled SQLite
@@ -76,14 +78,14 @@ uuid = { version = "0.8.2", features = ["v4"] }
 
 # Date and time libraries
 chrono = { version = "0.4.19", features = ["serde"] }
-chrono-tz = "0.5.3"
+chrono-tz = "0.6.0"
 time = "0.2.27"
 
 # Job scheduler
 job_scheduler = "1.2.1"
 
 # TOTP library
-oath = "0.10.2"
+totp-lite = "1.0.3"
 
 # Data encoding library
 data-encoding = "2.3.2"
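Swapping `oath` for `totp-lite` changes the crate that computes time-based one-time passwords for two-step login. A minimal sketch of producing a standard six-digit, 30-second TOTP code with `totp-lite`; the secret here is a placeholder (real secrets are random bytes, usually exchanged base32-encoded):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

use totp_lite::{totp_custom, Sha1, DEFAULT_STEP};

fn main() {
    let secret: &[u8] = b"placeholder-shared-secret";

    let seconds = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch")
        .as_secs();

    // Six digits every 30 seconds (DEFAULT_STEP), as authenticator apps expect.
    let code = totp_custom::<Sha1>(DEFAULT_STEP, 6, secret, seconds);
    println!("current TOTP code: {}", code);
}
```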
@@ -93,7 +95,7 @@ jsonwebtoken = "7.2.0"
 
 # U2F library
 u2f = "0.2.0"
-webauthn-rs = "0.3.0-alpha.7"
+webauthn-rs = "0.3.0"
 
 # Yubico Library
 yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
@@ -109,20 +111,20 @@ num-traits = "0.2.14"
 num-derive = "0.3.3"
 
 # Email libraries
-tracing = { version = "0.1.26", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
-lettre = { version = "0.10.0-rc.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
+tracing = { version = "0.1.29", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
+lettre = { version = "0.10.0-rc.4", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
 
 # Template library
-handlebars = { version = "4.0.1", features = ["dir_source"] }
+handlebars = { version = "4.1.5", features = ["dir_source"] }
 
 # For favicon extraction from main website
 html5ever = "0.25.1"
 markup5ever_rcdom = "0.1.0"
-regex = { version = "1.5.4", features = ["std", "perf"], default-features = false }
-data-url = "0.1.0"
+regex = { version = "1.5.4", features = ["std", "perf", "unicode-perl"], default-features = false }
+data-url = "0.1.1"
 
 # Used by U2F, JWT and Postgres
-openssl = "0.10.34"
+openssl = "0.10.38"
 
 # URL encoding library
 percent-encoding = "2.1.0"
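With `default-features = false`, the regex crate drops its Unicode tables; the added `unicode-perl` feature restores Unicode-aware Perl character classes such as `\d`, `\w`, and `\s`, which otherwise cause patterns using them to fail to compile. A small sketch:

```rust
use regex::Regex;

fn main() {
    // Without the `unicode-perl` feature this call returns an error at
    // pattern-compile time, because the Unicode-aware \d is unavailable.
    let six_digits = Regex::new(r"^\d{6}$").expect("pattern should compile");

    assert!(six_digits.is_match("123456"));
    assert!(!six_digits.is_match("12345a"));
    println!("ok");
}
```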
@@ -133,25 +135,19 @@ idna = "0.2.3"
 pico-args = "0.4.2"
 
 # Logging panics to logfile instead stderr only
-backtrace = "0.3.60"
+backtrace = "0.3.63"
 
 # Macro ident concatenation
-paste = "1.0.5"
+paste = "1.0.6"
 
 [patch.crates-io]
 # Use newest ring
 rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
 rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
 
-# For favicon extraction from main website
-data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = 'eb7330b5296c0d43816d1346211b74182bb4ae37' }
-
 # The maintainer of the `job_scheduler` crate doesn't seem to have responded
 # to any issues or PRs for almost a year (as of April 2021). This hopefully
 # temporary fork updates Cargo.toml to use more up-to-date dependencies.
 # In particular, `cron` has since implemented parsing of some common syntax
 # that wasn't previously supported (https://github.com/zslayton/cron/pull/64).
 job_scheduler = { git = 'https://github.com/jjlin/job_scheduler', rev = 'ee023418dbba2bfe1e30a5fd7d937f9e33739806' }
+
+# Add support for U2F appid extension compatibility
+webauthn-rs = { git = 'https://github.com/kanidm/webauthn-rs', rev = '02a99f534127b30c6f4df7f2d42bc24f76dc4211' }
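The pinned `job_scheduler` fork is what drives the cron-style jobs configured in the .env.template hunks above. Its documented usage pattern, sketched with a placeholder job body:

```rust
use std::time::Duration;

use job_scheduler::{Job, JobScheduler};

fn main() {
    let mut scheduler = JobScheduler::new();

    // "0 5 0 * * *": daily at 00:05:00, like the default TRASH_PURGE_SCHEDULE.
    scheduler.add(Job::new("0 5 0 * * *".parse().unwrap(), || {
        println!("purging trashed items (placeholder job body)");
    }));

    // The scheduler is polled from a loop, typically on a dedicated thread.
    loop {
        scheduler.tick();
        std::thread::sleep(Duration::from_millis(500));
    }
}
```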
@@ -1,4 +1,4 @@
-### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
+### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
 
 📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
@@ -1,10 +1,12 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
-{% set build_stage_base_image = "rust:1.53" %}
+{% set build_stage_base_image = "rust:1.55-buster" %}
 {% if "alpine" in target_file %}
 {% if "amd64" in target_file %}
-{% set build_stage_base_image = "clux/muslrust:nightly-2021-06-24" %}
+{% set build_stage_base_image = "clux/muslrust:nightly-2021-10-23" %}
 {% set runtime_stage_base_image = "alpine:3.14" %}
 {% set package_arch_target = "x86_64-unknown-linux-musl" %}
 {% elif "armv7" in target_file %}
@@ -40,12 +42,17 @@
 {% else %}
 {% set package_arch_target_param = "" %}
 {% endif %}
+{% if "buildx" in target_file %}
+{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
+{% else %}
+{% set mount_rust_cache = "" %}
+{% endif %}
 # Using multistage build:
 # 	https://docs.docker.com/develop/develop-images/multistage-build/
 # 	https://whitfin.io/speeding-up-rust-docker-builds/
 ####################### VAULT BUILD IMAGE #######################
-{% set vault_version = "2.20.4b" %}
-{% set vault_image_digest = "sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05" %}
+{% set vault_version = "2.25.0" %}
+{% set vault_image_digest = "sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527" %}
 # The web-vault digest specifies a particular web-vault build on Docker Hub.
 # Using the digest instead of the tag name provides better security,
 # as the digest of an image is immutable, whereas a tag name can later
@@ -75,7 +82,8 @@ ARG DB=sqlite,postgresql
 {% set features = "sqlite,postgresql" %}
 {% else %}
 # Alpine-based ARM (musl) only supports sqlite during compile time.
-ARG DB=sqlite
+# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
+ARG DB=sqlite,vendored_openssl
 {% set features = "sqlite" %}
 {% endif %}
 {% else %}
@@ -85,22 +93,40 @@ ARG DB=sqlite,mysql,postgresql
 {% endif %}
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
+{# {% if "alpine" not in target_file and "buildx" in target_file %}
+# Debian based Buildx builds can use some special apt caching to speedup building.
+# By default Debian based images have some rules to keep docker builds clean, we need to remove this.
+# See: https://hub.docker.com/r/docker/dockerfile
+RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
+{% endif %} #}
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 {% if "alpine" in target_file %}
-ENV USER "root"
 ENV RUSTFLAGS='-C link-arg=-s'
 {% if "armv7" in target_file %}
 {#- https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html -#}
 ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
 {% endif %}
 {% elif "arm" in target_file %}
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for {{ package_arch_name }} architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture {{ package_arch_name }} \
     && apt-get update \
     && apt-get install -y \
@@ -109,24 +135,43 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
         libc6-dev{{ package_arch_prefix }} \
         libpq5{{ package_arch_prefix }} \
         libpq-dev \
         libmariadb3:amd64 \
         libmariadb-dev{{ package_arch_prefix }} \
         libmariadb-dev-compat{{ package_arch_prefix }} \
         gcc-{{ package_cross_compiler }} \
-    && mkdir -p ~/.cargo \
-    && echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
-    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> "${CARGO_HOME}/config"
 
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
+# Set arm specific environment values
+ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
+ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
 {% endif -%}
 
 {% if "amd64" in target_file and "alpine" not in target_file %}
 {% elif "amd64" in target_file %}
 # Install DB packages
-RUN apt-get update && apt-get install -y \
-    --no-install-recommends \
-    libmariadb-dev{{ package_arch_prefix }} \
-    libpq-dev{{ package_arch_prefix }} \
+RUN apt-get update \
+    && apt-get install -y \
+        --no-install-recommends \
+        libmariadb-dev{{ package_arch_prefix }} \
+        libpq-dev{{ package_arch_prefix }} \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
 {% endif %}
@@ -139,37 +184,14 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
 
-{% if "alpine" not in target_file %}
-{% if "arm" in target_file %}
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
-
-ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
-ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
-{% endif -%}
-{% endif %}
 {% if package_arch_target is defined %}
-RUN rustup target add {{ package_arch_target }}
+RUN {{ mount_rust_cache -}} rustup target add {{ package_arch_target }}
 {% endif %}
 
 # Builds your dependencies and removes the
 # dummy project, except the target folder
 # This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }} \
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} \
     && find . -not -path "./target*" -delete
 
 # Copies the complete project
@@ -181,7 +203,7 @@ RUN touch src/main.rs
 
 # Builds again, this time it'll just be
 # your actual source files being built
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
 {% if "alpine" in target_file %}
 {% if "armv7" in target_file %}
 # hadolint ignore=DL3059
@@ -211,6 +233,7 @@ RUN mkdir /data
 {% if "alpine" in runtime_stage_base_image %}
     && apk add --no-cache \
         openssl \
         tzdata \
+        curl \
         dumb-init \
 {% if "mysql" in features %}
@@ -229,6 +252,7 @@ RUN mkdir /data
         dumb-init \
         libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 {% endif %}
@@ -7,3 +7,9 @@ all: $(OBJECTS)
 
 %/Dockerfile.alpine: Dockerfile.j2 render_template
 	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+
+%/Dockerfile.buildx: Dockerfile.j2 render_template
+	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+
+%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
+	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,33 +16,42 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 # Install DB packages
-RUN apt-get update && apt-get install -y \
-    --no-install-recommends \
-    libmariadb-dev \
-    libpq-dev \
+RUN apt-get update \
+    && apt-get install -y \
+        --no-install-recommends \
+        libmariadb-dev \
+        libpq-dev \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
 
 # Creates a dummy project used to grab dependencies
@@ -90,6 +101,7 @@ RUN mkdir /data
         dumb-init \
         libmariadb-dev-compat \
         libpq5 \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
 
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,29 +16,35 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM clux/muslrust:nightly-2021-06-24 as build
+FROM clux/muslrust:nightly-2021-10-23 as build
 
 # Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
 ARG DB=sqlite,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
-
-ENV USER "root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 ENV RUSTFLAGS='-C link-arg=-s'
 
 # Creates a dummy project used to grab dependencies
@@ -82,6 +90,7 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
 RUN mkdir /data \
     && apk add --no-cache \
         openssl \
         tzdata \
+        curl \
         dumb-init \
         postgresql-libs \
docker/amd64/Dockerfile.buildx (new file, 126 lines)

@@ -0,0 +1,126 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# 	https://docs.docker.com/develop/develop-images/multistage-build/
# 	https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"


# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# Install DB packages
RUN apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libmariadb-dev \
        libpq-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs


# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10


# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*


VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
docker/amd64/Dockerfile.buildx.alpine (new file, 118 lines)

@@ -0,0 +1,118 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
# 	https://docs.docker.com/develop/develop-images/multistage-build/
# 	https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM clux/muslrust:nightly-2021-10-23 as build

# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"


# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

ENV RUSTFLAGS='-C link-arg=-s'

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add x86_64-unknown-linux-musl

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.14

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs


# Create data folder and Install needed libraries
RUN mkdir /data \
    && apk add --no-cache \
        openssl \
        tzdata \
        curl \
        dumb-init \
        postgresql-libs \
        ca-certificates


VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for arm64 architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture arm64 \
     && apt-get update \
     && apt-get install -y \
@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
         libc6-dev:arm64 \
         libpq5:arm64 \
         libpq-dev \
         libmariadb3:amd64 \
         libmariadb-dev:arm64 \
         libmariadb-dev-compat:arm64 \
         gcc-aarch64-linux-gnu \
-    && mkdir -p ~/.cargo \
-    && echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
-    && echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
 
+# Set arm specific environment values
+ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
+ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
 
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
 
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
 
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
|
||||
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
|
||||
# What we can do is a force install, because nothing important is overlapping each other.
|
||||
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
|
||||
&& apt-get download libmariadb-dev-compat:amd64 \
|
||||
&& dpkg --force-all -i ./libmariadb-dev-compat*.deb \
|
||||
&& rm -rvf ./libmariadb-dev-compat*.deb \
|
||||
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
|
||||
# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
|
||||
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
|
||||
# Without this specific file the ld command will fail and compilation fails with it.
|
||||
&& ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so
|
||||
|
||||
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
|
||||
ENV CROSS_COMPILE="1"
|
||||
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
|
||||
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
|
||||
RUN rustup target add aarch64-unknown-linux-gnu
|
||||
|
||||
# Builds your dependencies and removes the
|
||||
@@ -128,6 +142,7 @@ RUN mkdir /data \
|
||||
dumb-init \
|
||||
libmariadb-dev-compat \
|
||||
libpq5 \
|
||||
&& apt-get clean \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# hadolint ignore=DL3059
|
||||
|
169
docker/arm64/Dockerfile.buildx
Normal file
169
docker/arm64/Dockerfile.buildx
Normal file
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture arm64 \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:arm64 \
        libc6-dev:arm64 \
        libpq5:arm64 \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:arm64 \
        libmariadb-dev-compat:arm64 \
        gcc-aarch64-linux-gnu \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
    #
    # Make sure cargo has the right target config
    && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"

# Set arm specific environment values
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-gnu

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
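A hedged usage note on the new `.buildx` variants: the `RUN --mount=type=cache` lines cache the cargo registry and git checkouts between builds, which only works when BuildKit is enabled. A minimal sketch (tag is illustrative):

```bash
# BuildKit must be on for the RUN --mount=type=cache instructions above to be honored.
DOCKER_BUILDKIT=1 docker build \
    -t vaultwarden:arm64-local \
    -f docker/arm64/Dockerfile.buildx .
```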
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build

 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql

 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"

-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal

+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armel architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-        /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture armel \
     && apt-get update \
     && apt-get install -y \

@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
     libc6-dev:armel \
+    libpq5:armel \
+    libpq-dev \
+    libmariadb3:amd64 \
+    libmariadb-dev:armel \
+    libmariadb-dev-compat:armel \
     gcc-arm-linux-gnueabi \
-    && mkdir -p ~/.cargo \
-    && echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"

+# Set arm specific environment values
+ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"

-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"

 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app

@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs

-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
-
-ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
 RUN rustup target add arm-unknown-linux-gnueabi

 # Builds your dependencies and removes the

@@ -128,6 +142,7 @@ RUN mkdir /data \
     dumb-init \
     libmariadb-dev-compat \
     libpq5 \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

+# hadolint ignore=DL3059

docker/armv6/Dockerfile.buildx (new file, 169 lines)
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armel architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armel \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:armel \
        libc6-dev:armel \
        libpq5:armel \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:armel \
        libmariadb-dev-compat:armel \
        gcc-arm-linux-gnueabi \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
    #
    # Make sure cargo has the right target config
    && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"

# Set arm specific environment values
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-gnueabi

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
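The build stages above route all cargo cross-compile settings through ${CARGO_HOME}/config. A quick sketch of verifying the generated target config inside the build container, assuming the defaults above:

```bash
# The echo lines in the build stage should have produced exactly this file.
cat /root/.cargo/config
# Expected output:
# [target.arm-unknown-linux-gnueabi]
# linker = "arm-linux-gnueabi-gcc"
# rustflags = ["-L/usr/lib/arm-linux-gnueabi"]
```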
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build

 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql

 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"

-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal

+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armhf architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-        /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture armhf \
     && apt-get update \
     && apt-get install -y \

@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
     libc6-dev:armhf \
+    libpq5:armhf \
+    libpq-dev \
+    libmariadb3:amd64 \
+    libmariadb-dev:armhf \
+    libmariadb-dev-compat:armhf \
     gcc-arm-linux-gnueabihf \
-    && mkdir -p ~/.cargo \
-    && echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"

+# Set arm specific environment values
+ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"

-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"

 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app

@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs

-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
-
-ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
 RUN rustup target add armv7-unknown-linux-gnueabihf

 # Builds your dependencies and removes the

@@ -128,6 +142,7 @@ RUN mkdir /data \
     dumb-init \
     libmariadb-dev-compat \
     libpq5 \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

+# hadolint ignore=DL3059

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

@@ -14,29 +16,36 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.20.4b
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.20.4b
-#     [vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05
-#     [vaultwarden/web-vault:v2.20.4b]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:894e266d4491494dd5a8a736855a6772aa146fa14206853b11b41cf3f3f64d05 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

 ########################## BUILD IMAGE ##########################
 FROM messense/rust-musl-cross:armv7-musleabihf as build

 # Alpine-based ARM (musl) only supports sqlite during compile time.
-ARG DB=sqlite
+# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
+ARG DB=sqlite,vendored_openssl

 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"

-# Don't download rust docs
-RUN rustup set profile minimal
-
-ENV USER "root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal

 ENV RUSTFLAGS='-C link-arg=-s'
 ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"

@@ -87,6 +96,7 @@ RUN [ "cross-build-start" ]
 RUN mkdir /data \
     && apk add --no-cache \
         openssl \
+        tzdata \
         curl \
         dumb-init \
         ca-certificates

docker/armv7/Dockerfile.buildx (new file, 169 lines)
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armhf architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armhf \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:armhf \
        libc6-dev:armhf \
        libpq5:armhf \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:armhf \
        libmariadb-dev-compat:armhf \
        gcc-arm-linux-gnueabihf \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
    #
    # Make sure cargo has the right target config
    && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"

# Set arm specific environment values
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-gnueabihf

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]

docker/armv7/Dockerfile.buildx.alpine (new file, 125 lines)
@@ -0,0 +1,125 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build

# Alpine-based ARM (musl) only supports sqlite during compile time.
# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
ARG DB=sqlite,vendored_openssl

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

ENV RUSTFLAGS='-C link-arg=-s'
ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-musleabihf

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
# hadolint ignore=DL3059
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-alpine:3.14

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apk add --no-cache \
        openssl \
        tzdata \
        curl \
        dumb-init \
        ca-certificates

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
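For the ARM images, a hedged sketch of a cross-build through Buildx with QEMU emulation; it assumes a Buildx builder already exists (hooks/push below creates one) and the tag is illustrative:

```bash
# Cross-build the armv7 Alpine variant; --load imports the result into the local daemon.
docker buildx build \
    --platform linux/arm/v7 \
    -t vaultwarden:armv7-alpine-local \
    -f docker/armv7/Dockerfile.buildx.alpine \
    --load .
```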
@@ -34,12 +34,17 @@ for label in "${LABELS[@]}"; do
     LABEL_ARGS+=(--label "${label}")
 done

+# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template
+if [[ -n "${DOCKER_BUILDKIT}" ]]; then
+    buildx_suffix=.buildx
+fi
+
 set -ex

 for arch in "${arches[@]}"; do
     docker build \
         "${LABEL_ARGS[@]}" \
         -t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-        -f docker/${arch}/Dockerfile${distro_suffix} \
+        -f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
         .
 done
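A standalone sketch of the template-selection logic added above; the variable values are illustrative:

```bash
arch=arm64
distro_suffix=.alpine
# Same test as in hooks/build: any non-empty DOCKER_BUILDKIT selects the .buildx template.
if [[ -n "${DOCKER_BUILDKIT}" ]]; then
    buildx_suffix=.buildx
fi
echo "docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix}"
# With DOCKER_BUILDKIT=1 this prints docker/arm64/Dockerfile.buildx.alpine;
# with it unset it prints docker/arm64/Dockerfile.alpine.
```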
hooks/push (17 lines changed)

@@ -10,7 +10,7 @@ join() { local IFS="$1"; shift; echo "$*"; }

 set -ex

-echo ">>> Starting local Docker registry..."
+echo ">>> Starting local Docker registry when needed..."

 # Docker Buildx's `docker-container` driver is needed for multi-platform
 # builds, but it can't access existing images on the Docker host (like the

@@ -25,7 +25,13 @@ echo ">>> Starting local Docker registry..."
 # Use host networking so the buildx container can access the registry via
 # localhost.
 #
-docker run -d --name registry --network host registry:2 # defaults to port 5000
+# First check if there already is a registry container running, else skip it.
+# This will only happen either locally or running it via Github Actions
+#
+if ! timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5000'; then
+    # defaults to port 5000
+    docker run -d --name registry --network host registry:2
+fi

 # Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
 # Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.

@@ -49,7 +55,12 @@ echo ">>> Setting up Docker Buildx..."
 #
 # Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
 #
-docker buildx create --name builder --use --driver-opt network=host
+# Check if there already is a builder running, else skip this and use the existing.
+# This will only happen either locally or running it via Github Actions
+#
+if ! docker buildx inspect builder > /dev/null 2>&1 ; then
+    docker buildx create --name builder --use --driver-opt network=host
+fi

 echo ">>> Running Docker Buildx..."
@@ -0,0 +1,5 @@
ALTER TABLE organizations
    ADD COLUMN private_key TEXT;

ALTER TABLE organizations
    ADD COLUMN public_key TEXT;

@@ -0,0 +1 @@
DROP TABLE emergency_access;

@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
    uuid                   CHAR(36) NOT NULL PRIMARY KEY,
    grantor_uuid           CHAR(36) REFERENCES users (uuid),
    grantee_uuid           CHAR(36) REFERENCES users (uuid),
    email                  VARCHAR(255),
    key_encrypted          TEXT,
    atype                  INTEGER NOT NULL,
    status                 INTEGER NOT NULL,
    wait_time_days         INTEGER NOT NULL,
    recovery_initiated_at  DATETIME,
    last_notification_at   DATETIME,
    updated_at             DATETIME NOT NULL,
    created_at             DATETIME NOT NULL
);

@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;

@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
    user_uuid   CHAR(36) NOT NULL REFERENCES users(uuid),
    device_uuid CHAR(36) NOT NULL,
    device_name TEXT NOT NULL,
    login_time  DATETIME NOT NULL,
    ip_address  TEXT NOT NULL,

    PRIMARY KEY (user_uuid, device_uuid)
);
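The same five migrations appear three times in this compare view, once per database backend: the set above targets MySQL (CHAR keys, DATETIME columns), the next targets PostgreSQL (TIMESTAMP columns), and the last targets SQLite (TEXT keys). Vaultwarden applies its embedded migrations automatically at startup; purely as a development sketch, they could also be run by hand with diesel_cli, where the connection URL is a placeholder:

```bash
# Hypothetical manual run against a local MySQL instance; not needed in normal operation.
diesel migration run --database-url 'mysql://vaultwarden:secret@localhost/vaultwarden'
```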
|
@@ -0,0 +1,5 @@
ALTER TABLE organizations
    ADD COLUMN private_key TEXT;

ALTER TABLE organizations
    ADD COLUMN public_key TEXT;
@@ -0,0 +1 @@
DROP TABLE emergency_access;
@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
    uuid                  CHAR(36) NOT NULL PRIMARY KEY,
    grantor_uuid          CHAR(36) REFERENCES users (uuid),
    grantee_uuid          CHAR(36) REFERENCES users (uuid),
    email                 VARCHAR(255),
    key_encrypted         TEXT,
    atype                 INTEGER NOT NULL,
    status                INTEGER NOT NULL,
    wait_time_days        INTEGER NOT NULL,
    recovery_initiated_at TIMESTAMP,
    last_notification_at  TIMESTAMP,
    updated_at            TIMESTAMP NOT NULL,
    created_at            TIMESTAMP NOT NULL
);
@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;
@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
    user_uuid   VARCHAR(40) NOT NULL REFERENCES users(uuid),
    device_uuid VARCHAR(40) NOT NULL,
    device_name TEXT NOT NULL,
    login_time  TIMESTAMP NOT NULL,
    ip_address  TEXT NOT NULL,

    PRIMARY KEY (user_uuid, device_uuid)
);
@@ -0,0 +1,5 @@
ALTER TABLE organizations
    ADD COLUMN private_key TEXT;

ALTER TABLE organizations
    ADD COLUMN public_key TEXT;
@@ -0,0 +1 @@
DROP TABLE emergency_access;
@@ -0,0 +1,14 @@
CREATE TABLE emergency_access (
    uuid                  TEXT NOT NULL PRIMARY KEY,
    grantor_uuid          TEXT REFERENCES users (uuid),
    grantee_uuid          TEXT REFERENCES users (uuid),
    email                 TEXT,
    key_encrypted         TEXT,
    atype                 INTEGER NOT NULL,
    status                INTEGER NOT NULL,
    wait_time_days        INTEGER NOT NULL,
    recovery_initiated_at DATETIME,
    last_notification_at  DATETIME,
    updated_at            DATETIME NOT NULL,
    created_at            DATETIME NOT NULL
);
@@ -0,0 +1 @@
DROP TABLE twofactor_incomplete;
@@ -0,0 +1,9 @@
CREATE TABLE twofactor_incomplete (
    user_uuid   TEXT NOT NULL REFERENCES users(uuid),
    device_uuid TEXT NOT NULL,
    device_name TEXT NOT NULL,
    login_time  DATETIME NOT NULL,
    ip_address  TEXT NOT NULL,

    PRIMARY KEY (user_uuid, device_uuid)
);
99   resources/vaultwarden-icon-white.svg   (new file)
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 8.7 KiB
86   resources/vaultwarden-icon.svg   (new file)
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 8.4 KiB
271  resources/vaultwarden-logo-white.svg   (new file)
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 17 KiB
151  resources/vaultwarden-logo.svg   (new file)
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 12 KiB
@@ -1 +1 @@
nightly-2021-06-24
nightly-2021-11-05
@@ -1,7 +1,7 @@
use once_cell::sync::Lazy;
use serde::de::DeserializeOwned;
use serde_json::Value;
use std::{env, time::Duration};
use std::env;

use rocket::{
    http::{Cookie, Cookies, SameSite, Status},
@@ -18,7 +18,9 @@ use crate::{
    db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
    error::{Error, MapResult},
    mail,
    util::{format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker},
    util::{
        docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
    },
    CONFIG,
};

@@ -234,7 +236,7 @@ impl AdminTemplateData {
}

#[get("/", rank = 1)]
fn admin_page(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
    let text = AdminTemplateData::new().render()?;
    Ok(Html(text))
}
@@ -269,7 +271,7 @@ fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> Json
    if CONFIG.mail_enabled() {
        mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)?;
    } else {
        let invitation = Invitation::new(data.email);
        let invitation = Invitation::new(user.email.clone());
        invitation.save(&conn)?;
    }

@@ -460,13 +462,13 @@ struct GitCommit {
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
    let github_api = get_reqwest_client();

    Ok(github_api.get(url).timeout(Duration::from_secs(10)).send()?.error_for_status()?.json::<T>()?)
    Ok(github_api.get(url).send()?.error_for_status()?.json::<T>()?)
}

fn has_http_access() -> bool {
    let http_access = get_reqwest_client();

    match http_access.head("https://github.com/dani-garcia/vaultwarden").timeout(Duration::from_secs(10)).send() {
    match http_access.head("https://github.com/dani-garcia/vaultwarden").send() {
        Ok(r) => r.status().is_success(),
        _ => false,
    }
@@ -549,6 +551,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
        "web_vault_version": web_vault_version.version,
        "latest_web_build": latest_web_build,
        "running_within_docker": running_within_docker,
        "docker_base_image": docker_base_image(),
        "has_http_access": has_http_access,
        "ip_header_exists": &ip_header.0.is_some(),
        "ip_header_match": ip_header_name == CONFIG.ip_header(),
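Removing the per-request .timeout(...) calls only makes sense if the shared client applies a timeout globally. A plausible reading of the change (an assumption; the get_reqwest_client() change itself is not shown in this diff):

    use std::time::Duration;

    // Sketch: the default timeout moves onto the shared client, so every
    // request through it is bounded without per-call .timeout() noise.
    fn get_reqwest_client() -> reqwest::blocking::Client {
        reqwest::blocking::Client::builder()
            .timeout(Duration::from_secs(10)) // applies to every request
            .build()
            .expect("Failed to build reqwest client")
    }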
@@ -49,6 +49,7 @@ struct RegisterData {
    MasterPasswordHint: Option<String>,
    Name: Option<String>,
    Token: Option<String>,
    #[allow(dead_code)]
    OrganizationUserId: Option<String>,
}

@@ -62,11 +63,12 @@ struct KeysData {
#[post("/accounts/register", data = "<data>")]
fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
    let data: RegisterData = data.into_inner().data;
    let email = data.Email.to_lowercase();

    let mut user = match User::find_by_mail(&data.Email, &conn) {
    let mut user = match User::find_by_mail(&email, &conn) {
        Some(user) => {
            if !user.password_hash.is_empty() {
                if CONFIG.is_signup_allowed(&data.Email) {
                if CONFIG.is_signup_allowed(&email) {
                    err!("User already exists")
                } else {
                    err!("Registration not allowed or user already exists")
@@ -75,20 +77,24 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {

            if let Some(token) = data.Token {
                let claims = decode_invite(&token)?;
                if claims.email == data.Email {
                if claims.email == email {
                    user
                } else {
                    err!("Registration email does not match invite email")
                }
            } else if Invitation::take(&data.Email, &conn) {
            } else if Invitation::take(&email, &conn) {
                for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() {
                    user_org.status = UserOrgStatus::Accepted as i32;
                    user_org.save(&conn)?;
                }

                user
            } else if CONFIG.is_signup_allowed(&data.Email) {
                err!("Account with this email already exists")
            } else if CONFIG.is_signup_allowed(&email) {
                // check if it's invited by emergency contact
                match EmergencyAccess::find_invited_by_grantee_email(&data.Email, &conn) {
                    Some(_) => user,
                    _ => err!("Account with this email already exists"),
                }
            } else {
                err!("Registration not allowed or user already exists")
            }
@@ -97,8 +103,8 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
            // Order is important here; the invitation check must come first
            // because the vaultwarden admin can invite anyone, regardless
            // of other signup restrictions.
            if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) {
                User::new(data.Email.clone())
            if Invitation::take(&email, &conn) || CONFIG.is_signup_allowed(&email) {
                User::new(email.clone())
            } else {
                err!("Registration not allowed or user already exists")
            }
@@ -106,7 +112,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
    };

    // Make sure we don't leave a lingering invitation.
    Invitation::take(&data.Email, &conn);
    Invitation::take(&email, &conn);

    if let Some(client_kdf_iter) = data.KdfIterations {
        user.client_kdf_iter = client_kdf_iter;
@@ -231,7 +237,10 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
        err!("Invalid password")
    }

    user.set_password(&data.NewMasterPasswordHash, Some("post_rotatekey"));
    user.set_password(
        &data.NewMasterPasswordHash,
        Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
    );
    user.akey = data.Key;
    user.save(&conn)
}
@@ -320,7 +329,9 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
            err!("The cipher is not owned by the user")
        }

        update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::CipherUpdate)?
        // Prevent triggering cipher updates via WebSockets by setting UpdateType::None
        // The user sessions are invalidated because all the ciphers were re-encrypted and thus triggering an update could cause issues.
        update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &conn, &nt, UpdateType::None)?
    }

    // Update user data
@@ -329,7 +340,6 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
    user.akey = data.Key;
    user.private_key = Some(data.PrivateKey);
    user.reset_security_stamp();
    user.reset_stamp_exception();

    user.save(&conn)
}
@@ -444,7 +454,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
}

#[post("/accounts/verify-email")]
fn post_verify_email(headers: Headers, _conn: DbConn) -> EmptyResult {
fn post_verify_email(headers: Headers) -> EmptyResult {
    let user = headers.user;

    if !CONFIG.mail_enabled() {
@@ -576,24 +586,45 @@ struct PasswordHintData {

#[post("/accounts/password-hint", data = "<data>")]
fn password_hint(data: JsonUpcase<PasswordHintData>, conn: DbConn) -> EmptyResult {
    let data: PasswordHintData = data.into_inner().data;

    let hint = match User::find_by_mail(&data.Email, &conn) {
        Some(user) => user.password_hint,
        None => return Ok(()),
    };

    if CONFIG.mail_enabled() {
        mail::send_password_hint(&data.Email, hint)?;
    } else if CONFIG.show_password_hint() {
        if let Some(hint) = hint {
            err!(format!("Your password hint is: {}", &hint));
        } else {
            err!("Sorry, you have no password hint...");
        }
    if !CONFIG.mail_enabled() && !CONFIG.show_password_hint() {
        err!("This server is not configured to provide password hints.");
    }

    Ok(())
    const NO_HINT: &str = "Sorry, you have no password hint...";

    let data: PasswordHintData = data.into_inner().data;
    let email = &data.Email;

    match User::find_by_mail(email, &conn) {
        None => {
            // To prevent user enumeration, act as if the user exists.
            if CONFIG.mail_enabled() {
                // There is still a timing side channel here in that the code
                // paths that send mail take noticeably longer than ones that
                // don't. Add a randomized sleep to mitigate this somewhat.
                use rand::{thread_rng, Rng};
                let mut rng = thread_rng();
                let base = 1000;
                let delta: i32 = 100;
                let sleep_ms = (base + rng.gen_range(-delta..=delta)) as u64;
                std::thread::sleep(std::time::Duration::from_millis(sleep_ms));
                Ok(())
            } else {
                err!(NO_HINT);
            }
        }
        Some(user) => {
            let hint: Option<String> = user.password_hint;
            if CONFIG.mail_enabled() {
                mail::send_password_hint(email, hint)?;
                Ok(())
            } else if let Some(hint) = hint {
                err!(format!("Your password hint is: {}", hint));
            } else {
                err!(NO_HINT);
            }
        }
    }
}
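One way to eyeball how well the randomized sleep masks the user lookup is to time the endpoint from outside. A rough, hypothetical probe (assumes a local server on port 8000 and reqwest built with the blocking and json features; not part of this change):

    use std::time::{Duration, Instant};

    // Compare elapsed times for a known account vs. a made-up address;
    // with the mitigation in place the two should be hard to tell apart.
    fn time_password_hint(email: &str) -> Duration {
        let client = reqwest::blocking::Client::new();
        let start = Instant::now();
        let _ = client
            .post("http://localhost:8000/api/accounts/password-hint")
            .json(&serde_json::json!({ "Email": email }))
            .send();
        start.elapsed()
    }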
#[derive(Deserialize)]
@@ -623,7 +654,7 @@ struct VerifyPasswordData {
}

#[post("/accounts/verify-password", data = "<data>")]
fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers, _conn: DbConn) -> EmptyResult {
fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers) -> EmptyResult {
    let data: VerifyPasswordData = data.into_inner().data;
    let user = headers.user;
@@ -105,7 +105,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> Json<Value> {
    let collections_json: Vec<Value> =
        collections.iter().map(|c| c.to_json_details(&headers.user.uuid, &conn)).collect();

    let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
    let policies = OrgPolicy::find_confirmed_by_user(&headers.user.uuid, &conn);
    let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();

    let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
@@ -248,7 +248,7 @@ fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn
    // This check is usually only needed in update_cipher_from_data(), but we
    // need it here as well to avoid creating an empty cipher in the call to
    // cipher.save() below.
    enforce_personal_ownership_policy(&data.Cipher, &headers, &conn)?;
    enforce_personal_ownership_policy(Some(&data.Cipher), &headers, &conn)?;

    let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
    cipher.user_uuid = Some(headers.user.uuid.clone());
@@ -289,8 +289,8 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
/// allowed to delete or share such ciphers to an org, however.
///
/// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
fn enforce_personal_ownership_policy(data: &CipherData, headers: &Headers, conn: &DbConn) -> EmptyResult {
    if data.OrganizationId.is_none() {
fn enforce_personal_ownership_policy(data: Option<&CipherData>, headers: &Headers, conn: &DbConn) -> EmptyResult {
    if data.is_none() || data.unwrap().OrganizationId.is_none() {
        let user_uuid = &headers.user.uuid;
        let policy_type = OrgPolicyType::PersonalOwnership;
        if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
@@ -309,7 +309,7 @@ pub fn update_cipher_from_data(
    nt: &Notify,
    ut: UpdateType,
) -> EmptyResult {
    enforce_personal_ownership_policy(&data, headers, conn)?;
    enforce_personal_ownership_policy(Some(&data), headers, conn)?;

    // Check that the client isn't updating an existing cipher with stale data.
    if let Some(dt) = data.LastKnownRevisionDate {
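Taking Option<&CipherData> lets post_ciphers_import (below) pass None and reuse the same policy check. The is_none() || unwrap() combination can also be written without the unwrap; an equivalent sketch with a stub type, semantics assumed identical:

    #[allow(non_snake_case)]
    struct CipherData {
        OrganizationId: Option<String>,
    }

    // Unwrap-free equivalent of `data.is_none() || data.unwrap().OrganizationId.is_none()`:
    // None (an import) counts as personal, otherwise check the org id.
    fn is_personal(data: Option<&CipherData>) -> bool {
        data.map_or(true, |d| d.OrganizationId.is_none())
    }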
@@ -458,6 +458,8 @@ struct RelationsData {

#[post("/ciphers/import", data = "<data>")]
fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
    enforce_personal_ownership_policy(None, &headers, &conn)?;

    let data: ImportData = data.into_inner().data;

    // Read and create the folders
@@ -687,12 +689,6 @@ fn put_cipher_share_selected(
        };
    }

    let attachments = Attachment::find_by_ciphers(cipher_ids, &conn);

    if !attachments.is_empty() {
        err!("Ciphers should not have any attachments.")
    }

    while let Some(cipher) = data.Ciphers.pop() {
        let mut shared_cipher_data = ShareCipherData {
            Cipher: cipher,
@@ -783,10 +779,7 @@ struct AttachmentRequestData {
    Key: String,
    FileName: String,
    FileSize: i32,
    // We check org owner/admin status via is_write_accessible_to_user(),
    // so we can just ignore this field.
    //
    // AdminRequest: bool,
    AdminRequest: Option<bool>, // true when attaching from an org vault view
}

enum FileUploadType {
@@ -821,14 +814,17 @@ fn post_attachment_v2(
    attachment.save(&conn).expect("Error saving attachment");

    let url = format!("/ciphers/{}/attachment/{}", cipher.uuid, attachment_id);
    let response_key = match data.AdminRequest {
        Some(b) if b => "CipherMiniResponse",
        _ => "CipherResponse",
    };

    Ok(Json(json!({ // AttachmentUploadDataResponseModel
        "Object": "attachment-fileUpload",
        "AttachmentId": attachment_id,
        "Url": url,
        "FileUploadType": FileUploadType::Direct as i32,
        "CipherResponse": cipher.to_json(&headers.host, &headers.user.uuid, &conn),
        "CipherMiniResponse": null,
        response_key: cipher.to_json(&headers.host, &headers.user.uuid, &conn),
    })))
}

@@ -871,7 +867,7 @@ fn save_attachment(
    Some(limit_kb) => {
        let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, conn) + size_adjust;
        if left <= 0 {
            err_discard!("Attachment size limit reached! Delete some files to open space", data)
            err_discard!("Attachment storage limit reached! Delete some attachments to free up space", data)
        }
        Some(left as u64)
    }
@@ -883,7 +879,7 @@ fn save_attachment(
    Some(limit_kb) => {
        let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, conn) + size_adjust;
        if left <= 0 {
            err_discard!("Attachment size limit reached! Delete some files to open space", data)
            err_discard!("Attachment storage limit reached! Delete some attachments to free up space", data)
        }
        Some(left as u64)
    }
@@ -937,7 +933,7 @@ fn save_attachment(
            return;
        }
        SaveResult::Partial(_, reason) => {
            error = Some(format!("Attachment size limit exceeded with this file: {:?}", reason));
            error = Some(format!("Attachment storage limit exceeded with this file: {:?}", reason));
            return;
        }
        SaveResult::Error(e) => {
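The quota check in save_attachment is the same arithmetic in both branches, just fed a per-user or per-org usage figure. Isolated as a sketch (names assumed; limits are configured in KiB, usage and the adjustment are in bytes):

    // Returns how many bytes may still be uploaded, or None when over quota
    // (the caller then rejects the upload with err_discard!).
    fn remaining_quota(limit_kb: i64, used_bytes: i64, size_adjust: i64) -> Option<u64> {
        let left = (limit_kb * 1024) - used_bytes + size_adjust;
        if left <= 0 {
            None
        } else {
            Some(left as u64)
        }
    }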
804  src/api/core/emergency_access.rs  (new file)
@@ -0,0 +1,804 @@
use chrono::{Duration, Utc};
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
use std::borrow::Borrow;

use crate::{
    api::{EmptyResult, JsonResult, JsonUpcase, NumberOrString},
    auth::{decode_emergency_access_invite, Headers},
    db::{models::*, DbConn, DbPool},
    mail, CONFIG,
};

pub fn routes() -> Vec<Route> {
    routes![
        get_contacts,
        get_grantees,
        get_emergency_access,
        put_emergency_access,
        delete_emergency_access,
        post_delete_emergency_access,
        send_invite,
        resend_invite,
        accept_invite,
        confirm_emergency_access,
        initiate_emergency_access,
        approve_emergency_access,
        reject_emergency_access,
        takeover_emergency_access,
        password_emergency_access,
        view_emergency_access,
        policies_emergency_access,
    ]
}

// region get

#[get("/emergency-access/trusted")]
fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn);

    let emergency_access_list_json: Vec<Value> =
        emergency_access_list.iter().map(|e| e.to_json_grantee_details(&conn)).collect();

    Ok(Json(json!({
        "Data": emergency_access_list_json,
        "Object": "list",
        "ContinuationToken": null
    })))
}

#[get("/emergency-access/granted")]
fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn);

    let emergency_access_list_json: Vec<Value> =
        emergency_access_list.iter().map(|e| e.to_json_grantor_details(&conn)).collect();

    Ok(Json(json!({
        "Data": emergency_access_list_json,
        "Object": "list",
        "ContinuationToken": null
    })))
}

#[get("/emergency-access/<emer_id>")]
fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn))),
        None => err!("Emergency access not valid."),
    }
}

// endregion

// region put/post

#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessUpdateData {
    Type: NumberOrString,
    WaitTimeDays: i32,
    KeyEncrypted: Option<String>,
}

#[put("/emergency-access/<emer_id>", data = "<data>")]
fn put_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
    post_emergency_access(emer_id, data, conn)
}

#[post("/emergency-access/<emer_id>", data = "<data>")]
fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let data: EmergencyAccessUpdateData = data.into_inner().data;

    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emergency_access) => emergency_access,
        None => err!("Emergency access not valid."),
    };

    let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
        Some(new_type) => new_type as i32,
        None => err!("Invalid emergency access type."),
    };

    emergency_access.atype = new_type;
    emergency_access.wait_time_days = data.WaitTimeDays;
    emergency_access.key_encrypted = data.KeyEncrypted;

    emergency_access.save(&conn)?;
    Ok(Json(emergency_access.to_json()))
}

// endregion

// region delete

#[delete("/emergency-access/<emer_id>")]
fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
    check_emergency_access_allowed()?;

    let grantor_user = headers.user;

    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => {
            if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
                err!("Emergency access not valid.")
            }
            emer
        }
        None => err!("Emergency access not valid."),
    };
    emergency_access.delete(&conn)?;
    Ok(())
}

#[post("/emergency-access/<emer_id>/delete")]
fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
    delete_emergency_access(emer_id, headers, conn)
}

// endregion

// region invite

#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessInviteData {
    Email: String,
    Type: NumberOrString,
    WaitTimeDays: i32,
}

#[post("/emergency-access/invite", data = "<data>")]
fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
    check_emergency_access_allowed()?;

    let data: EmergencyAccessInviteData = data.into_inner().data;
    let email = data.Email.to_lowercase();
    let wait_time_days = data.WaitTimeDays;

    let emergency_access_status = EmergencyAccessStatus::Invited as i32;

    let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
        Some(new_type) => new_type as i32,
        None => err!("Invalid emergency access type."),
    };

    let grantor_user = headers.user;

    // avoid setting yourself as emergency contact
    if email == grantor_user.email {
        err!("You can not set yourself as an emergency contact.")
    }

    let grantee_user = match User::find_by_mail(&email, &conn) {
        None => {
            if !CONFIG.signups_allowed() {
                err!(format!("Grantee user does not exist: {}", email))
            }

            if !CONFIG.is_email_domain_allowed(&email) {
                err!("Email domain not eligible for invitations")
            }

            if !CONFIG.mail_enabled() {
                let invitation = Invitation::new(email.clone());
                invitation.save(&conn)?;
            }

            let mut user = User::new(email.clone());
            user.save(&conn)?;
            user
        }
        Some(user) => user,
    };

    if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
        &grantor_user.uuid,
        &grantee_user.uuid,
        &grantee_user.email,
        &conn,
    )
    .is_some()
    {
        err!(format!("Grantee user already invited: {}", email))
    }

    let mut new_emergency_access = EmergencyAccess::new(
        grantor_user.uuid.clone(),
        Some(grantee_user.email.clone()),
        emergency_access_status,
        new_type,
        wait_time_days,
    );
    new_emergency_access.save(&conn)?;

    if CONFIG.mail_enabled() {
        mail::send_emergency_access_invite(
            &grantee_user.email,
            &grantee_user.uuid,
            Some(new_emergency_access.uuid),
            Some(grantor_user.name.clone()),
            Some(grantor_user.email),
        )?;
    } else {
        // Automatically mark user as accepted if no email invites
        match User::find_by_mail(&email, &conn) {
            Some(user) => {
                match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()) {
                    Ok(v) => (v),
                    Err(e) => err!(e.to_string()),
                }
            }
            None => err!("Grantee user not found."),
        }
    }

    Ok(())
}

#[post("/emergency-access/<emer_id>/reinvite")]
fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
    check_emergency_access_allowed()?;

    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if emergency_access.grantor_uuid != headers.user.uuid {
        err!("Emergency access not valid.");
    }

    if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
        err!("The grantee user is already accepted or confirmed to the organization");
    }

    let email = match emergency_access.email.clone() {
        Some(email) => email,
        None => err!("Email not valid."),
    };

    let grantee_user = match User::find_by_mail(&email, &conn) {
        Some(user) => user,
        None => err!("Grantee user not found."),
    };

    let grantor_user = headers.user;

    if CONFIG.mail_enabled() {
        mail::send_emergency_access_invite(
            &email,
            &grantor_user.uuid,
            Some(emergency_access.uuid),
            Some(grantor_user.name.clone()),
            Some(grantor_user.email),
        )?;
    } else {
        if Invitation::find_by_mail(&email, &conn).is_none() {
            let invitation = Invitation::new(email);
            invitation.save(&conn)?;
        }

        // Automatically mark user as accepted if no email invites
        match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow()) {
            Ok(v) => (v),
            Err(e) => err!(e.to_string()),
        }
    }

    Ok(())
}

#[derive(Deserialize)]
#[allow(non_snake_case)]
struct AcceptData {
    Token: String,
}

#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
    check_emergency_access_allowed()?;

    let data: AcceptData = data.into_inner().data;
    let token = &data.Token;
    let claims = decode_emergency_access_invite(token)?;

    let grantee_user = match User::find_by_mail(&claims.email, &conn) {
        Some(user) => {
            Invitation::take(&claims.email, &conn);
            user
        }
        None => err!("Invited user not found"),
    };

    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    // get grantor user to send Accepted email
    let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
        && (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
        && (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
    {
        match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn) {
            Ok(v) => (v),
            Err(e) => err!(e.to_string()),
        }

        if CONFIG.mail_enabled() {
            mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email)?;
        }

        Ok(())
    } else {
        err!("Emergency access invitation error.")
    }
}

fn accept_invite_process(grantee_uuid: String, emer_id: String, email: Option<String>, conn: &DbConn) -> EmptyResult {
    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    let emer_email = emergency_access.email;
    if emer_email.is_none() || emer_email != email {
        err!("User email does not match invite.");
    }

    if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
        err!("Emergency contact already accepted.");
    }

    emergency_access.status = EmergencyAccessStatus::Accepted as i32;
    emergency_access.grantee_uuid = Some(grantee_uuid);
    emergency_access.email = None;
    emergency_access.save(conn)
}
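The integer status codes threaded through this file form a small state machine: Invited, then Accepted, then Confirmed, then RecoveryInitiated, then RecoveryApproved, with a rejection dropping back to Confirmed. The enum itself lives in the db models and is not shown in this diff; its assumed shape, with assumed discriminants:

    // Sketch only; the real definition (and its values) is in src/db/models.
    enum EmergencyAccessStatus {
        Invited = 0,
        Accepted = 1,
        Confirmed = 2,
        RecoveryInitiated = 3,
        RecoveryApproved = 4,
    }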
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct ConfirmData {
    Key: String,
}

#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
fn confirm_emergency_access(
    emer_id: String,
    data: JsonUpcase<ConfirmData>,
    headers: Headers,
    conn: DbConn,
) -> JsonResult {
    check_emergency_access_allowed()?;

    let confirming_user = headers.user;
    let data: ConfirmData = data.into_inner().data;
    let key = data.Key;

    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if emergency_access.status != EmergencyAccessStatus::Accepted as i32
        || emergency_access.grantor_uuid != confirming_user.uuid
    {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
        let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
            Some(user) => user,
            None => err!("Grantee user not found."),
        };

        emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
        emergency_access.key_encrypted = Some(key);
        emergency_access.email = None;

        emergency_access.save(&conn)?;

        if CONFIG.mail_enabled() {
            mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name)?;
        }
        Ok(Json(emergency_access.to_json()))
    } else {
        err!("Grantee user not found.")
    }
}

// endregion

// region access emergency access

#[post("/emergency-access/<emer_id>/initiate")]
fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let initiating_user = headers.user;
    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
        || emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
    {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    let now = Utc::now().naive_utc();
    emergency_access.status = EmergencyAccessStatus::RecoveryInitiated as i32;
    emergency_access.updated_at = now;
    emergency_access.recovery_initiated_at = Some(now);
    emergency_access.last_notification_at = Some(now);
    emergency_access.save(&conn)?;

    if CONFIG.mail_enabled() {
        mail::send_emergency_access_recovery_initiated(
            &grantor_user.email,
            &initiating_user.name,
            emergency_access.get_type_as_str(),
            &emergency_access.wait_time_days.clone().to_string(),
        )?;
    }
    Ok(Json(emergency_access.to_json()))
}

#[post("/emergency-access/<emer_id>/approve")]
fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let approving_user = headers.user;
    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
        || emergency_access.grantor_uuid != approving_user.uuid
    {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
        let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
            Some(user) => user,
            None => err!("Grantee user not found."),
        };

        emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
        emergency_access.save(&conn)?;

        if CONFIG.mail_enabled() {
            mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)?;
        }
        Ok(Json(emergency_access.to_json()))
    } else {
        err!("Grantee user not found.")
    }
}

#[post("/emergency-access/<emer_id>/reject")]
fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let rejecting_user = headers.user;
    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
        && emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
        || emergency_access.grantor_uuid != rejecting_user.uuid
    {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
        let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
            Some(user) => user,
            None => err!("Grantee user not found."),
        };

        emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
        emergency_access.save(&conn)?;

        if CONFIG.mail_enabled() {
            mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name)?;
        }
        Ok(Json(emergency_access.to_json()))
    } else {
        err!("Grantee user not found.")
    }
}

// endregion

// region action

#[post("/emergency-access/<emer_id>/view")]
fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let requesting_user = headers.user;
    let host = headers.host;
    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
        err!("Emergency access not valid.")
    }

    let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn);

    let ciphers_json: Vec<Value> =
        ciphers.iter().map(|c| c.to_json(&host, &emergency_access.grantor_uuid, &conn)).collect();

    Ok(Json(json!({
        "Ciphers": ciphers_json,
        "KeyEncrypted": &emergency_access.key_encrypted,
        "Object": "emergencyAccessView",
    })))
}

#[post("/emergency-access/<emer_id>/takeover")]
fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    check_emergency_access_allowed()?;

    let requesting_user = headers.user;
    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    Ok(Json(json!({
        "Kdf": grantor_user.client_kdf_type,
        "KdfIterations": grantor_user.client_kdf_iter,
        "KeyEncrypted": &emergency_access.key_encrypted,
        "Object": "emergencyAccessTakeover",
    })))
}

#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
    NewMasterPasswordHash: String,
    Key: String,
}

#[post("/emergency-access/<emer_id>/password", data = "<data>")]
fn password_emergency_access(
    emer_id: String,
    data: JsonUpcase<EmergencyAccessPasswordData>,
    headers: Headers,
    conn: DbConn,
) -> EmptyResult {
    check_emergency_access_allowed()?;

    let data: EmergencyAccessPasswordData = data.into_inner().data;
    let new_master_password_hash = &data.NewMasterPasswordHash;
    let key = data.Key;

    let requesting_user = headers.user;
    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
        err!("Emergency access not valid.")
    }

    let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    // change grantor_user password
    grantor_user.set_password(new_master_password_hash, None);
    grantor_user.akey = key;
    grantor_user.save(&conn)?;

    // Disable TwoFactor providers since they will otherwise block logins
    TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn)?;

    // Removing owner, check that there is at least one other owner
    let user_org_grantor = UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn);

    // Remove grantor from all organisations unless Owner
    for user_org in user_org_grantor {
        if user_org.atype != UserOrgType::Owner as i32 {
            user_org.delete(&conn)?;
        }
    }
    Ok(())
}

// endregion

#[get("/emergency-access/<emer_id>/policies")]
fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
    let requesting_user = headers.user;
    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
        err!("Emergency access not valid.")
    }

    let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
    let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();

    Ok(Json(json!({
        "Data": policies_json,
        "Object": "list",
        "ContinuationToken": null
    })))
}

fn is_valid_request(
    emergency_access: &EmergencyAccess,
    requesting_user_uuid: String,
    requested_access_type: EmergencyAccessType,
) -> bool {
    emergency_access.grantee_uuid == Some(requesting_user_uuid)
        && emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
        && emergency_access.atype == requested_access_type as i32
}

fn check_emergency_access_allowed() -> EmptyResult {
    if !CONFIG.emergency_access_allowed() {
        err!("Emergency access is not allowed.")
    }
    Ok(())
}
pub fn emergency_request_timeout_job(pool: DbPool) {
    debug!("Start emergency_request_timeout_job");
    if !CONFIG.emergency_access_allowed() {
        return;
    }

    if let Ok(conn) = pool.get() {
        let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);

        if emergency_access_list.is_empty() {
            debug!("No emergency request timeout to approve");
        }

        for mut emer in emergency_access_list {
            if emer.recovery_initiated_at.is_some()
                && Utc::now().naive_utc()
                    >= emer.recovery_initiated_at.unwrap() + Duration::days(emer.wait_time_days as i64)
            {
                emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
                emer.save(&conn).expect("Cannot save emergency access on job");

                if CONFIG.mail_enabled() {
                    // get grantor user to send Accepted email
                    let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");

                    // get grantee user to send Accepted email
                    let grantee_user =
                        User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
                            .expect("Grantee user not found.");

                    mail::send_emergency_access_recovery_timed_out(
                        &grantor_user.email,
                        &grantee_user.name.clone(),
                        emer.get_type_as_str(),
                    )
                    .expect("Error on sending email");

                    mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name.clone())
                        .expect("Error on sending email");
                }
            }
        }
    } else {
        error!("Failed to get DB connection while searching emergency request timed out")
    }
}
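The timeout job auto-approves a recovery once wait_time_days have elapsed since recovery_initiated_at. The date arithmetic in isolation, as a chrono sketch with the field types assumed from the migrations above:

    use chrono::{Duration, NaiveDateTime, Utc};

    // True once the grantor's waiting period has fully elapsed, at which
    // point the job flips the status to RecoveryApproved.
    fn recovery_due(recovery_initiated_at: NaiveDateTime, wait_time_days: i32) -> bool {
        Utc::now().naive_utc() >= recovery_initiated_at + Duration::days(wait_time_days as i64)
    }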
pub fn emergency_notification_reminder_job(pool: DbPool) {
    debug!("Start emergency_notification_reminder_job");
    if !CONFIG.emergency_access_allowed() {
        return;
    }

    if let Ok(conn) = pool.get() {
        let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);

        if emergency_access_list.is_empty() {
            debug!("No emergency request reminder notification to send");
        }

        for mut emer in emergency_access_list {
            if (emer.recovery_initiated_at.is_some()
                && Utc::now().naive_utc()
                    >= emer.recovery_initiated_at.unwrap() + Duration::days((emer.wait_time_days as i64) - 1))
                && (emer.last_notification_at.is_none()
                    || (emer.last_notification_at.is_some()
                        && Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
            {
                emer.save(&conn).expect("Cannot save emergency access on job");

                if CONFIG.mail_enabled() {
                    // get grantor user to send Accepted email
                    let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");

                    // get grantee user to send Accepted email
                    let grantee_user =
                        User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
                            .expect("Grantee user not found.");

                    mail::send_emergency_access_recovery_reminder(
                        &grantor_user.email,
                        &grantee_user.name.clone(),
                        emer.get_type_as_str(),
                        &emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
                    )
                    .expect("Error on sending email");
                }
            }
        }
    } else {
        error!("Failed to get DB connection while searching emergency notification reminder")
    }
}
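The TODO above notes that the reminder mail reports the full wait_time_days rather than the days actually remaining. A sketch of the missing computation (assumed semantics; not part of this change):

    use chrono::{Duration, NaiveDateTime, Utc};

    // Days left before the recovery request auto-approves, clamped at zero.
    fn days_left(recovery_initiated_at: NaiveDateTime, wait_time_days: i32) -> i64 {
        let deadline = recovery_initiated_at + Duration::days(wait_time_days as i64);
        (deadline - Utc::now().naive_utc()).num_days().max(0)
    }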
@@ -1,12 +1,15 @@
mod accounts;
mod ciphers;
mod emergency_access;
mod folders;
mod organizations;
mod sends;
pub mod two_factor;

pub use ciphers::purge_trashed_ciphers;
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;

pub fn routes() -> Vec<Route> {
    let mut mod_routes =
@@ -15,6 +18,7 @@ pub fn routes() -> Vec<Route> {
    let mut routes = Vec::new();
    routes.append(&mut accounts::routes());
    routes.append(&mut ciphers::routes());
    routes.append(&mut emergency_access::routes());
    routes.append(&mut folders::routes());
    routes.append(&mut organizations::routes());
    routes.append(&mut two_factor::routes());
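The two job functions re-exported here are presumably wired into the cron-style scheduler elsewhere in the codebase; that hookup is not part of this diff. A sketch of what it could look like with a job_scheduler-style API (schedule string and pool handling are illustrative):

    use job_scheduler::{Job, JobScheduler};
    use std::time::Duration;

    fn schedule_jobs() {
        let mut sched = JobScheduler::new();
        // e.g. run shortly after midnight; the real schedule comes from config
        sched.add(Job::new("0 5 0 * * *".parse().unwrap(), || {
            // emergency_request_timeout_job(pool.clone());
        }));
        loop {
            sched.tick();
            std::thread::sleep(Duration::from_millis(500));
        }
    }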
@@ -35,12 +35,15 @@ pub fn routes() -> Vec<Route> {
        get_org_users,
        send_invite,
        reinvite_user,
        bulk_reinvite_user,
        confirm_invite,
        bulk_confirm_invite,
        accept_invite,
        get_user,
        edit_user,
        put_organization_user,
        delete_user,
        bulk_delete_user,
        post_delete_user,
        post_org_import,
        list_policies,
@@ -51,6 +54,8 @@ pub fn routes() -> Vec<Route> {
        get_plans,
        get_plans_tax_rates,
        import,
        post_org_keys,
        bulk_public_keys,
    ]
}

@@ -61,6 +66,7 @@ struct OrgData {
    CollectionName: String,
    Key: String,
    Name: String,
    Keys: Option<OrgKeyData>,
    #[serde(rename = "PlanType")]
    _PlanType: NumberOrString, // Ignored, always use the same plan
}
@@ -78,15 +84,39 @@ struct NewCollectionData {
    Name: String,
}

#[derive(Deserialize)]
#[allow(non_snake_case)]
struct OrgKeyData {
    EncryptedPrivateKey: String,
    PublicKey: String,
}

#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgBulkIds {
    Ids: Vec<String>,
}

#[post("/organizations", data = "<data>")]
fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, conn: DbConn) -> JsonResult {
    if !CONFIG.is_org_creation_allowed(&headers.user.email) {
        err!("User not allowed to create organizations")
    }
    if OrgPolicy::is_applicable_to_user(&headers.user.uuid, OrgPolicyType::SingleOrg, &conn) {
        err!(
            "You may not create an organization. You belong to an organization which has a policy that prohibits you from being a member of any other organization."
        )
    }

    let data: OrgData = data.into_inner().data;
    let (private_key, public_key) = if data.Keys.is_some() {
        let keys: OrgKeyData = data.Keys.unwrap();
        (Some(keys.EncryptedPrivateKey), Some(keys.PublicKey))
    } else {
        (None, None)
    };

    let org = Organization::new(data.Name, data.BillingEmail);
    let org = Organization::new(data.Name, data.BillingEmail, private_key, public_key);
    let mut user_org = UserOrganization::new(headers.user.uuid, org.uuid.clone());
    let collection = Collection::new(org.uuid.clone(), data.CollectionName);
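The is_some()/unwrap() pair used for the optional Keys payload can be collapsed into a single map; an equivalent sketch (stub type, behavior assumed identical):

    #[allow(non_snake_case)]
    struct OrgKeyData {
        EncryptedPrivateKey: String,
        PublicKey: String,
    }

    // Same result as the is_some()/unwrap() branch, without the unwrap.
    fn split_keys(keys: Option<OrgKeyData>) -> (Option<String>, Option<String>) {
        keys.map(|k| (Some(k.EncryptedPrivateKey), Some(k.PublicKey))).unwrap_or((None, None))
    }

    fn main() {
        let (private_key, public_key) = split_keys(None);
        assert!(private_key.is_none() && public_key.is_none());
    }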
@@ -352,7 +382,7 @@ fn delete_organization_collection(
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
#[allow(non_snake_case)]
|
||||
#[allow(non_snake_case, dead_code)]
|
||||
struct DeleteCollectionData {
|
||||
Id: String,
|
||||
OrgId: String,
|
||||
@@ -468,6 +498,32 @@ fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) ->
|
||||
}))
|
||||
}
|
||||
|
||||
#[post("/organizations/<org_id>/keys", data = "<data>")]
|
||||
fn post_org_keys(org_id: String, data: JsonUpcase<OrgKeyData>, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
|
||||
let data: OrgKeyData = data.into_inner().data;
|
||||
|
||||
let mut org = match Organization::find_by_uuid(&org_id, &conn) {
|
||||
Some(organization) => {
|
||||
if organization.private_key.is_some() && organization.public_key.is_some() {
|
||||
err!("Organization Keys already exist")
|
||||
}
|
||||
organization
|
||||
}
|
||||
None => err!("Can't find organization details"),
|
||||
};
|
||||
|
||||
org.private_key = Some(data.EncryptedPrivateKey);
|
||||
org.public_key = Some(data.PublicKey);
|
||||
|
||||
org.save(&conn)?;
|
||||
|
||||
Ok(Json(json!({
|
||||
"Object": "organizationKeys",
|
||||
"PublicKey": org.public_key,
|
||||
"PrivateKey": org.private_key,
|
||||
})))
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
#[allow(non_snake_case)]
|
||||
struct CollectionData {
|
||||
@@ -499,18 +555,19 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
|
||||
}
|
||||
|
||||
for email in data.Emails.iter() {
|
||||
let email = email.to_lowercase();
|
||||
let mut user_org_status = if CONFIG.mail_enabled() {
|
||||
UserOrgStatus::Invited as i32
|
||||
} else {
|
||||
UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
|
||||
};
|
||||
let user = match User::find_by_mail(email, &conn) {
|
||||
let user = match User::find_by_mail(&email, &conn) {
|
||||
None => {
|
||||
if !CONFIG.invitations_allowed() {
|
||||
err!(format!("User does not exist: {}", email))
|
||||
}
|
||||
|
||||
if !CONFIG.is_email_domain_allowed(email) {
|
||||
if !CONFIG.is_email_domain_allowed(&email) {
|
||||
err!("Email domain not eligible for invitations")
|
||||
}
|
||||
|
||||
@@ -560,7 +617,7 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
|
||||
};
|
||||
|
||||
mail::send_invite(
|
||||
email,
|
||||
&email,
|
||||
&user.uuid,
|
||||
Some(org_id.clone()),
|
||||
Some(new_user.uuid),
|
||||
@@ -573,8 +630,44 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[post("/organizations/<org_id>/users/reinvite", data = "<data>")]
|
||||
fn bulk_reinvite_user(
|
||||
org_id: String,
|
||||
data: JsonUpcase<OrgBulkIds>,
|
||||
headers: AdminHeaders,
|
||||
conn: DbConn,
|
||||
) -> Json<Value> {
|
||||
let data: OrgBulkIds = data.into_inner().data;
|
||||
|
||||
let mut bulk_response = Vec::new();
|
||||
for org_user_id in data.Ids {
|
||||
let err_msg = match _reinvite_user(&org_id, &org_user_id, &headers.user.email, &conn) {
|
||||
Ok(_) => String::from(""),
|
||||
Err(e) => format!("{:?}", e),
|
||||
};
|
||||
|
||||
bulk_response.push(json!(
|
||||
{
|
||||
"Object": "OrganizationBulkConfirmResponseModel",
|
||||
"Id": org_user_id,
|
||||
"Error": err_msg
|
||||
}
|
||||
))
|
||||
}
|
||||
|
||||
Json(json!({
|
||||
"Data": bulk_response,
|
||||
"Object": "list",
|
||||
"ContinuationToken": null
|
||||
}))
|
||||
}
|
||||
|
||||
#[post("/organizations/<org_id>/users/<user_org>/reinvite")]
|
||||
fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
|
||||
_reinvite_user(&org_id, &user_org, &headers.user.email, &conn)
|
||||
}
|
||||
|
||||
fn _reinvite_user(org_id: &str, user_org: &str, invited_by_email: &str, conn: &DbConn) -> EmptyResult {
|
||||
if !CONFIG.invitations_allowed() {
|
||||
err!("Invitations are not allowed.")
|
||||
}
|
||||
@@ -583,7 +676,7 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
|
||||
err!("SMTP is not configured.")
|
||||
}
|
||||
|
||||
let user_org = match UserOrganization::find_by_uuid(&user_org, &conn) {
|
||||
let user_org = match UserOrganization::find_by_uuid(user_org, conn) {
|
||||
Some(user_org) => user_org,
|
||||
None => err!("The user hasn't been invited to the organization."),
|
||||
};
|
||||
@@ -592,12 +685,12 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
|
||||
err!("The user is already accepted or confirmed to the organization")
|
||||
}
|
||||
|
||||
let user = match User::find_by_uuid(&user_org.user_uuid, &conn) {
|
||||
let user = match User::find_by_uuid(&user_org.user_uuid, conn) {
|
||||
Some(user) => user,
|
||||
None => err!("User not found."),
|
||||
};
|
||||
|
||||
let org_name = match Organization::find_by_uuid(&org_id, &conn) {
|
||||
let org_name = match Organization::find_by_uuid(org_id, conn) {
|
||||
Some(org) => org.name,
|
||||
None => err!("Error looking up organization."),
|
||||
};
|
||||
@@ -606,14 +699,14 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
|
||||
mail::send_invite(
|
||||
&user.email,
|
||||
&user.uuid,
|
||||
Some(org_id),
|
||||
Some(org_id.to_string()),
|
||||
Some(user_org.uuid),
|
||||
&org_name,
|
||||
Some(headers.user.email),
|
||||
Some(invited_by_email.to_string()),
|
||||
)?;
|
||||
} else {
|
||||
let invitation = Invitation::new(user.email);
|
||||
invitation.save(&conn)?;
|
||||
invitation.save(conn)?;
|
||||
}
|
||||
|
||||
Ok(())
|
||||
@@ -646,6 +739,43 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
        err!("User already accepted the invitation")
    }

+    let user_twofactor_disabled = TwoFactor::find_by_user(&user_org.user_uuid, &conn).is_empty();

+    let policy = OrgPolicyType::TwoFactorAuthentication as i32;
+    let org_twofactor_policy_enabled =
+        match OrgPolicy::find_by_org_and_type(&user_org.org_uuid, policy, &conn) {
+            Some(p) => p.enabled,
+            None => false,
+        };

+    if org_twofactor_policy_enabled && user_twofactor_disabled {
+        err!("You cannot join this organization until you enable two-step login on your user account.")
+    }

+    // Enforce the Single Organization policy of the organization the user is trying to join
+    let single_org_policy_enabled =
+        match OrgPolicy::find_by_org_and_type(&user_org.org_uuid, OrgPolicyType::SingleOrg as i32, &conn) {
+            Some(p) => p.enabled,
+            None => false,
+        };
+    if single_org_policy_enabled && user_org.atype < UserOrgType::Admin {
+        let is_member_of_another_org = UserOrganization::find_any_state_by_user(&user_org.user_uuid, &conn)
+            .into_iter()
+            .filter(|uo| uo.org_uuid != user_org.org_uuid)
+            .count()
+            > 1;
+        if is_member_of_another_org {
+            err!("You may not join this organization until you leave or remove all other organizations.")
+        }
+    }

+    // Enforce the Single Organization policy of any other organizations the user is a member of
+    if OrgPolicy::is_applicable_to_user(&user_org.user_uuid, OrgPolicyType::SingleOrg, &conn) {
+        err!(
+            "You cannot join this organization because you are a member of an organization which forbids it"
+        )
+    }

    user_org.status = UserOrgStatus::Accepted as i32;
    user_org.save(&conn)?;
}
#[post("/organizations/<org_id>/users/confirm", data = "<data>")]
|
||||
fn bulk_confirm_invite(org_id: String, data: JsonUpcase<Value>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
|
||||
let data = data.into_inner().data;
|
||||
|
||||
let mut bulk_response = Vec::new();
|
||||
match data["Keys"].as_array() {
|
||||
Some(keys) => {
|
||||
for invite in keys {
|
||||
let org_user_id = invite["Id"].as_str().unwrap_or_default();
|
||||
let user_key = invite["Key"].as_str().unwrap_or_default();
|
||||
let err_msg = match _confirm_invite(&org_id, org_user_id, user_key, &headers, &conn) {
|
||||
Ok(_) => String::from(""),
|
||||
Err(e) => format!("{:?}", e),
|
||||
};
|
||||
|
||||
bulk_response.push(json!(
|
||||
{
|
||||
"Object": "OrganizationBulkConfirmResponseModel",
|
||||
"Id": org_user_id,
|
||||
"Error": err_msg
|
||||
}
|
||||
));
|
||||
}
|
||||
}
|
||||
None => error!("No keys to confirm"),
|
||||
}
|
||||
|
||||
Json(json!({
|
||||
"Data": bulk_response,
|
||||
"Object": "list",
|
||||
"ContinuationToken": null
|
||||
}))
|
||||
}
|
||||
|
||||
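For reference, the payload shapes the new bulk-confirm endpoint works with can be read straight off the code above. A minimal sketch with serde_json (the ids and keys are illustrative placeholders):

use serde_json::json;

fn main() {
    // Request body: one {Id, Key} pair per member being confirmed.
    let request = json!({
        "Keys": [
            { "Id": "<user-org-uuid>", "Key": "4.AbCd..." },
        ]
    });

    // Response: "Error" is empty on success, a debug-formatted error otherwise.
    let response = json!({
        "Data": [
            { "Object": "OrganizationBulkConfirmResponseModel", "Id": "<user-org-uuid>", "Error": "" },
        ],
        "Object": "list",
        "ContinuationToken": null
    });
    println!("{}\n{}", request, response);
}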
#[post("/organizations/<org_id>/users/<org_user_id>/confirm", data = "<data>")]
|
||||
fn confirm_invite(
|
||||
org_id: String,
|
||||
@@ -682,8 +846,16 @@ fn confirm_invite(
|
||||
conn: DbConn,
|
||||
) -> EmptyResult {
|
||||
let data = data.into_inner().data;
|
||||
let user_key = data["Key"].as_str().unwrap_or_default();
|
||||
_confirm_invite(&org_id, &org_user_id, user_key, &headers, &conn)
|
||||
}
|
||||
|
||||
let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
|
||||
fn _confirm_invite(org_id: &str, org_user_id: &str, key: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
|
||||
if key.is_empty() || org_user_id.is_empty() {
|
||||
err!("Key or UserId is not set, unable to process request");
|
||||
}
|
||||
|
||||
let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
|
||||
Some(user) => user,
|
||||
None => err!("The specified user isn't a member of the organization"),
|
||||
};
|
||||
@@ -697,24 +869,21 @@ fn confirm_invite(
|
||||
}
|
||||
|
||||
user_to_confirm.status = UserOrgStatus::Confirmed as i32;
|
||||
user_to_confirm.akey = match data["Key"].as_str() {
|
||||
Some(key) => key.to_string(),
|
||||
None => err!("Invalid key provided"),
|
||||
};
|
||||
user_to_confirm.akey = key.to_string();
|
||||
|
||||
if CONFIG.mail_enabled() {
|
||||
let org_name = match Organization::find_by_uuid(&org_id, &conn) {
|
||||
let org_name = match Organization::find_by_uuid(org_id, conn) {
|
||||
Some(org) => org.name,
|
||||
None => err!("Error looking up organization."),
|
||||
};
|
||||
let address = match User::find_by_uuid(&user_to_confirm.user_uuid, &conn) {
|
||||
let address = match User::find_by_uuid(&user_to_confirm.user_uuid, conn) {
|
||||
Some(user) => user.email,
|
||||
None => err!("Error looking up user."),
|
||||
};
|
||||
mail::send_invite_confirmed(&address, &org_name)?;
|
||||
}
|
||||
|
||||
user_to_confirm.save(&conn)
|
||||
user_to_confirm.save(conn)
|
||||
}
|
||||
|
||||
#[get("/organizations/<org_id>/users/<org_user_id>")]
|
||||
@@ -815,9 +984,40 @@ fn edit_user(
|
||||
user_to_edit.save(&conn)
|
||||
}
|
||||
|
||||
#[delete("/organizations/<org_id>/users", data = "<data>")]
|
||||
fn bulk_delete_user(org_id: String, data: JsonUpcase<OrgBulkIds>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
|
||||
let data: OrgBulkIds = data.into_inner().data;
|
||||
|
||||
let mut bulk_response = Vec::new();
|
||||
for org_user_id in data.Ids {
|
||||
let err_msg = match _delete_user(&org_id, &org_user_id, &headers, &conn) {
|
||||
Ok(_) => String::from(""),
|
||||
Err(e) => format!("{:?}", e),
|
||||
};
|
||||
|
||||
bulk_response.push(json!(
|
||||
{
|
||||
"Object": "OrganizationBulkConfirmResponseModel",
|
||||
"Id": org_user_id,
|
||||
"Error": err_msg
|
||||
}
|
||||
))
|
||||
}
|
||||
|
||||
Json(json!({
|
||||
"Data": bulk_response,
|
||||
"Object": "list",
|
||||
"ContinuationToken": null
|
||||
}))
|
||||
}
|
||||
|
||||
#[delete("/organizations/<org_id>/users/<org_user_id>")]
|
||||
fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
|
||||
let user_to_delete = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
|
||||
_delete_user(&org_id, &org_user_id, &headers, &conn)
|
||||
}
|
||||
|
||||
fn _delete_user(org_id: &str, org_user_id: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
|
||||
let user_to_delete = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
|
||||
Some(user) => user,
|
||||
None => err!("User to delete isn't member of the organization"),
|
||||
};
|
||||
@@ -828,14 +1028,14 @@ fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn:
|
||||
|
||||
if user_to_delete.atype == UserOrgType::Owner {
|
||||
// Removing owner, check that there are at least another owner
|
||||
let num_owners = UserOrganization::find_by_org_and_type(&org_id, UserOrgType::Owner as i32, &conn).len();
|
||||
let num_owners = UserOrganization::find_by_org_and_type(org_id, UserOrgType::Owner as i32, conn).len();
|
||||
|
||||
if num_owners <= 1 {
|
||||
err!("Can't delete the last owner")
|
||||
}
|
||||
}
|
||||
|
||||
user_to_delete.delete(&conn)
|
||||
user_to_delete.delete(conn)
|
||||
}
|
||||
|
||||
#[post("/organizations/<org_id>/users/<org_user_id>/delete")]
|
||||
@@ -843,6 +1043,38 @@ fn post_delete_user(org_id: String, org_user_id: String, headers: AdminHeaders,
|
||||
delete_user(org_id, org_user_id, headers, conn)
|
||||
}
|
||||
|
||||
#[post("/organizations/<org_id>/users/public-keys", data = "<data>")]
|
||||
fn bulk_public_keys(org_id: String, data: JsonUpcase<OrgBulkIds>, _headers: AdminHeaders, conn: DbConn) -> Json<Value> {
|
||||
let data: OrgBulkIds = data.into_inner().data;
|
||||
|
||||
let mut bulk_response = Vec::new();
|
||||
// Check all received UserOrg UUID's and find the matching User to retreive the public-key.
|
||||
// If the user does not exists, just ignore it, and do not return any information regarding that UserOrg UUID.
|
||||
// The web-vault will then ignore that user for the folowing steps.
|
||||
for user_org_id in data.Ids {
|
||||
match UserOrganization::find_by_uuid_and_org(&user_org_id, &org_id, &conn) {
|
||||
Some(user_org) => match User::find_by_uuid(&user_org.user_uuid, &conn) {
|
||||
Some(user) => bulk_response.push(json!(
|
||||
{
|
||||
"Object": "organizationUserPublicKeyResponseModel",
|
||||
"Id": user_org_id,
|
||||
"UserId": user.uuid,
|
||||
"Key": user.public_key
|
||||
}
|
||||
)),
|
||||
None => debug!("User doesn't exist"),
|
||||
},
|
||||
None => debug!("UserOrg doesn't exist"),
|
||||
}
|
||||
}
|
||||
|
||||
Json(json!({
|
||||
"Data": bulk_response,
|
||||
"Object": "list",
|
||||
"ContinuationToken": null
|
||||
}))
|
||||
}
|
||||
|
||||
use super::ciphers::update_cipher_from_data;
|
||||
use super::ciphers::CipherData;
|
||||
|
||||
@@ -980,7 +1212,7 @@ struct PolicyData {
    enabled: bool,
    #[serde(rename = "type")]
    _type: i32,
-    data: Value,
+    data: Option<Value>,
}

#[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")]
@@ -998,6 +1230,56 @@ fn put_policy(
        None => err!("Invalid policy type"),
    };
+    // If enabling the TwoFactorAuthentication policy, remove this org's members that do not have 2FA
+    if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled {
+        let org_members = UserOrganization::find_by_org(&org_id, &conn);

+        for member in org_members.into_iter() {
+            let user_twofactor_disabled = TwoFactor::find_by_user(&member.user_uuid, &conn).is_empty();

+            // Policy only applies to non-Owner/non-Admin members who have accepted joining the org
+            if user_twofactor_disabled
+                && member.atype < UserOrgType::Admin
+                && member.status != UserOrgStatus::Invited as i32
+            {
+                if CONFIG.mail_enabled() {
+                    let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
+                    let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();

+                    mail::send_2fa_removed_from_org(&user.email, &org.name)?;
+                }
+                member.delete(&conn)?;
+            }
+        }
+    }

+    // If enabling the SingleOrg policy, remove this org's members that are members of other orgs
+    if pol_type_enum == OrgPolicyType::SingleOrg && data.enabled {
+        let org_members = UserOrganization::find_by_org(&org_id, &conn);

+        for member in org_members.into_iter() {
+            // Policy only applies to non-Owner/non-Admin members who have accepted joining the org
+            if member.atype < UserOrgType::Admin && member.status != UserOrgStatus::Invited as i32 {
+                let is_member_of_another_org = UserOrganization::find_any_state_by_user(&member.user_uuid, &conn)
+                    .into_iter()
+                    // Other UserOrganizations the user has accepted membership in
+                    .filter(|uo| uo.uuid != member.uuid && uo.status != UserOrgStatus::Invited as i32)
+                    .count()
+                    > 1;

+                if is_member_of_another_org {
+                    if CONFIG.mail_enabled() {
+                        let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
+                        let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();

+                        mail::send_single_org_removed_from_org(&user.email, &org.name)?;
+                    }
+                    member.delete(&conn)?;
+                }
+            }
+        }
+    }
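With `data` now an `Option<Value>`, a client can omit or null the policy payload entirely. A hedged sketch of a body the PUT route above would accept (JsonUpcase deserializes PascalCase keys; the numeric type must match an OrgPolicyType discriminant, and 0 is only illustrative):

use serde_json::json;

fn main() {
    // Enable a policy without any extra configuration data.
    let body = json!({
        "Enabled": true,
        "Type": 0,     // OrgPolicyType discriminant, e.g. TwoFactorAuthentication
        "Data": null   // legal now that PolicyData.data is Option<Value>
    });
    println!("{}", body);
}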
    let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
        Some(p) => p,
        None => OrgPolicy::new(org_id, pol_type_enum, "{}".to_string()),
@@ -1012,75 +1294,47 @@ fn put_policy(

#[allow(unused_variables)]
#[get("/organizations/<org_id>/tax")]
-fn get_organization_tax(org_id: String, _headers: Headers, _conn: DbConn) -> EmptyResult {
+fn get_organization_tax(org_id: String, _headers: Headers) -> Json<Value> {
    // Prevent a 404 error, which also causes Javascript errors.
-    err!("Only allowed when not self hosted.")
+    // Upstream sends "Only allowed when not self hosted." as an error message.
+    // If we do the same it will also output this to the log, which is overkill.
+    // An empty list/data also works fine.
+    Json(_empty_data_json())
}

#[get("/plans")]
-fn get_plans(_headers: Headers, _conn: DbConn) -> Json<Value> {
+fn get_plans(_headers: Headers) -> Json<Value> {
+    // Respond with a minimal json just enough to allow the creation of a new organization.
    Json(json!({
        "Object": "list",
-        "Data": [
-            {
+        "Data": [{
            "Object": "plan",
            "Type": 0,
            "Product": 0,
            "Name": "Free",
-            "IsAnnual": false,
            "NameLocalizationKey": "planNameFree",
-            "DescriptionLocalizationKey": "planDescFree",
-            "CanBeUsedByBusiness": false,
-            "BaseSeats": 2,
-            "BaseStorageGb": null,
-            "MaxCollections": 2,
-            "MaxUsers": 2,
-            "HasAdditionalSeatsOption": false,
-            "MaxAdditionalSeats": null,
-            "HasAdditionalStorageOption": false,
-            "MaxAdditionalStorage": null,
-            "HasPremiumAccessOption": false,
-            "TrialPeriodDays": null,
-            "HasSelfHost": false,
-            "HasPolicies": false,
-            "HasGroups": false,
-            "HasDirectory": false,
-            "HasEvents": false,
-            "HasTotp": false,
-            "Has2fa": false,
-            "HasApi": false,
-            "HasSso": false,
-            "UsersGetPremium": false,
-            "UpgradeSortOrder": -1,
-            "DisplaySortOrder": -1,
-            "LegacyYear": null,
-            "Disabled": false,
-            "StripePlanId": null,
-            "StripeSeatPlanId": null,
-            "StripeStoragePlanId": null,
-            "StripePremiumAccessPlanId": null,
-            "BasePrice": 0.0,
-            "SeatPrice": 0.0,
-            "AdditionalStoragePricePerGb": 0.0,
-            "PremiumAccessOptionPrice": 0.0
-            }
-        ],
+            "DescriptionLocalizationKey": "planDescFree"
+        }],
        "ContinuationToken": null
    }))
}

#[get("/plans/sales-tax-rates")]
-fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> Json<Value> {
+fn get_plans_tax_rates(_headers: Headers) -> Json<Value> {
    // Prevent a 404 error, which also causes Javascript errors.
-    Json(json!({
+    Json(_empty_data_json())
+}

+fn _empty_data_json() -> Value {
+    json!({
        "Object": "list",
        "Data": [],
        "ContinuationToken": null
-    }))
+    })
}
#[derive(Deserialize, Debug)]
-#[allow(non_snake_case)]
+#[allow(non_snake_case, dead_code)]
struct OrgImportGroupData {
    Name: String,       // "GroupName"
    ExternalId: String, // "cn=GroupName,ou=Groups,dc=example,dc=com"
@@ -1090,7 +1344,8 @@ struct OrgImportGroupData {
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportUserData {
-    Email: String, // "user@maildomain.net"
+    Email: String,      // "user@maildomain.net"
+    #[allow(dead_code)]
    ExternalId: String, // "uid=user,ou=People,dc=example,dc=com"
    Deleted: bool,
}
@@ -1098,6 +1353,7 @@ struct OrgImportUserData {
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportData {
+    #[allow(dead_code)]
    Groups: Vec<OrgImportGroupData>,
    OverwriteExisting: bool,
    Users: Vec<OrgImportUserData>,
@@ -10,6 +10,7 @@ use crate::{
    api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
    auth::{Headers, Host},
    db::{models::*, DbConn, DbPool},
+    util::SafeString,
    CONFIG,
};

@@ -17,6 +18,8 @@ const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer availab

pub fn routes() -> Vec<rocket::Route> {
    routes![
+        get_sends,
+        get_send,
        post_send,
        post_send_file,
        post_access,
@@ -127,6 +130,32 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
    Ok(send)
}

+#[get("/sends")]
+fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
+    let sends = Send::find_by_user(&headers.user.uuid, &conn);
+    let sends_json: Vec<Value> = sends.iter().map(|s| s.to_json()).collect();

+    Json(json!({
+        "Data": sends_json,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}

+#[get("/sends/<uuid>")]
+fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
+    let send = match Send::find_by_uuid(&uuid, &conn) {
+        Some(send) => send,
+        None => err!("Send not found"),
+    };

+    if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
+        err!("Send is not owned by user")
+    }

+    Ok(Json(send.to_json()))
+}
#[post("/sends", data = "<data>")]
|
||||
fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
|
||||
enforce_disable_send_policy(&headers, &conn)?;
|
||||
@@ -138,9 +167,9 @@ fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Not
|
||||
err!("File sends should use /api/sends/file")
|
||||
}
|
||||
|
||||
let mut send = create_send(data, headers.user.uuid.clone())?;
|
||||
let mut send = create_send(data, headers.user.uuid)?;
|
||||
send.save(&conn)?;
|
||||
nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
|
||||
nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn));
|
||||
|
||||
Ok(Json(send.to_json()))
|
||||
}
|
||||
@@ -165,23 +194,23 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
|
||||
let data = serde_json::from_str::<crate::util::UpCase<SendData>>(&buf)?;
|
||||
enforce_disable_hide_email_policy(&data.data, &headers, &conn)?;
|
||||
|
||||
// Get the file length and add an extra 10% to avoid issues
|
||||
const SIZE_110_MB: u64 = 115_343_360;
|
||||
// Get the file length and add an extra 5% to avoid issues
|
||||
const SIZE_525_MB: u64 = 550_502_400;
|
||||
|
||||
let size_limit = match CONFIG.user_attachment_limit() {
|
||||
Some(0) => err!("File uploads are disabled"),
|
||||
Some(limit_kb) => {
|
||||
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &conn);
|
||||
if left <= 0 {
|
||||
err!("Attachment size limit reached! Delete some files to open space")
|
||||
err!("Attachment storage limit reached! Delete some attachments to free up space")
|
||||
}
|
||||
std::cmp::Ord::max(left as u64, SIZE_110_MB)
|
||||
std::cmp::Ord::max(left as u64, SIZE_525_MB)
|
||||
}
|
||||
None => SIZE_110_MB,
|
||||
None => SIZE_525_MB,
|
||||
};
|
||||
|
||||
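Both size constants are MiB counts with the stated safety margin baked in: 115_343_360 bytes is exactly 110 × 1024² (100 MiB plus 10%), and 550_502_400 is 525 × 1024² (500 MiB plus 5%). A quick arithmetic check:

const SIZE_110_MB: u64 = 115_343_360;
const SIZE_525_MB: u64 = 550_502_400;

fn main() {
    // 100 MiB + 10% = 110 MiB; 500 MiB + 5% = 525 MiB.
    assert_eq!(110 * 1024 * 1024, SIZE_110_MB);
    assert_eq!(525 * 1024 * 1024, SIZE_525_MB);
}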
    // Create the Send
-    let mut send = create_send(data.data, headers.user.uuid.clone())?;
+    let mut send = create_send(data.data, headers.user.uuid)?;
    let file_id = crate::crypto::generate_send_id();

    if send.atype != SendType::File as i32 {
@@ -205,7 +234,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
        }
        SaveResult::Partial(_, reason) => {
            std::fs::remove_file(&file_path).ok();
-            err!(format!("Attachment size limit exceeded with this file: {:?}", reason));
+            err!(format!("Attachment storage limit exceeded with this file: {:?}", reason));
        }
        SaveResult::Error(e) => {
            std::fs::remove_file(&file_path).ok();
@@ -224,7 +253,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn

    // Save the changes in the database
    send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));

    Ok(Json(send.to_json()))
}
@@ -335,7 +364,7 @@ fn post_access_file(
}

#[get("/sends/<send_id>/<file_id>?<t>")]
-fn download_send(send_id: String, file_id: String, t: String) -> Option<NamedFile> {
+fn download_send(send_id: SafeString, file_id: SafeString, t: String) -> Option<NamedFile> {
    if let Ok(claims) = crate::auth::decode_send(&t) {
        if claims.sub == format!("{}/{}", send_id, file_id) {
            return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).ok();
@@ -396,7 +425,7 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
    }

    send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));

    Ok(Json(send.to_json()))
}
@@ -413,7 +442,7 @@ fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyR
    }

    send.delete(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendDelete, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn));

    Ok(())
}
@@ -433,7 +462,7 @@ fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -

    send.set_password(None);
    send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));

    Ok(Json(send.to_json()))
}
@@ -62,7 +62,7 @@ fn activate_authenticator(
    let data: EnableAuthenticatorData = data.into_inner().data;
    let password_hash = data.MasterPasswordHash;
    let key = data.Key;
-    let token = data.Token.into_i32()? as u64;
+    let token = data.Token.into_string();

    let mut user = headers.user;

@@ -81,7 +81,7 @@ fn activate_authenticator(
    }

    // Validate the token provided with the key, and save new twofactor
-    validate_totp_code(&user.uuid, token, &key.to_uppercase(), &ip, &conn)?;
+    validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn)?;

    _generate_recover_code(&mut user, &conn);

@@ -109,16 +109,15 @@ pub fn validate_totp_code_str(
    ip: &ClientIp,
    conn: &DbConn,
) -> EmptyResult {
-    let totp_code: u64 = match totp_code.parse() {
-        Ok(code) => code,
-        _ => err!("TOTP code is not a number"),
-    };
+    if !totp_code.chars().all(char::is_numeric) {
+        err!("TOTP code is not a number");
+    }

    validate_totp_code(user_uuid, totp_code, secret, ip, conn)
}

-pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
-    use oath::{totp_raw_custom_time, HashType};
+pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
+    use totp_lite::{totp_custom, Sha1};

    let decoded_secret = match BASE32.decode(secret.as_bytes()) {
        Ok(s) => s,
@@ -130,27 +129,28 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
        _ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
    };

-    // Get the current system time in UNIX Epoch (UTC)
-    let current_time = chrono::Utc::now();
-    let current_timestamp = current_time.timestamp();

    // The number of steps back and forward in time
    // Also check if we need to disable time drifted TOTP codes.
    // If that is the case, we set the steps to 0 so only the current TOTP is valid.
    let steps = !CONFIG.authenticator_disable_time_drift() as i64;

+    // Get the current system time in UNIX Epoch (UTC)
+    let current_time = chrono::Utc::now();
+    let current_timestamp = current_time.timestamp();

    for step in -steps..=steps {
        let time_step = current_timestamp / 30i64 + step;
-        // We need to calculate the time offset and cast it as an i128.
-        // Else we can't do math with it on a default u64 variable.

+        // We need to calculate the time offset and cast it as a u64.
+        // Since we only have times into the future and the totp generator needs a u64 instead of the default i64.
        let time = (current_timestamp + step * 30i64) as u64;
-        let generated = totp_raw_custom_time(&decoded_secret, 6, 0, 30, time, &HashType::SHA1);
+        let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);

        // Check that the given code equals the generated one and that the time_step is larger than the one last used.
        if generated == totp_code && time_step > twofactor.last_used as i64 {
            // If the step does not equal 0, the time has drifted on either the server or client side.
            if step != 0 {
-                info!("TOTP Time drift detected. The step offset is {}", step);
+                warn!("TOTP Time drift detected. The step offset is {}", step);
            }

            // Save the last used time step so only totp time steps higher than this one are allowed.
@@ -159,7 +159,7 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
            twofactor.save(conn)?;
            return Ok(());
        } else if generated == totp_code && time_step <= twofactor.last_used as i64 {
-            warn!("This or a TOTP code within {} steps back and forward has already been used!", steps);
+            warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
            err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
        }
    }
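As a self-contained illustration of the windowed check above, this sketch validates a code against the current 30-second step and one step either side, using the same totp_lite call (it assumes the secret is already Base32-decoded; drift logging and replay tracking are omitted):

use totp_lite::{totp_custom, Sha1};

// Return true if `code` matches the TOTP for `now` or for one 30-second
// step before or after it (the same ±1 window used above).
fn totp_matches(decoded_secret: &[u8], code: &str, now: i64) -> bool {
    (-1i64..=1).any(|step| {
        let time = (now + step * 30) as u64;
        totp_custom::<Sha1>(30, 6, decoded_secret, time) == code
    })
}

fn main() {
    let secret = b"12345678901234567890"; // RFC 6238 test secret
    let now = 59; // 1970-01-01T00:00:59Z
    // RFC 6238's SHA-1 test vector for t=59 is 94287082 (8 digits);
    // with 6 digits the expected code is its last six digits, 287082.
    assert!(totp_matches(secret, "287082", now));
}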
@@ -80,14 +80,16 @@ fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) ->
        err!("Invalid password");
    }

-    let type_ = TwoFactorType::Email as i32;
-    let enabled = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
-        Some(x) => x.enabled,
-        _ => false,
+    let (enabled, mfa_email) = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn) {
+        Some(x) => {
+            let twofactor_data = EmailTokenData::from_json(&x.data)?;
+            (true, json!(twofactor_data.email))
+        }
+        _ => (false, json!(null)),
    };

    Ok(Json(json!({
-        "Email": user.email,
+        "Email": mfa_email,
        "Enabled": enabled,
        "Object": "twoFactorEmail"
    })))
@@ -1,3 +1,4 @@
+use chrono::{Duration, Utc};
use data_encoding::BASE32;
use rocket::Route;
use rocket_contrib::json::Json;
@@ -7,10 +8,8 @@ use crate::{
    api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
    auth::Headers,
    crypto,
-    db::{
-        models::{TwoFactor, User},
-        DbConn,
-    },
+    db::{models::*, DbConn, DbPool},
    mail, CONFIG,
};

pub mod authenticator;
@@ -130,6 +129,23 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
        twofactor.delete(&conn)?;
    }

+    let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &conn).is_empty();

+    if twofactor_disabled {
+        let policy_type = OrgPolicyType::TwoFactorAuthentication;
+        let org_list = UserOrganization::find_by_user_and_policy(&user.uuid, policy_type, &conn);

+        for user_org in org_list.into_iter() {
+            if user_org.atype < UserOrgType::Admin {
+                if CONFIG.mail_enabled() {
+                    let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).unwrap();
+                    mail::send_2fa_removed_from_org(&user.email, &org.name)?;
+                }
+                user_org.delete(&conn)?;
+            }
+        }
+    }

    Ok(Json(json!({
        "Enabled": false,
        "Type": type_,
@@ -141,3 +157,33 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
    disable_twofactor(data, headers, conn)
}

+pub fn send_incomplete_2fa_notifications(pool: DbPool) {
+    debug!("Sending notifications for incomplete 2FA logins");

+    if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
+        return;
+    }

+    let conn = match pool.get() {
+        Ok(conn) => conn,
+        _ => {
+            error!("Failed to get DB connection in send_incomplete_2fa_notifications()");
+            return;
+        }
+    };

+    let now = Utc::now().naive_utc();
+    let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
+    let incomplete_logins = TwoFactorIncomplete::find_logins_before(&(now - time_limit), &conn);
+    for login in incomplete_logins {
+        let user = User::find_by_uuid(&login.user_uuid, &conn).expect("User not found");
+        info!(
+            "User {} did not complete a 2FA login within the configured time limit. IP: {}",
+            user.email, login.ip_address
+        );
+        mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
+            .expect("Error sending incomplete 2FA email");
+        login.delete(&conn).expect("Error deleting incomplete 2FA record");
+    }
+}
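The function above only drains the queue; firing it on the INCOMPLETE_2FA_SCHEDULE cron line happens in the scheduler setup elsewhere. A rough sketch of that wiring, assuming the job_scheduler crate (the actual loop in src/main.rs may differ):

use job_scheduler::{Job, JobScheduler};
use std::{thread, time::Duration};

// Hypothetical wiring: run send_incomplete_2fa_notifications on the
// configured cron line ("30 * * * * *" = second 30 of every minute).
fn schedule_jobs(pool: crate::db::DbPool) {
    let mut sched = JobScheduler::new();
    sched.add(Job::new("30 * * * * *".parse().unwrap(), move || {
        crate::api::send_incomplete_2fa_notifications(pool.clone());
    }));
    loop {
        sched.tick();
        thread::sleep(Duration::from_millis(500));
    }
}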
@@ -1,6 +1,7 @@
use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value;
+use url::Url;
use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState, RegistrationState, Webauthn};

use crate::{
@@ -22,19 +23,18 @@ pub fn routes() -> Vec<Route> {

struct WebauthnConfig {
    url: String,
+    origin: Url,
    rpid: String,
}

impl WebauthnConfig {
    fn load() -> Webauthn<Self> {
        let domain = CONFIG.domain();
+        let domain_origin = CONFIG.domain_origin();
        Webauthn::new(Self {
-            rpid: reqwest::Url::parse(&domain)
-                .map(|u| u.domain().map(str::to_owned))
-                .ok()
-                .flatten()
-                .unwrap_or_default(),
+            rpid: Url::parse(&domain).map(|u| u.domain().map(str::to_owned)).ok().flatten().unwrap_or_default(),
            url: domain,
+            origin: Url::parse(&domain_origin).unwrap(),
        })
    }
}
@@ -44,26 +44,19 @@ impl webauthn_rs::WebauthnConfig for WebauthnConfig {
        &self.url
    }

-    fn get_origin(&self) -> &str {
-        &self.url
+    fn get_origin(&self) -> &Url {
+        &self.origin
    }

    fn get_relying_party_id(&self) -> &str {
        &self.rpid
    }
}

-impl webauthn_rs::WebauthnConfig for &WebauthnConfig {
-    fn get_relying_party_name(&self) -> &str {
-        &self.url
-    }

-    fn get_origin(&self) -> &str {
-        &self.url
-    }

-    fn get_relying_party_id(&self) -> &str {
-        &self.rpid
+    /// We have WebAuthn configured to discourage user verification;
+    /// if we leave this enabled, it will cause verification issues when a key sends UV=1.
+    /// Upstream (the library they use) ignores this when set to discouraged, so we should too.
+    fn get_require_uv_consistency(&self) -> bool {
+        false
    }
}

@@ -289,15 +282,14 @@ fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, conn: DbCo
        err!("Invalid password");
    }

-    let type_ = TwoFactorType::Webauthn as i32;
-    let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, type_, &conn) {
+    let mut tf = match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &conn) {
        Some(tf) => tf,
        None => err!("Webauthn data not found!"),
    };

    let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;

-    let item_pos = match data.iter().position(|r| r.id != id) {
+    let item_pos = match data.iter().position(|r| r.id == id) {
        Some(p) => p,
        None => err!("Webauthn entry not found"),
    };
@@ -250,7 +250,7 @@ fn is_domain_blacklisted(domain: &str) -> bool {

        // Use the pre-generated Regex stored in a Lazy HashMap.
        if regex.is_match(domain) {
-            warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
+            warn!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
            is_blacklisted = true;
        }
    }
@@ -555,7 +555,7 @@ fn get_page(url: &str) -> Result<Response, Error> {

fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
    if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()) {
-        err!("Favicon rel linked to a blacklisted domain!");
+        err!("Favicon resolves to a blacklisted domain or IP!", url);
    }

    let mut client = CLIENT.get(url);
@@ -563,7 +563,10 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
        client = client.header("Referer", referer)
    }

-    client.send()?.error_for_status().map_err(Into::into)
+    match client.send() {
+        Ok(c) => c.error_for_status().map_err(Into::into),
+        Err(e) => err_silent!(format!("{}", e)),
+    }
}

/// Returns an Integer with the priority of the icon type to prefer.
@@ -647,7 +650,7 @@ fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {

fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
    if is_domain_blacklisted(domain) {
-        err!("Domain is blacklisted", domain)
+        err_silent!("Domain is blacklisted", domain)
    }

    let icon_result = get_icon_url(domain)?;
@@ -676,7 +679,7 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
                    break;
                }
            }
-            _ => warn!("Extracted icon from data:image uri is invalid"),
+            _ => debug!("Extracted icon from data:image uri is invalid"),
        };
    } else {
        match get_page_with_referer(&icon.href, &icon_result.referer) {
@@ -692,13 +695,13 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
                info!("Downloaded icon from {}", icon.href);
                break;
            }
-            _ => warn!("Download failed for {}", icon.href),
+            Err(e) => debug!("{:?}", e),
        };
    }
}

if buffer.is_empty() {
-    err!("Empty response downloading icon")
+    err_silent!("Empty response or unable find a valid icon", domain);
}

Ok((buffer, icon_type))
@@ -1,4 +1,4 @@
-use chrono::Local;
+use chrono::Utc;
use num_traits::FromPrimitive;
use rocket::{
    request::{Form, FormItems, FromForm},
@@ -56,7 +56,7 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {

    // COMMON
    let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
-    let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);

    let (access_token, expires_in) = device.refresh_tokens(&user, orgs);

@@ -102,10 +102,9 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
        err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
    }

-    let now = Local::now();
+    let now = Utc::now().naive_utc();

    if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
-        let now = now.naive_utc();
        if user.last_verifying_at.is_none()
            || now.signed_duration_since(user.last_verifying_at.unwrap()).num_seconds()
                > CONFIG.signups_verify_resend_time() as i64
@@ -147,7 +146,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
    }

    // Common
-    let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);

    let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
    device.save(&conn)?;
@@ -219,6 +218,8 @@ fn twofactor_auth(
        return Ok(None);
    }

+    TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn)?;

    let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
    let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one

@@ -262,6 +263,8 @@ fn twofactor_auth(
        _ => err!("Invalid two factor provider"),
    }

+    TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn)?;

    if !CONFIG.disable_2fa_remember() && remember == 1 {
        Ok(Some(device.refresh_twofactor_remember()))
    } else {
@@ -13,6 +13,8 @@ pub use crate::api::{
    core::purge_sends,
    core::purge_trashed_ciphers,
    core::routes as core_routes,
+    core::two_factor::send_incomplete_2fa_notifications,
+    core::{emergency_notification_reminder_job, emergency_request_timeout_job},
    icons::routes as icons_routes,
    identity::routes as identity_routes,
    notifications::routes as notifications_routes,
@@ -4,7 +4,7 @@ use rocket::Route;
use rocket_contrib::json::Json;
use serde_json::Value as JsonValue;

-use crate::{api::EmptyResult, auth::Headers, db::DbConn, Error, CONFIG};
+use crate::{api::EmptyResult, auth::Headers, Error, CONFIG};

pub fn routes() -> Vec<Route> {
    routes![negotiate, websockets_err]
@@ -30,7 +30,7 @@ fn websockets_err() -> EmptyResult {
}

#[post("/hub/negotiate")]
-fn negotiate(_headers: Headers, _conn: DbConn) -> Json<JsonValue> {
+fn negotiate(_headers: Headers) -> Json<JsonValue> {
    use crate::crypto;
    use data_encoding::BASE64URL;

@@ -65,7 +65,7 @@ use chashmap::CHashMap;
use chrono::NaiveDateTime;
use serde_json::from_str;

-use crate::db::models::{Cipher, Folder, User};
+use crate::db::models::{Cipher, Folder, Send, User};

use rmpv::Value;

@@ -335,6 +335,23 @@ impl WebSocketUsers {
            self.send_update(uuid, &data).ok();
        }
    }

+    pub fn send_send_update(&self, ut: UpdateType, send: &Send, user_uuids: &[String]) {
+        let user_uuid = convert_option(send.user_uuid.clone());

+        let data = create_update(
+            vec![
+                ("Id".into(), send.uuid.clone().into()),
+                ("UserId".into(), user_uuid),
+                ("RevisionDate".into(), serialize_date(send.revision_date)),
+            ],
+            ut,
+        );

+        for uuid in user_uuids {
+            self.send_update(uuid, &data).ok();
+        }
+    }
}

/* Message Structure
@@ -408,12 +425,10 @@ pub fn start_notification_server() -> WebSocketUsers {

    if CONFIG.websocket_enabled() {
        thread::spawn(move || {
-            let settings = ws::Settings {
-                max_connections: 500,
-                queue_size: 2,
-                panic_on_internal: false,
-                ..Default::default()
-            };
+            let mut settings = ws::Settings::default();
+            settings.max_connections = 500;
+            settings.queue_size = 2;
+            settings.panic_on_internal = false;

            ws::Builder::new()
                .with_settings(settings)
@@ -4,7 +4,11 @@ use rocket::{http::ContentType, response::content::Content, response::NamedFile,
use rocket_contrib::json::Json;
use serde_json::Value;

-use crate::{error::Error, util::Cached, CONFIG};
+use crate::{
+    error::Error,
+    util::{Cached, SafeString},
+    CONFIG,
+};

pub fn routes() -> Vec<Route> {
    // If adding more routes here, consider also adding them to
@@ -56,12 +60,14 @@ fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
}

#[get("/attachments/<uuid>/<file_id>")]
-fn attachments(uuid: String, file_id: String) -> Option<NamedFile> {
+fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
    NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).ok()
}
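SafeString, added to util.rs in this change set, guards these user-supplied path segments. Its real implementation lives in util.rs and is not shown here; the sketch below is only a hypothetical stand-in illustrating the idea for Rocket 0.4's FromParam — reject anything that could traverse out of the target folder:

use rocket::http::RawStr;
use rocket::request::FromParam;

// Hypothetical stand-in, not the actual util::SafeString: accept a path
// segment only if it cannot escape the attachments/sends folders.
pub struct SafeString(String);

impl<'r> FromParam<'r> for SafeString {
    type Error = ();

    fn from_param(param: &'r RawStr) -> Result<Self, Self::Error> {
        let s = param.percent_decode().map_err(|_| ())?;
        if s.is_empty() || s.contains('/') || s.contains('\\') || s.contains("..") {
            return Err(());
        }
        Ok(SafeString(s.into_owned()))
    }
}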
+// We use DbConn here to let the alive healthcheck also verify the database connection.
+use crate::db::DbConn;
#[get("/alive")]
-fn alive() -> Json<String> {
+fn alive(_conn: DbConn) -> Json<String> {
    use crate::util::format_date;
    use chrono::Utc;
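Since /alive now acquires a DbConn, a database outage makes the route fail, so an external probe against it covers the database too. A hedged sketch of such a probe using reqwest's blocking client (the base URL is a placeholder, and the blocking feature is assumed to be enabled):

// Returns true only when the server responds and, implicitly, when it
// could get a database connection to serve /alive.
fn is_alive(base_url: &str) -> bool {
    reqwest::blocking::get(format!("{}/alive", base_url))
        .map(|resp| resp.status().is_success())
        .unwrap_or(false)
}

fn main() {
    println!("healthy: {}", is_alive("http://localhost:80"));
}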
src/auth.rs
@@ -22,6 +22,8 @@ static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));

pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
static JWT_INVITE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|invite", CONFIG.domain_origin()));
+static JWT_EMERGENCY_ACCESS_INVITE_ISSUER: Lazy<String> =
+    Lazy::new(|| format!("{}|emergencyaccessinvite", CONFIG.domain_origin()));
static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFIG.domain_origin()));
static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
@@ -75,6 +77,10 @@ pub fn decode_invite(token: &str) -> Result<InviteJwtClaims, Error> {
    decode_jwt(token, JWT_INVITE_ISSUER.to_string())
}

+pub fn decode_emergency_access_invite(token: &str) -> Result<EmergencyAccessInviteJwtClaims, Error> {
+    decode_jwt(token, JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string())
+}

pub fn decode_delete(token: &str) -> Result<BasicJwtClaims, Error> {
    decode_jwt(token, JWT_DELETE_ISSUER.to_string())
}
@@ -159,6 +165,43 @@ pub fn generate_invite_claims(
    }
}

+#[derive(Debug, Serialize, Deserialize)]
+pub struct EmergencyAccessInviteJwtClaims {
+    // Not before
+    pub nbf: i64,
+    // Expiration time
+    pub exp: i64,
+    // Issuer
+    pub iss: String,
+    // Subject
+    pub sub: String,

+    pub email: String,
+    pub emer_id: Option<String>,
+    pub grantor_name: Option<String>,
+    pub grantor_email: Option<String>,
+}

+pub fn generate_emergency_access_invite_claims(
+    uuid: String,
+    email: String,
+    emer_id: Option<String>,
+    grantor_name: Option<String>,
+    grantor_email: Option<String>,
+) -> EmergencyAccessInviteJwtClaims {
+    let time_now = Utc::now().naive_utc();
+    EmergencyAccessInviteJwtClaims {
+        nbf: time_now.timestamp(),
+        exp: (time_now + Duration::days(5)).timestamp(),
+        iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
+        sub: uuid,
+        email,
+        emer_id,
+        grantor_name,
+        grantor_email,
+    }
+}
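Issuing and consuming these claims follows the same pattern as the existing invite tokens. A brief sketch, assuming the file's existing encode_jwt helper; all field values are placeholders:

// Mint a 5-day invite token for an emergency-access grantee...
let claims = generate_emergency_access_invite_claims(
    "<grantee-user-uuid>".into(),
    "grantee@example.com".into(),
    Some("<emergency-access-uuid>".into()),
    Some("Grantor Name".into()),
    Some("grantor@example.com".into()),
);
let token = encode_jwt(&claims);

// ...and validate it later on the accept endpoint; the dedicated issuer
// string ensures an ordinary invite token cannot be replayed here.
let claims = decode_emergency_access_invite(&token)?;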
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicJwtClaims {
    // Not before
@@ -325,8 +368,19 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
            _ => err_handler!("Error getting current route for stamp exception"),
        };

-        // Check if both match, if not this route is not allowed with the current security stamp.
-        if stamp_exception.route != current_route {
+        // Check if the stamp exception has expired first.
+        // Then, check if the current route matches any of the allowed routes.
+        // After that, check that the stamp in the exception matches the one in the claims.
+        if Utc::now().naive_utc().timestamp() > stamp_exception.expire {
+            // If the stamp exception has expired, remove it from the database.
+            // This prevents checking this stamp exception for new requests.
+            let mut user = user;
+            user.reset_stamp_exception();
+            if let Err(e) = user.save(&conn) {
+                error!("Error updating user: {:#?}", e);
+            }
+            err_handler!("Stamp exception is expired")
+        } else if !stamp_exception.routes.contains(&current_route.to_string()) {
+            err_handler!("Invalid security stamp: Current route and exception route do not match")
+        } else if stamp_exception.security_stamp != claims.sstamp {
+            err_handler!("Invalid security stamp for matched stamp exception")
src/config.rs
@@ -2,7 +2,6 @@ use std::process::exit;
use std::sync::RwLock;

use once_cell::sync::Lazy;
-use regex::Regex;
use reqwest::Url;

use crate::{
@@ -23,21 +22,6 @@ pub static CONFIG: Lazy<Config> = Lazy::new(|| {
    })
});

-static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());
-const PRIVACY_CONFIG: &[&str] = &[
-    "allowed_iframe_ancestors",
-    "database_url",
-    "domain_origin",
-    "domain_path",
-    "domain",
-    "helo_name",
-    "org_creation_users",
-    "signups_domains_whitelist",
-    "smtp_from",
-    "smtp_host",
-    "smtp_username",
-];

pub type Pass = String;

macro_rules! make_config {
@@ -61,7 +45,7 @@ macro_rules! make_config {
        _overrides: Vec<String>,
    }

-    #[derive(Debug, Clone, Default, Deserialize, Serialize)]
+    #[derive(Clone, Default, Deserialize, Serialize)]
    pub struct ConfigBuilder {
        $($(
            #[serde(skip_serializing_if = "Option::is_none")]
@@ -133,19 +117,6 @@ macro_rules! make_config {
            builder
        }

-        /// Returns a new builder with all the elements from self,
-        /// except those that are equal in both sides
-        fn _remove(&self, other: &Self) -> Self {
-            let mut builder = ConfigBuilder::default();
-            $($(
-                if &self.$name != &other.$name {
-                    builder.$name = self.$name.clone();
-                }

-            )+)+
-            builder
-        }
        fn build(&self) -> ConfigItems {
            let mut config = ConfigItems::default();
            let _domain_set = self.domain.is_some();
@@ -161,12 +132,13 @@ macro_rules! make_config {
        }
    }

-    #[derive(Debug, Clone, Default)]
-    pub struct ConfigItems { $($(pub $name: make_config!{@type $ty, $none_action}, )+)+ }
+    #[derive(Clone, Default)]
+    struct ConfigItems { $($( $name: make_config!{@type $ty, $none_action}, )+)+ }

    #[allow(unused)]
    impl Config {
        $($(
            $(#[doc = $doc])+
            pub fn $name(&self) -> make_config!{@type $ty, $none_action} {
                self.inner.read().unwrap().config.$name.clone()
            }
@@ -189,38 +161,91 @@ macro_rules! make_config {

            fn _get_doc(doc: &str) -> serde_json::Value {
                let mut split = doc.split("|>").map(str::trim);
-                json!({
-                    "name": split.next(),
-                    "description": split.next()

+                // We do not use the json!() macro here since that causes a lot of macro recursion.
+                // This slows down compile time and it also causes issues with rust-analyzer
+                serde_json::Value::Object({
+                    let mut doc_json = serde_json::Map::new();
+                    doc_json.insert("name".into(), serde_json::to_value(split.next()).unwrap());
+                    doc_json.insert("description".into(), serde_json::to_value(split.next()).unwrap());
+                    doc_json
                })
            }

-            json!([ $({
-                "group": stringify!($group),
-                "grouptoggle": stringify!($($group_enabled)?),
-                "groupdoc": make_config!{ @show $($groupdoc)? },
-                "elements": [
-                    $( {
-                        "editable": $editable,
-                        "name": stringify!($name),
-                        "value": cfg.$name,
-                        "default": def.$name,
-                        "type": _get_form_type(stringify!($ty)),
-                        "doc": _get_doc(concat!($($doc),+)),
-                        "overridden": overriden.contains(&stringify!($name).to_uppercase()),
-                    }, )+
-                ]}, )+ ])
+            // We do not use the json!() macro here since that causes a lot of macro recursion.
+            // This slows down compile time and it also causes issues with rust-analyzer
+            serde_json::Value::Array(<[_]>::into_vec(Box::new([
+                $(
+                    serde_json::Value::Object({
+                        let mut group = serde_json::Map::new();
+                        group.insert("group".into(), (stringify!($group)).into());
+                        group.insert("grouptoggle".into(), (stringify!($($group_enabled)?)).into());
+                        group.insert("groupdoc".into(), (make_config!{ @show $($groupdoc)? }).into());

+                        group.insert("elements".into(), serde_json::Value::Array(<[_]>::into_vec(Box::new([
+                            $(
+                                serde_json::Value::Object({
+                                    let mut element = serde_json::Map::new();
+                                    element.insert("editable".into(), ($editable).into());
+                                    element.insert("name".into(), (stringify!($name)).into());
+                                    element.insert("value".into(), serde_json::to_value(cfg.$name).unwrap());
+                                    element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
+                                    element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
+                                    element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
+                                    element.insert("overridden".into(), (overriden.contains(&stringify!($name).to_uppercase())).into());
+                                    element
+                                }),
+                            )+
+                        ]))));
+                        group
+                    }),
+                )+
+            ])))
        }
        pub fn get_support_json(&self) -> serde_json::Value {
+            // Define which config keys need to be masked.
+            // Pass types will always be masked; no need to put them in the list.
+            // Besides Pass, only String types will be masked via _privacy_mask.
+            const PRIVACY_CONFIG: &[&str] = &[
+                "allowed_iframe_ancestors",
+                "database_url",
+                "domain_origin",
+                "domain_path",
+                "domain",
+                "helo_name",
+                "org_creation_users",
+                "signups_domains_whitelist",
+                "smtp_from",
+                "smtp_host",
+                "smtp_username",
+            ];

            let cfg = {
                let inner = &self.inner.read().unwrap();
                inner.config.clone()
            };

-            json!({ $($(
-                stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
-            )+)+ })

+            /// We map over the string and mask all alphanumeric, _ and - characters with '*'.
+            /// This is the fastest way (microseconds) compared to using a regex (milliseconds).
+            fn _privacy_mask(value: &str) -> String {
+                value.chars().map(|c|
+                    match c {
+                        c if c.is_alphanumeric() => '*',
+                        '_' => '*',
+                        '-' => '*',
+                        _ => c
+                    }
+                ).collect::<String>()
+            }

+            serde_json::Value::Object({
+                let mut json = serde_json::Map::new();
+                $($(
+                    json.insert(stringify!($name).into(), make_config!{ @supportstr $name, cfg.$name, $ty, $none_action });
+                )+)+;
+                json
+            })
        }
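The masking keeps only punctuation and separators, so the shape of a value survives without leaking its content. A quick check of the rules above:

fn main() {
    // Same rules as _privacy_mask: alphanumerics, '_' and '-' become '*'.
    let mask = |value: &str| -> String {
        value.chars().map(|c| match c {
            c if c.is_alphanumeric() => '*',
            '_' | '-' => '*',
            _ => c,
        }).collect()
    };
    assert_eq!(mask("https://vault.example.com"), "*****://*****.*******.***");
}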
        pub fn get_overrides(&self) -> Vec<String> {
@@ -228,29 +253,30 @@ macro_rules! make_config {
                let inner = &self.inner.read().unwrap();
                inner._overrides.clone()
            };

            overrides
        }
    }
};

// Support string print
-( @supportstr $name:ident, $value:expr, Pass, option ) => { $value.as_ref().map(|_| String::from("***")) }; // Optional pass, we map to an Option<String> with "***"
-( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { String::from("***") }; // Required pass, we return "***"
-( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, Pass, option ) => { serde_json::to_value($value.as_ref().map(|_| String::from("***"))).unwrap() }; // Optional pass, we map to an Option<String> with "***"
+( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { "***".into() }; // Required pass, we return "***"
+( @supportstr $name:ident, $value:expr, String, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
    if PRIVACY_CONFIG.contains(&stringify!($name)) {
-        json!($value.as_ref().map(|x| PRIVACY_REGEX.replace_all(&x.to_string(), "${1}*").to_string()))
+        serde_json::to_value($value.as_ref().map(|x| _privacy_mask(x) )).unwrap()
    } else {
-        json!($value)
+        serde_json::to_value($value).unwrap()
    }
};
-( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, String, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
    if PRIVACY_CONFIG.contains(&stringify!($name)) {
-        json!(PRIVACY_REGEX.replace_all(&$value.to_string(), "${1}*").to_string())
-    } else {
-        json!($value)
-    }
+        _privacy_mask(&$value).into()
+    } else {
+        ($value).into()
+    }
};
+( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { serde_json::to_value($value).unwrap() }; // Optional other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { ($value).into() }; // Required other value, we return as is or convert to string to apply the privacy config

// Group or empty string
( @show ) => { "" };
@@ -300,8 +326,6 @@ make_config! {
    data_folder: String, false, def, "data".to_string();
    /// Database URL
    database_url: String, false, auto, |c| format!("{}/{}", c.data_folder, "db.sqlite3");
-    /// Database connection pool size
-    database_max_conns: u32, false, def, 10;
    /// Icon cache folder
    icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
    /// Attachments folder
@@ -333,6 +357,15 @@ make_config! {
    /// Trash purge schedule |> Cron schedule of the job that checks for trashed items to delete permanently.
    /// Defaults to daily. Set blank to disable this job.
    trash_purge_schedule: String, false, def, "0 5 0 * * *".to_string();
+    /// Incomplete 2FA login schedule |> Cron schedule of the job that checks for incomplete 2FA logins.
+    /// Defaults to once every minute. Set blank to disable this job.
+    incomplete_2fa_schedule: String, false, def, "30 * * * * *".to_string();
+    /// Emergency notification reminder schedule |> Cron schedule of the job that sends expiration reminders to emergency access grantors.
+    /// Defaults to hourly. Set blank to disable this job.
+    emergency_notification_reminder_schedule: String, false, def, "0 5 * * * *".to_string();
+    /// Emergency request timeout schedule |> Cron schedule of the job that grants emergency access requests that have met the required wait time.
+    /// Defaults to hourly. Set blank to disable this job.
+    emergency_request_timeout_schedule: String, false, def, "0 5 * * * *".to_string();
},
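These schedule strings use the six-field syntax of the cron crate (seconds come first: sec min hour day-of-month month day-of-week). A small sketch for sanity-checking a schedule before deploying it, assuming the cron and chrono crates already in the dependency tree:

use cron::Schedule;
use std::str::FromStr;

fn main() {
    // Second 30 of every minute, i.e. the incomplete-2FA default.
    let schedule = Schedule::from_str("30 * * * * *").unwrap();
    for next in schedule.upcoming(chrono::Utc).take(3) {
        println!("next run: {}", next);
    }
}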
/// General settings
@@ -356,9 +389,9 @@ make_config! {
    /// HIBP Api Key |> HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
    hibp_api_key: Pass, true, option;

-    /// Per-user attachment limit (KB) |> Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more
+    /// Per-user attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per user. When this limit is reached, the user will not be allowed to upload further attachments.
    user_attachment_limit: i64, true, option;
-    /// Per-organization attachment limit (KB) |> Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
+    /// Per-organization attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per org. When this limit is reached, org members will not be allowed to upload further attachments for ciphers owned by that org.
    org_attachment_limit: i64, true, option;

    /// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item.
@@ -366,6 +399,13 @@ make_config! {
    /// sure to inform all users of any changes to this setting.
    trash_auto_delete_days: i64, true, option;

+    /// Incomplete 2FA time limit |> Number of minutes to wait before a 2FA-enabled login is
+    /// considered incomplete, resulting in an email notification. An incomplete 2FA login is one
+    /// where the correct master password was provided but the required 2FA step was not completed,
+    /// which potentially indicates a master password compromise. Set to 0 to disable this check.
+    /// This setting applies globally to all users.
+    incomplete_2fa_time_limit: i64, true, def, 3;

    /// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from
    /// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
    /// otherwise it will delete them and they won't be downloaded again.
@@ -385,12 +425,15 @@ make_config! {
|
||||
org_creation_users: String, true, def, "".to_string();
|
||||
    /// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
    invitations_allowed: bool, true, def, true;
+   /// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
+   emergency_access_allowed: bool, true, def, true;
    /// Password iterations |> Number of server-side passwords hashing iterations.
    /// The changes only apply when a user changes their password. Not recommended to lower the value
    password_iterations: i32, true, def, 100_000;
-   /// Show password hints |> Controls if the password hint should be shown directly in the web page.
-   /// Otherwise, if email is disabled, there is no way to see the password hint
-   show_password_hint: bool, true, def, true;
+   /// Show password hint |> Controls whether a password hint should be shown directly in the web page
+   /// if SMTP service is not configured. Not recommended for publicly-accessible instances as this
+   /// provides unauthenticated access to potentially sensitive data.
+   show_password_hint: bool, true, def, false;

    /// Admin page token |> The token used to authenticate in this very same page. Changing it here won't deauthorize the current session
    admin_token: Pass, true, option;
@@ -452,6 +495,9 @@ make_config! {
    /// Max database connection retries |> Number of times to retry the database connection during startup, with 1 second between each retry, set to 0 to retry indefinitely
    db_connection_retries: u32, false, def, 15;

+   /// Database connection pool size
+   database_max_conns: u32, false, def, 10;
+
    /// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in-front
    disable_admin_token: bool, true, def, false;
@@ -606,7 +652,7 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {

    // Check if the icon blacklist regex is valid
    if let Some(ref r) = cfg.icon_blacklist_regex {
-       let validate_regex = Regex::new(r);
+       let validate_regex = regex::Regex::new(r);
        match validate_regex {
            Ok(_) => (),
            Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
@@ -698,7 +744,7 @@ impl Config {
        Ok(())
    }

-   pub fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
+   fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
        let builder = {
            let usr = &self.inner.read().unwrap()._usr;
            let mut _overrides = Vec::new();
@@ -852,12 +898,23 @@ where

        reg!("email/change_email", ".html");
        reg!("email/delete_account", ".html");
+       reg!("email/emergency_access_invite_accepted", ".html");
+       reg!("email/emergency_access_invite_confirmed", ".html");
+       reg!("email/emergency_access_recovery_approved", ".html");
+       reg!("email/emergency_access_recovery_initiated", ".html");
+       reg!("email/emergency_access_recovery_rejected", ".html");
+       reg!("email/emergency_access_recovery_reminder", ".html");
+       reg!("email/emergency_access_recovery_timed_out", ".html");
+       reg!("email/incomplete_2fa_login", ".html");
        reg!("email/invite_accepted", ".html");
        reg!("email/invite_confirmed", ".html");
        reg!("email/new_device_logged_in", ".html");
        reg!("email/pw_hint_none", ".html");
        reg!("email/pw_hint_some", ".html");
+       reg!("email/send_2fa_removed_from_org", ".html");
+       reg!("email/send_single_org_removed_from_org", ".html");
        reg!("email/send_org_invite", ".html");
+       reg!("email/send_emergency_access_invite", ".html");
        reg!("email/twofactor_email", ".html");
        reg!("email/verify_email", ".html");
        reg!("email/welcome", ".html");
@@ -278,7 +278,6 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
// https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
#[cfg(sqlite)]
mod sqlite_migrations {
    #[allow(unused_imports)]
    embed_migrations!("migrations/sqlite");

    pub fn run_migrations() -> Result<(), super::Error> {
@@ -315,7 +314,6 @@ mod sqlite_migrations {

#[cfg(mysql)]
mod mysql_migrations {
    #[allow(unused_imports)]
    embed_migrations!("migrations/mysql");

    pub fn run_migrations() -> Result<(), super::Error> {
@@ -336,7 +334,6 @@ mod mysql_migrations {

#[cfg(postgresql)]
mod postgresql_migrations {
    #[allow(unused_imports)]
    embed_migrations!("migrations/postgresql");

    pub fn run_migrations() -> Result<(), super::Error> {
@@ -143,16 +143,6 @@ impl Attachment {
        }}
    }

-   pub fn find_by_ciphers(cipher_uuids: Vec<String>, conn: &DbConn) -> Vec<Self> {
-       db_run! { conn: {
-           attachments::table
-               .filter(attachments::cipher_uuid.eq_any(cipher_uuids))
-               .load::<AttachmentDb>(conn)
-               .expect("Error loading attachments")
-               .from_db()
-       }}
-   }
-
    pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
        db_run! { conn: {
            let result: Option<i64> = attachments::table
@@ -343,36 +343,39 @@ impl Cipher {
        db_run! {conn: {
            // Check whether this cipher is in any collections accessible to the
            // user. If so, retrieve the access flags for each collection.
-           let query = ciphers::table
+           let rows = ciphers::table
                .filter(ciphers::uuid.eq(&self.uuid))
                .inner_join(ciphers_collections::table.on(
                    ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
                .inner_join(users_collections::table.on(
                    ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
                        .and(users_collections::user_uuid.eq(user_uuid))))
-               .select((users_collections::read_only, users_collections::hide_passwords));
+               .select((users_collections::read_only, users_collections::hide_passwords))
+               .load::<(bool, bool)>(conn)
+               .expect("Error getting access restrictions");

-           // There's an edge case where a cipher can be in multiple collections
-           // with inconsistent access flags. For example, a cipher could be in
-           // one collection where the user has read-only access, but also in
-           // another collection where the user has read/write access. To handle
-           // this, we do a boolean OR of all values in each of the `read_only`
-           // and `hide_passwords` columns. This could ideally be done as part
-           // of the query, but Diesel doesn't support a max() or bool_or()
-           // function on booleans and this behavior isn't portable anyway.
-           if let Ok(vec) = query.load::<(bool, bool)>(conn) {
-               let mut read_only = false;
-               let mut hide_passwords = false;
-               for (ro, hp) in vec.iter() {
-                   read_only |= ro;
-                   hide_passwords |= hp;
-               }
-
-               Some((read_only, hide_passwords))
-           } else {
+           if rows.is_empty() {
                // This cipher isn't in any collections accessible to the user.
-               None
+               return None;
            }

+           // A cipher can be in multiple collections with inconsistent access flags.
+           // For example, a cipher could be in one collection where the user has
+           // read-only access, but also in another collection where the user has
+           // read/write access. For a flag to be in effect for a cipher, upstream
+           // requires all collections the cipher is in to have that flag set.
+           // Therefore, we do a boolean AND of all values in each of the `read_only`
+           // and `hide_passwords` columns. This could ideally be done as part of the
+           // query, but Diesel doesn't support a min() or bool_and() function on
+           // booleans and this behavior isn't portable anyway.
+           let mut read_only = true;
+           let mut hide_passwords = true;
+           for (ro, hp) in rows.iter() {
+               read_only &= ro;
+               hide_passwords &= hp;
+           }
+
+           Some((read_only, hide_passwords))
        }}
    }
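Since the loop above is just a boolean AND across all rows, it could equally be written as a single iterator fold; a minimal equivalent sketch (not the code in this diff):

    let (read_only, hide_passwords) = rows
        .iter()
        .fold((true, true), |(ro, hp), &(r, h)| (ro && r, hp && h));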
@@ -17,8 +17,7 @@ db_object! {
        pub user_uuid: String,

        pub name: String,
-       // https://github.com/bitwarden/core/tree/master/src/Core/Enums
-       pub atype: i32,
+       pub atype: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
        pub push_token: Option<String>,

        pub refresh_token: String,
src/db/models/emergency_access.rs (new file, 282 lines)
@@ -0,0 +1,282 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;

use super::User;

db_object! {
    #[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
    #[table_name = "emergency_access"]
    #[changeset_options(treat_none_as_null="true")]
    #[belongs_to(User, foreign_key = "grantor_uuid")]
    #[primary_key(uuid)]
    pub struct EmergencyAccess {
        pub uuid: String,
        pub grantor_uuid: String,
        pub grantee_uuid: Option<String>,
        pub email: Option<String>,
        pub key_encrypted: Option<String>,
        pub atype: i32, // EmergencyAccessType
        pub status: i32, // EmergencyAccessStatus
        pub wait_time_days: i32,
        pub recovery_initiated_at: Option<NaiveDateTime>,
        pub last_notification_at: Option<NaiveDateTime>,
        pub updated_at: NaiveDateTime,
        pub created_at: NaiveDateTime,
    }
}

/// Local methods

impl EmergencyAccess {
    pub fn new(grantor_uuid: String, email: Option<String>, status: i32, atype: i32, wait_time_days: i32) -> Self {
        let now = Utc::now().naive_utc();

        Self {
            uuid: crate::util::get_uuid(),
            grantor_uuid,
            grantee_uuid: None,
            email,
            status,
            atype,
            wait_time_days,
            recovery_initiated_at: None,
            created_at: now,
            updated_at: now,
            key_encrypted: None,
            last_notification_at: None,
        }
    }

    pub fn get_type_as_str(&self) -> &'static str {
        if self.atype == EmergencyAccessType::View as i32 {
            "View"
        } else {
            "Takeover"
        }
    }

    pub fn has_type(&self, access_type: EmergencyAccessType) -> bool {
        self.atype == access_type as i32
    }

    pub fn has_status(&self, status: EmergencyAccessStatus) -> bool {
        self.status == status as i32
    }

    pub fn to_json(&self) -> Value {
        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "Object": "emergencyAccess",
        })
    }

    pub fn to_json_grantor_details(&self, conn: &DbConn) -> Value {
        let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).expect("Grantor user not found.");

        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "GrantorId": grantor_user.uuid,
            "Email": grantor_user.email,
            "Name": grantor_user.name,
            "Object": "emergencyAccessGrantorDetails",
        })
    }

    #[allow(clippy::manual_map)]
    pub fn to_json_grantee_details(&self, conn: &DbConn) -> Value {
        let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
            Some(User::find_by_uuid(grantee_uuid, conn).expect("Grantee user not found."))
        } else if let Some(email) = self.email.as_deref() {
            Some(User::find_by_mail(email, conn).expect("Grantee user not found."))
        } else {
            None
        };

        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
            "Email": grantee_user.as_ref().map_or("", |u| &u.email),
            "Name": grantee_user.as_ref().map_or("", |u| &u.name),
            "Object": "emergencyAccessGranteeDetails",
        })
    }
}

#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
pub enum EmergencyAccessType {
    View = 0,
    Takeover = 1,
}

impl EmergencyAccessType {
    pub fn from_str(s: &str) -> Option<Self> {
        match s {
            "0" | "View" => Some(EmergencyAccessType::View),
            "1" | "Takeover" => Some(EmergencyAccessType::Takeover),
            _ => None,
        }
    }
}

impl PartialEq<i32> for EmergencyAccessType {
    fn eq(&self, other: &i32) -> bool {
        *other == *self as i32
    }
}

impl PartialEq<EmergencyAccessType> for i32 {
    fn eq(&self, other: &EmergencyAccessType) -> bool {
        *self == *other as i32
    }
}

pub enum EmergencyAccessStatus {
    Invited = 0,
    Accepted = 1,
    Confirmed = 2,
    RecoveryInitiated = 3,
    RecoveryApproved = 4,
}

// region Database methods

use crate::db::DbConn;

use crate::api::EmptyResult;
use crate::error::MapResult;

impl EmergencyAccess {
    pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
        User::update_uuid_revision(&self.grantor_uuid, conn);
        self.updated_at = Utc::now().naive_utc();

        db_run! { conn:
            sqlite, mysql {
                match diesel::replace_into(emergency_access::table)
                    .values(EmergencyAccessDb::to_db(self))
                    .execute(conn)
                {
                    Ok(_) => Ok(()),
                    // Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
                    Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
                        diesel::update(emergency_access::table)
                            .filter(emergency_access::uuid.eq(&self.uuid))
                            .set(EmergencyAccessDb::to_db(self))
                            .execute(conn)
                            .map_res("Error updating emergency access")
                    }
                    Err(e) => Err(e.into()),
                }.map_res("Error saving emergency access")
            }
            postgresql {
                let value = EmergencyAccessDb::to_db(self);
                diesel::insert_into(emergency_access::table)
                    .values(&value)
                    .on_conflict(emergency_access::uuid)
                    .do_update()
                    .set(&value)
                    .execute(conn)
                    .map_res("Error saving emergency access")
            }
        }
    }

    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
        for ea in Self::find_all_by_grantor_uuid(user_uuid, conn) {
            ea.delete(conn)?;
        }
        for ea in Self::find_all_by_grantee_uuid(user_uuid, conn) {
            ea.delete(conn)?;
        }
        Ok(())
    }

    pub fn delete(self, conn: &DbConn) -> EmptyResult {
        User::update_uuid_revision(&self.grantor_uuid, conn);

        db_run! { conn: {
            diesel::delete(emergency_access::table.filter(emergency_access::uuid.eq(self.uuid)))
                .execute(conn)
                .map_res("Error removing user from emergency access")
        }}
    }

    pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::uuid.eq(uuid))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_by_grantor_uuid_and_grantee_uuid_or_email(
        grantor_uuid: &str,
        grantee_uuid: &str,
        email: &str,
        conn: &DbConn,
    ) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .filter(emergency_access::grantee_uuid.eq(grantee_uuid).or(emergency_access::email.eq(email)))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_recoveries(conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::status.eq(EmergencyAccessStatus::RecoveryInitiated as i32))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }

    pub fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::uuid.eq(uuid))
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantee_uuid.eq(grantee_uuid))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }

    pub fn find_invited_by_grantee_email(grantee_email: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::email.eq(grantee_email))
                .filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }
}

// endregion
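A small usage sketch of the enum helpers above (illustrative values only):

    // from_str() accepts both the numeric and the named form:
    assert!(matches!(EmergencyAccessType::from_str("Takeover"), Some(EmergencyAccessType::Takeover)));
    // The PartialEq<i32> impls let the stored `atype` integer be compared directly:
    assert!(EmergencyAccessType::View == 0i32);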
@@ -2,22 +2,26 @@ mod attachment;
mod cipher;
mod collection;
mod device;
+mod emergency_access;
mod favorite;
mod folder;
mod org_policy;
mod organization;
mod send;
mod two_factor;
+mod two_factor_incomplete;
mod user;

pub use self::attachment::Attachment;
pub use self::cipher::Cipher;
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::Device;
+pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher};
pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::send::{Send, SendType};
pub use self::two_factor::{TwoFactor, TwoFactorType};
+pub use self::two_factor_incomplete::TwoFactorIncomplete;
pub use self::user::{Invitation, User, UserStampException};
@@ -22,12 +22,12 @@ db_object! {
    }
}

-#[derive(Copy, Clone, num_derive::FromPrimitive)]
+#[derive(Copy, Clone, PartialEq, num_derive::FromPrimitive)]
pub enum OrgPolicyType {
    TwoFactorAuthentication = 0,
    MasterPassword = 1,
    PasswordGenerator = 2,
-   // SingleOrg = 3, // Not currently supported.
+   SingleOrg = 3,
    // RequireSso = 4, // Not currently supported.
    PersonalOwnership = 5,
    DisableSend = 6,
@@ -143,7 +143,7 @@ impl OrgPolicy {
        }}
    }

-   pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+   pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            org_policies::table
                .inner_join(
@@ -184,8 +184,8 @@ impl OrgPolicy {
    /// and the user is not an owner or admin of that org. This is only useful for checking
    /// applicability of policy types that have these particular semantics.
    pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
-       // Returns confirmed users only.
-       for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+       // TODO: Should check confirmed and accepted users
+       for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
            if policy.enabled && policy.has_type(policy_type) {
                let org_uuid = &policy.org_uuid;
                if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
@@ -201,8 +201,7 @@ impl OrgPolicy {
    /// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
    /// option of the `Send Options` policy, and the user is not an owner or admin of that org.
    pub fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
-       // Returns confirmed users only.
-       for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+       for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
            if policy.enabled && policy.has_type(OrgPolicyType::SendOptions) {
                let org_uuid = &policy.org_uuid;
                if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
@@ -2,7 +2,7 @@ use num_traits::FromPrimitive;
use serde_json::Value;
use std::cmp::Ordering;

-use super::{CollectionUser, OrgPolicy, User};
+use super::{CollectionUser, OrgPolicy, OrgPolicyType, User};

db_object! {
    #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -12,6 +12,8 @@ db_object! {
        pub uuid: String,
        pub name: String,
        pub billing_email: String,
+       pub private_key: Option<String>,
+       pub public_key: Option<String>,
    }

    #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -122,12 +124,13 @@ impl PartialOrd<UserOrgType> for i32 {

/// Local methods
impl Organization {
-   pub fn new(name: String, billing_email: String) -> Self {
+   pub fn new(name: String, billing_email: String, private_key: Option<String>, public_key: Option<String>) -> Self {
        Self {
            uuid: crate::util::get_uuid(),

            name,
            billing_email,
+           private_key,
+           public_key,
        }
    }

@@ -140,14 +143,16 @@ impl Organization {
            "MaxCollections": 10, // The value doesn't matter, we don't check server-side
            "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
            "Use2fa": true,
-           "UseDirectory": false,
-           "UseEvents": false,
-           "UseGroups": false,
+           "UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
+           "UseEvents": false, // not supported by us
+           "UseGroups": false, // not supported by us
            "UseTotp": true,
            "UsePolicies": true,
+           "UseSso": false, // We do not support SSO
            "SelfHost": true,
            "UseApi": false, // not supported by us
+           "HasPublicAndPrivateKeys": self.private_key.is_some() && self.public_key.is_some(),
            "ResetPasswordEnrolled": false, // not supported by us

            "BusinessName": null,
            "BusinessAddress1": null,
@@ -269,13 +274,15 @@ impl UserOrganization {
            "UsersGetPremium": true,

            "Use2fa": true,
-           "UseDirectory": false,
-           "UseEvents": false,
-           "UseGroups": false,
+           "UseDirectory": false, // Is supported, but this value isn't checked anywhere (yet)
+           "UseEvents": false, // not supported by us
+           "UseGroups": false, // not supported by us
            "UseTotp": true,
            "UsePolicies": true,
            "UseApi": false, // not supported by us
            "SelfHost": true,
+           "HasPublicAndPrivateKeys": org.private_key.is_some() && org.public_key.is_some(),
            "ResetPasswordEnrolled": false, // not supported by us
            "SsoBound": false, // We do not support SSO
            "UseSso": false, // We do not support SSO
            // TODO: Add support for Business Portal
@@ -283,6 +290,8 @@ impl UserOrganization {
            // For now they still have that code also in the web-vault, but they will remove it at some point.
            // https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
            "UseBusinessPortal": false, // Disable BusinessPortal Button
+           "ProviderId": null,
+           "ProviderName": null,

            // TODO: Add support for Custom User Roles
            // See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
@@ -293,10 +302,12 @@ impl UserOrganization {
            //     "AccessReports": false,
            //     "ManageAllCollections": false,
            //     "ManageAssignedCollections": false,
            //     "ManageCiphers": false,
            //     "ManageGroups": false,
            //     "ManagePolicies": false,
-           //     "ManageUsers": false
+           //     "ManageResetPassword": false,
+           //     "ManageSso": false,
+           //     "ManageUsers": false,
            // },

            "MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@@ -466,7 +477,7 @@ impl UserOrganization {
        }}
    }

-   pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+   pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            users_organizations::table
                .filter(users_organizations::user_uuid.eq(user_uuid))
@@ -535,6 +546,25 @@ impl UserOrganization {
        }}
    }

+   pub fn find_by_user_and_policy(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> Vec<Self> {
+       db_run! { conn: {
+           users_organizations::table
+               .inner_join(
+                   org_policies::table.on(
+                       org_policies::org_uuid.eq(users_organizations::org_uuid)
+                           .and(users_organizations::user_uuid.eq(user_uuid))
+                           .and(org_policies::atype.eq(policy_type as i32))
+                           .and(org_policies::enabled.eq(true)))
+               )
+               .filter(
+                   users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+               )
+               .select(users_organizations::all_columns)
+               .load::<UserOrganizationDb>(conn)
+               .unwrap_or_default().from_db()
+       }}
+   }
+
    pub fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            users_organizations::table
@@ -232,15 +232,18 @@ impl Send {
        }
    }

-   pub fn update_users_revision(&self, conn: &DbConn) {
+   pub fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
+       let mut user_uuids = Vec::new();
        match &self.user_uuid {
            Some(user_uuid) => {
                User::update_uuid_revision(user_uuid, conn);
+               user_uuids.push(user_uuid.clone())
            }
            None => {
                // Belongs to Organization, not implemented
            }
-       }
+       };
+       user_uuids
    }

    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
@@ -1,8 +1,6 @@
use serde_json::Value;

-use crate::api::EmptyResult;
-use crate::db::DbConn;
-use crate::error::MapResult;
+use crate::{api::EmptyResult, db::DbConn, error::MapResult};

use super::User;

@@ -161,7 +159,6 @@ impl TwoFactor {

    use crate::api::core::two_factor::u2f::U2FRegistration;
    use crate::api::core::two_factor::webauthn::{get_webauthn_registrations, WebauthnRegistration};
-   use std::convert::TryInto;
    use webauthn_rs::proto::*;

    for mut u2f in u2f_factors {
src/db/models/two_factor_incomplete.rs (new file, 108 lines)
@@ -0,0 +1,108 @@
use chrono::{NaiveDateTime, Utc};

use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONFIG};

use super::User;

db_object! {
    #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
    #[table_name = "twofactor_incomplete"]
    #[belongs_to(User, foreign_key = "user_uuid")]
    #[primary_key(user_uuid, device_uuid)]
    pub struct TwoFactorIncomplete {
        pub user_uuid: String,
        // This device UUID is simply what's claimed by the device. It doesn't
        // necessarily correspond to any UUID in the devices table, since a device
        // must complete 2FA login before being added into the devices table.
        pub device_uuid: String,
        pub device_name: String,
        pub login_time: NaiveDateTime,
        pub ip_address: String,
    }
}

impl TwoFactorIncomplete {
    pub fn mark_incomplete(
        user_uuid: &str,
        device_uuid: &str,
        device_name: &str,
        ip: &ClientIp,
        conn: &DbConn,
    ) -> EmptyResult {
        if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
            return Ok(());
        }

        // Don't update the data for an existing user/device pair, since that
        // would allow an attacker to arbitrarily delay notifications by
        // sending repeated 2FA attempts to reset the timer.
        let existing = Self::find_by_user_and_device(user_uuid, device_uuid, conn);
        if existing.is_some() {
            return Ok(());
        }

        db_run! { conn: {
            diesel::insert_into(twofactor_incomplete::table)
                .values((
                    twofactor_incomplete::user_uuid.eq(user_uuid),
                    twofactor_incomplete::device_uuid.eq(device_uuid),
                    twofactor_incomplete::device_name.eq(device_name),
                    twofactor_incomplete::login_time.eq(Utc::now().naive_utc()),
                    twofactor_incomplete::ip_address.eq(ip.ip.to_string()),
                ))
                .execute(conn)
                .map_res("Error adding twofactor_incomplete record")
        }}
    }

    pub fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
        if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
            return Ok(());
        }

        Self::delete_by_user_and_device(user_uuid, device_uuid, conn)
    }

    pub fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            twofactor_incomplete::table
                .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
                .filter(twofactor_incomplete::device_uuid.eq(device_uuid))
                .first::<TwoFactorIncompleteDb>(conn)
                .ok()
                .from_db()
        }}
    }

    pub fn find_logins_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
        db_run! {conn: {
            twofactor_incomplete::table
                .filter(twofactor_incomplete::login_time.lt(dt))
                .load::<TwoFactorIncompleteDb>(conn)
                .expect("Error loading twofactor_incomplete")
                .from_db()
        }}
    }

    pub fn delete(self, conn: &DbConn) -> EmptyResult {
        Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn)
    }

    pub fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
        db_run! { conn: {
            diesel::delete(twofactor_incomplete::table
                .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
                .filter(twofactor_incomplete::device_uuid.eq(device_uuid)))
                .execute(conn)
                .map_res("Error in twofactor_incomplete::delete_by_user_and_device()")
        }}
    }

    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
        db_run! { conn: {
            diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
                .execute(conn)
                .map_res("Error in twofactor_incomplete::delete_all_by_user()")
        }}
    }
}
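A sketch of how the scheduled notification job can consume find_logins_before(); the actual job lives in the api module, and the loop body here is only illustrative:

    // Find logins that began more than incomplete_2fa_time_limit() minutes
    // ago and still haven't completed their 2FA step.
    let time_limit = chrono::Duration::minutes(CONFIG.incomplete_2fa_time_limit());
    let cutoff = chrono::Utc::now().naive_utc() - time_limit;
    for login in TwoFactorIncomplete::find_logins_before(&cutoff, &conn) {
        // mail::send_incomplete_2fa_login(...) would be sent here, then the
        // record deleted so each incomplete login is reported only once:
        login.delete(&conn)?;
    }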
@@ -1,4 +1,4 @@
-use chrono::{NaiveDateTime, Utc};
+use chrono::{Duration, NaiveDateTime, Utc};
use serde_json::Value;

use crate::crypto;
@@ -63,8 +63,9 @@ enum UserStatus {

#[derive(Serialize, Deserialize)]
pub struct UserStampException {
-   pub route: String,
+   pub routes: Vec<String>,
    pub security_stamp: String,
+   pub expire: i64,
}

/// Local methods
@@ -72,9 +73,9 @@ impl User {
    pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
    pub const CLIENT_KDF_ITER_DEFAULT: i32 = 100_000;

-   pub fn new(mail: String) -> Self {
+   pub fn new(email: String) -> Self {
        let now = Utc::now().naive_utc();
-       let email = mail.to_lowercase();
+       let email = email.to_lowercase();

        Self {
            uuid: crate::util::get_uuid(),
@@ -135,9 +136,11 @@ impl User {
    /// # Arguments
    ///
    /// * `password` - A str which contains a hashed version of the users master password.
-   /// * `allow_next_route` - A Option<&str> with the function name of the next allowed (rocket) route.
+   /// * `allow_next_route` - A Option<Vec<String>> with the function names of the next allowed (rocket) routes.
+   ///   These routes are able to use the previous stamp id for the next 2 minutes.
+   ///   After these 2 minutes this stamp will expire.
    ///
-   pub fn set_password(&mut self, password: &str, allow_next_route: Option<&str>) {
+   pub fn set_password(&mut self, password: &str, allow_next_route: Option<Vec<String>>) {
        self.password_hash = crypto::hash_password(password.as_bytes(), &self.salt, self.password_iterations as u32);

        if let Some(route) = allow_next_route {
@@ -154,30 +157,29 @@ impl User {
    /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
    ///
    /// # Arguments
-   /// * `route_exception` - A str with the function name of the next allowed (rocket) route.
+   /// * `route_exception` - A Vec<String> with the function names of the next allowed (rocket) routes.
+   ///   These routes are able to use the previous stamp id for the next 2 minutes.
+   ///   After these 2 minutes this stamp will expire.
    ///
-   /// ### Future
-   /// In the future it could be possible that we need more of these exception routes.
-   /// In that case we could use a Vec<UserStampException> and add multiple exceptions.
-   pub fn set_stamp_exception(&mut self, route_exception: &str) {
+   pub fn set_stamp_exception(&mut self, route_exception: Vec<String>) {
        let stamp_exception = UserStampException {
-           route: route_exception.to_string(),
+           routes: route_exception,
            security_stamp: self.security_stamp.to_string(),
+           expire: (Utc::now().naive_utc() + Duration::minutes(2)).timestamp(),
        };
        self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
    }

    /// Resets the stamp_exception to prevent re-use of the previous security-stamp
    ///
    /// ### Future
    /// In the future it could be possible that we need more of these exception routes.
    /// In that case we could use a Vec<UserStampException> and add multiple exceptions.
    pub fn reset_stamp_exception(&mut self) {
        self.stamp_exception = None;
    }
}

-use super::{Cipher, Device, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
+use super::{
+    Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, TwoFactorIncomplete, UserOrgType,
+    UserOrganization,
+};
use crate::db::DbConn;

use crate::api::EmptyResult;
@@ -186,7 +188,7 @@ use crate::error::MapResult;
/// Database methods
impl User {
    pub fn to_json(&self, conn: &DbConn) -> Value {
-       let orgs = UserOrganization::find_by_user(&self.uuid, conn);
+       let orgs = UserOrganization::find_confirmed_by_user(&self.uuid, conn);
        let orgs_json: Vec<Value> = orgs.iter().map(|c| c.to_json(conn)).collect();
        let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).is_empty();

@@ -211,7 +213,10 @@ impl User {
            "PrivateKey": self.private_key,
            "SecurityStamp": self.security_stamp,
            "Organizations": orgs_json,
-           "Object": "profile"
+           "Providers": [],
+           "ProviderOrganizations": [],
+           "ForcePasswordReset": false,
+           "Object": "profile",
        })
    }

@@ -254,7 +259,7 @@ impl User {
    }

    pub fn delete(self, conn: &DbConn) -> EmptyResult {
-       for user_org in UserOrganization::find_by_user(&self.uuid, conn) {
+       for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn) {
            if user_org.atype == UserOrgType::Owner {
                let owner_type = UserOrgType::Owner as i32;
                if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).len() <= 1 {
@@ -264,12 +269,14 @@ impl User {
        }

        Send::delete_all_by_user(&self.uuid, conn)?;
+       EmergencyAccess::delete_all_by_user(&self.uuid, conn)?;
        UserOrganization::delete_all_by_user(&self.uuid, conn)?;
        Cipher::delete_all_by_user(&self.uuid, conn)?;
        Favorite::delete_all_by_user(&self.uuid, conn)?;
        Folder::delete_all_by_user(&self.uuid, conn)?;
        Device::delete_all_by_user(&self.uuid, conn)?;
        TwoFactor::delete_all_by_user(&self.uuid, conn)?;
+       TwoFactorIncomplete::delete_all_by_user(&self.uuid, conn)?;
        Invitation::take(&self.email, conn); // Delete invitation if any

        db_run! {conn: {
@@ -347,7 +354,8 @@ impl User {
    }

impl Invitation {
-   pub const fn new(email: String) -> Self {
+   pub fn new(email: String) -> Self {
+       let email = email.to_lowercase();
        Self {
            email,
        }
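For reference, the stamp_exception stored on the user row now serializes the route list and an expiry timestamp; an illustrative value (hypothetical route name, truncated stamp):

    // {"routes":["post_rotatekey"],"security_stamp":"3f0f…","expire":1633024800}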
@@ -100,6 +100,8 @@ table! {
        uuid -> Text,
        name -> Text,
        billing_email -> Text,
+       private_key -> Nullable<Text>,
+       public_key -> Nullable<Text>,
    }
}

@@ -138,6 +140,16 @@ table! {
    }
}

+table! {
+    twofactor_incomplete (user_uuid, device_uuid) {
+        user_uuid -> Text,
+        device_uuid -> Text,
+        device_name -> Text,
+        login_time -> Timestamp,
+        ip_address -> Text,
+    }
+}
+
table! {
    users (uuid) {
        uuid -> Text,

@@ -190,6 +202,23 @@ table! {
    }
}

+table! {
+    emergency_access (uuid) {
+        uuid -> Text,
+        grantor_uuid -> Text,
+        grantee_uuid -> Nullable<Text>,
+        email -> Nullable<Text>,
+        key_encrypted -> Nullable<Text>,
+        atype -> Integer,
+        status -> Integer,
+        wait_time_days -> Integer,
+        recovery_initiated_at -> Nullable<Timestamp>,
+        last_notification_at -> Nullable<Timestamp>,
+        updated_at -> Timestamp,
+        created_at -> Timestamp,
+    }
+}
+
joinable!(attachments -> ciphers (cipher_uuid));
joinable!(ciphers -> organizations (organization_uuid));
joinable!(ciphers -> users (user_uuid));

@@ -208,6 +237,7 @@ joinable!(users_collections -> collections (collection_uuid));
joinable!(users_collections -> users (user_uuid));
joinable!(users_organizations -> organizations (org_uuid));
joinable!(users_organizations -> users (user_uuid));
+joinable!(emergency_access -> users (grantor_uuid));

allow_tables_to_appear_in_same_query!(
    attachments,

@@ -225,4 +255,5 @@ allow_tables_to_appear_in_same_query!(
    users,
    users_collections,
    users_organizations,
+   emergency_access,
);
Identical table!, joinable!, and allow_tables_to_appear_in_same_query! changes are applied to the schema files for the other two database backends.
src/error.rs (16 lines changed)
@@ -73,7 +73,7 @@ make_error! {
    Serde(SerdeErr): _has_source, _api_error,
    JWt(JwtErr): _has_source, _api_error,
    Handlebars(HbErr): _has_source, _api_error,
    //WsError(ws::Error): _has_source, _api_error,

    Io(IoErr): _has_source, _api_error,
    Time(TimeErr): _has_source, _api_error,
    Req(ReqErr): _has_source, _api_error,
@@ -166,7 +166,7 @@ fn _serialize(e: &impl serde::Serialize, _msg: &str) -> String {

fn _api_error(_: &impl std::any::Any, msg: &str) -> String {
    let json = json!({
-       "Message": "",
+       "Message": msg,
        "error": "",
        "error_description": "",
        "ValidationErrors": {"": [ msg ]},
@@ -174,6 +174,9 @@ fn _api_error(_: &impl std::any::Any, msg: &str) -> String {
            "Message": msg,
            "Object": "error"
        },
+       "ExceptionMessage": null,
+       "ExceptionStackTrace": null,
+       "InnerExceptionMessage": null,
        "Object": "error"
    });
    _serialize(&json, "")
@@ -217,6 +220,15 @@ macro_rules! err {
    }};
}

+macro_rules! err_silent {
+    ($msg:expr) => {{
+        return Err(crate::error::Error::new($msg, $msg));
+    }};
+    ($usr_msg:expr, $log_value:expr) => {{
+        return Err(crate::error::Error::new($usr_msg, $log_value));
+    }};
+}
+
#[macro_export]
macro_rules! err_code {
    ($msg:expr, $err_code: expr) => {{
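The new err_silent! expands like err!, and its name suggests it is meant for expected failures that shouldn't produce log noise; a hedged usage sketch with a hypothetical caller:

    fn check_password(valid: bool) -> Result<(), crate::error::Error> {
        if !valid {
            // Same early-return shape as err!(), without the extra handling
            err_silent!("Username or password is incorrect. Try again")
        }
        Ok(())
    }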
src/mail.rs (205 lines changed)
@@ -1,6 +1,6 @@
use std::str::FromStr;

-use chrono::{DateTime, Local};
+use chrono::NaiveDateTime;
use percent_encoding::{percent_encode, NON_ALPHANUMERIC};

use lettre::{
@@ -13,7 +13,10 @@ use lettre::{

use crate::{
    api::EmptyResult,
-   auth::{encode_jwt, generate_delete_claims, generate_invite_claims, generate_verify_email_claims},
+   auth::{
+       encode_jwt, generate_delete_claims, generate_emergency_access_invite_claims, generate_invite_claims,
+       generate_verify_email_claims,
+   },
    error::Error,
    CONFIG,
};
@@ -180,6 +183,30 @@ pub fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
    send_email(address, &subject, body_html, body_text)
}

+pub fn send_2fa_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/send_2fa_removed_from_org",
+        json!({
+            "url": CONFIG.domain(),
+            "org_name": org_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_single_org_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/send_single_org_removed_from_org",
+        json!({
+            "url": CONFIG.domain(),
+            "org_name": org_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
pub fn send_invite(
    address: &str,
    uuid: &str,
@@ -212,6 +239,136 @@ pub fn send_invite(
    send_email(address, &subject, body_html, body_text)
}

+pub fn send_emergency_access_invite(
+    address: &str,
+    uuid: &str,
+    emer_id: Option<String>,
+    grantor_name: Option<String>,
+    grantor_email: Option<String>,
+) -> EmptyResult {
+    let claims = generate_emergency_access_invite_claims(
+        uuid.to_string(),
+        String::from(address),
+        emer_id.clone(),
+        grantor_name.clone(),
+        grantor_email,
+    );
+
+    let invite_token = encode_jwt(&claims);
+
+    let (subject, body_html, body_text) = get_text(
+        "email/send_emergency_access_invite",
+        json!({
+            "url": CONFIG.domain(),
+            "emer_id": emer_id.unwrap_or_else(|| "_".to_string()),
+            "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
+            "grantor_name": grantor_name,
+            "token": invite_token,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_invite_accepted(address: &str, grantee_email: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_accepted",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_email": grantee_email,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_invite_confirmed(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_confirmed",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_approved(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_approved",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_initiated(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    wait_time_days: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_initiated",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "wait_time_days": wait_time_days,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_reminder(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    days_left: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_reminder",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "days_left": days_left,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_rejected(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_rejected",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_timed_out(address: &str, grantee_name: &str, atype: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_timed_out",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
    let (subject, body_html, body_text) = get_text(
        "email/invite_accepted",
@@ -237,7 +394,7 @@ pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
    send_email(address, &subject, body_html, body_text)
}

-pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>, device: &str) -> EmptyResult {
+pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
    use crate::util::upcase_first;
    let device = upcase_first(device);

@@ -248,7 +405,26 @@ pub fn send_new_device_logged_in(
            "url": CONFIG.domain(),
            "ip": ip,
            "device": device,
-           "datetime": crate::util::format_datetime_local(dt, fmt),
+           "datetime": crate::util::format_naive_datetime_local(dt, fmt),
        }),
    )?;

    send_email(address, &subject, body_html, body_text)
}

+pub fn send_incomplete_2fa_login(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
+    use crate::util::upcase_first;
+    let device = upcase_first(device);
+
+    let fmt = "%A, %B %_d, %Y at %r %Z";
+    let (subject, body_html, body_text) = get_text(
+        "email/incomplete_2fa_login",
+        json!({
+            "url": CONFIG.domain(),
+            "ip": ip,
+            "device": device,
+            "datetime": crate::util::format_naive_datetime_local(dt, fmt),
+            "time_limit": CONFIG.incomplete_2fa_time_limit(),
+        }),
+    )?;
+
@@ -328,15 +504,28 @@ fn send_email(address: &str, subject: &str, body_html: String, body_text: String
        // Match some common errors and make them more user friendly
        Err(e) => {
            if e.is_client() {
+               debug!("SMTP Client error: {:#?}", e);
                err!(format!("SMTP Client error: {}", e));
            } else if e.is_transient() {
-               err!(format!("SMTP 4xx error: {:?}", e));
+               debug!("SMTP 4xx error: {:#?}", e);
+               err!(format!("SMTP 4xx error: {}", e));
            } else if e.is_permanent() {
-               err!(format!("SMTP 5xx error: {:?}", e));
+               debug!("SMTP 5xx error: {:#?}", e);
+               let mut msg = e.to_string();
+               // Add a special check for 535 to add a more descriptive message
+               if msg.contains("(535)") {
+                   msg = format!("{} - Authentication credentials invalid", msg);
+               }
+               err!(format!("SMTP 5xx error: {}", msg));
            } else if e.is_timeout() {
-               err!(format!("SMTP timeout error: {:?}", e));
+               debug!("SMTP timeout error: {:#?}", e);
+               err!(format!("SMTP timeout error: {}", e));
            } else if e.is_tls() {
+               debug!("SMTP Encryption error: {:#?}", e);
+               err!(format!("SMTP Encryption error: {}", e));
            } else {
-               Err(e.into())
+               debug!("SMTP {:#?}", e);
+               err!(format!("SMTP {}", e));
            }
        }
    }
src/main.rs (45 lines changed)
@@ -1,6 +1,10 @@
|
||||
#![forbid(unsafe_code)]
|
||||
#![cfg_attr(feature = "unstable", feature(ip))]
|
||||
#![recursion_limit = "512"]
|
||||
// The recursion_limit is mainly triggered by the json!() macro.
|
||||
// The more key/value pairs there are the more recursion occurs.
|
||||
// We want to keep this as low as possible, but not higher then 128.
|
||||
// If you go above 128 it will cause rust-analyzer to fail,
|
||||
#![recursion_limit = "87"]
|
||||
|
||||
extern crate openssl;
|
||||
#[macro_use]
|
||||
@@ -104,6 +108,14 @@ fn launch_info() {
|
||||
}
|
||||
|
||||
fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
|
||||
// Depending on the main log level we either want to disable or enable logging for trust-dns.
|
||||
// Else if there are timeouts it will clutter the logs since trust-dns uses warn for this.
|
||||
let trust_dns_level = if level >= log::LevelFilter::Debug {
|
||||
level
|
||||
} else {
|
||||
log::LevelFilter::Off
|
||||
};
|
||||
|
||||
let mut logger = fern::Dispatch::new()
|
||||
.level(level)
|
||||
// Hide unknown certificate errors if using self-signed
|
||||
@@ -122,6 +134,8 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
|
||||
.level_for("hyper::client", log::LevelFilter::Off)
|
||||
// Prevent cookie_store logs
|
||||
.level_for("cookie_store", log::LevelFilter::Off)
|
||||
// Variable level for trust-dns used by reqwest
|
||||
.level_for("trust_dns_proto", trust_dns_level)
|
||||
.chain(std::io::stdout());
|
||||
|
||||
// Enable smtp debug logging only specifically for smtp when need.
|
||||
@@ -345,11 +359,40 @@ fn schedule_jobs(pool: db::DbPool) {
        }));
    }

    // Send email notifications about incomplete 2FA logins, which potentially
    // indicate that a user's master password has been compromised.
    if !CONFIG.incomplete_2fa_schedule().is_empty() {
        sched.add(Job::new(CONFIG.incomplete_2fa_schedule().parse().unwrap(), || {
            api::send_incomplete_2fa_notifications(pool.clone());
        }));
    }

    // Grant emergency access requests that have met the required wait time.
    // This job should run before the emergency access reminders job to avoid
    // sending reminders for requests that are about to be granted anyway.
    if !CONFIG.emergency_request_timeout_schedule().is_empty() {
        sched.add(Job::new(CONFIG.emergency_request_timeout_schedule().parse().unwrap(), || {
            api::emergency_request_timeout_job(pool.clone());
        }));
    }

    // Send reminders to emergency access grantors that there are pending
    // emergency access requests.
    if !CONFIG.emergency_notification_reminder_schedule().is_empty() {
        sched.add(Job::new(CONFIG.emergency_notification_reminder_schedule().parse().unwrap(), || {
            api::emergency_notification_reminder_job(pool.clone());
        }));
    }

    // Periodically check for jobs to run. We probably won't need any
    // jobs that run more often than once a minute, so a default poll
    // interval of 30 seconds should be sufficient. Users who want to
    // schedule jobs to run more frequently for some reason can reduce
    // the poll interval accordingly.
    //
    // Note that the scheduler checks jobs in the order in which they
    // were added, so if two jobs are both eligible to run at a given
    // tick, the one that was added earlier will run first.
    loop {
        sched.tick();
        thread::sleep(Duration::from_millis(CONFIG.job_poll_interval_ms()));
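For reference, a minimal sketch (assuming the cron and chrono crates, the former being what these schedule strings are parsed with) of how a schedule such as the incomplete-2FA default "30 * * * * *" resolves to concrete fire times:

// Standalone illustration; the jobs above obtain the same kind of
// Schedule via CONFIG.<job>_schedule().parse().
use std::str::FromStr;

use chrono::Utc;
use cron::Schedule;

fn main() {
    // Field order: sec min hour day-of-month month day-of-week
    let schedule = Schedule::from_str("30 * * * * *").expect("valid cron expression");
    for next in schedule.upcoming(Utc).take(3) {
        println!("next run: {next}");
    }
}

Since the loop above only checks on each tick, a job can fire up to one poll interval (30 seconds by default) after its scheduled time.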
src/static/scripts/bootstrap-native.js (vendored, 517 changed lines): file diff suppressed because it is too large.
src/static/scripts/bootstrap.css (vendored, 790 changed lines): file diff suppressed because it is too large.
src/static/scripts/datatables.css (vendored, 110 changed lines):
@@ -4,13 +4,94 @@
 *
 * To rebuild or modify this file with the latest versions of the included
 * software please visit:
 *   https://datatables.net/download/#bs5/dt-1.10.25
 *   https://datatables.net/download/#bs5/dt-1.11.3
 *
 * Included libraries:
 *   DataTables 1.10.25
 *   DataTables 1.11.3
 */

@charset "UTF-8";
td.dt-control {
  background: url("https://www.datatables.net/examples/resources/details_open.png") no-repeat center center;
  cursor: pointer;
}

tr.dt-hasChild td.dt-control {
  background: url("https://www.datatables.net/examples/resources/details_close.png") no-repeat center center;
}

table.dataTable th.dt-left,
table.dataTable td.dt-left {
  text-align: left;
}
table.dataTable th.dt-center,
table.dataTable td.dt-center,
table.dataTable td.dataTables_empty {
  text-align: center;
}
table.dataTable th.dt-right,
table.dataTable td.dt-right {
  text-align: right;
}
table.dataTable th.dt-justify,
table.dataTable td.dt-justify {
  text-align: justify;
}
table.dataTable th.dt-nowrap,
table.dataTable td.dt-nowrap {
  white-space: nowrap;
}
table.dataTable thead th.dt-head-left,
table.dataTable thead td.dt-head-left,
table.dataTable tfoot th.dt-head-left,
table.dataTable tfoot td.dt-head-left {
  text-align: left;
}
table.dataTable thead th.dt-head-center,
table.dataTable thead td.dt-head-center,
table.dataTable tfoot th.dt-head-center,
table.dataTable tfoot td.dt-head-center {
  text-align: center;
}
table.dataTable thead th.dt-head-right,
table.dataTable thead td.dt-head-right,
table.dataTable tfoot th.dt-head-right,
table.dataTable tfoot td.dt-head-right {
  text-align: right;
}
table.dataTable thead th.dt-head-justify,
table.dataTable thead td.dt-head-justify,
table.dataTable tfoot th.dt-head-justify,
table.dataTable tfoot td.dt-head-justify {
  text-align: justify;
}
table.dataTable thead th.dt-head-nowrap,
table.dataTable thead td.dt-head-nowrap,
table.dataTable tfoot th.dt-head-nowrap,
table.dataTable tfoot td.dt-head-nowrap {
  white-space: nowrap;
}
table.dataTable tbody th.dt-body-left,
table.dataTable tbody td.dt-body-left {
  text-align: left;
}
table.dataTable tbody th.dt-body-center,
table.dataTable tbody td.dt-body-center {
  text-align: center;
}
table.dataTable tbody th.dt-body-right,
table.dataTable tbody td.dt-body-right {
  text-align: right;
}
table.dataTable tbody th.dt-body-justify,
table.dataTable tbody td.dt-body-justify {
  text-align: justify;
}
table.dataTable tbody th.dt-body-nowrap,
table.dataTable tbody td.dt-body-nowrap {
  white-space: nowrap;
}

/*! Bootstrap 5 integration for DataTables
 *
 * ©2020 SpryMedia Ltd, all rights reserved.
@@ -143,21 +224,21 @@ div.dataTables_scrollHead table.dataTable {
  margin-bottom: 0 !important;
}

div.dataTables_scrollBody table {
div.dataTables_scrollBody > table {
  border-top: none;
  margin-top: 0 !important;
  margin-bottom: 0 !important;
}
div.dataTables_scrollBody table thead .sorting:before,
div.dataTables_scrollBody table thead .sorting_asc:before,
div.dataTables_scrollBody table thead .sorting_desc:before,
div.dataTables_scrollBody table thead .sorting:after,
div.dataTables_scrollBody table thead .sorting_asc:after,
div.dataTables_scrollBody table thead .sorting_desc:after {
div.dataTables_scrollBody > table > thead .sorting:before,
div.dataTables_scrollBody > table > thead .sorting_asc:before,
div.dataTables_scrollBody > table > thead .sorting_desc:before,
div.dataTables_scrollBody > table > thead .sorting:after,
div.dataTables_scrollBody > table > thead .sorting_asc:after,
div.dataTables_scrollBody > table > thead .sorting_desc:after {
  display: none;
}
div.dataTables_scrollBody table tbody tr:first-child th,
div.dataTables_scrollBody table tbody tr:first-child td {
div.dataTables_scrollBody > table > tbody tr:first-child th,
div.dataTables_scrollBody > table > tbody tr:first-child td {
  border-top: none;
}

@@ -235,4 +316,11 @@ div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:last-
  padding-right: 0;
}

table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1) {
  --bs-table-accent-bg: transparent;
}
table.dataTable.table-striped > tbody > tr.odd {
  --bs-table-accent-bg: var(--bs-table-striped-bg);
}
src/static/scripts/datatables.js (vendored, 1037 changed lines): file diff suppressed because it is too large.
@@ -62,8 +62,8 @@
        headers: { "Content-Type": "application/json" }
    }).then( resp => {
        if (resp.ok) { msg(successMsg, reload_page); return Promise.reject({error: false}); }
        respStatus = resp.status;
        respStatusText = resp.statusText;
        const respStatus = resp.status;
        const respStatusText = resp.statusText;
        return resp.text();
    }).then( respText => {
        try {
@@ -126,9 +126,9 @@

    // get current URL path and assign 'active' class to the correct nav-item
    (() => {
        var pathname = window.location.pathname;
        const pathname = window.location.pathname;
        if (pathname === "") return;
        var navItem = document.querySelectorAll('.navbar-nav .nav-item a[href="'+pathname+'"]');
        let navItem = document.querySelectorAll('.navbar-nav .nav-item a[href="'+pathname+'"]');
        if (navItem.length === 1) {
            navItem[0].className = navItem[0].className + ' active';
            navItem[0].setAttribute('aria-current', 'page');
@@ -58,7 +58,7 @@
<dt class="col-sm-5">Running within Docker</dt>
<dd class="col-sm-7">
    {{#if page_data.running_within_docker}}
    <span class="d-block"><b>Yes</b></span>
    <span class="d-block"><b>Yes (Base: {{ page_data.docker_base_image }})</b></span>
    {{/if}}
    {{#unless page_data.running_within_docker}}
    <span class="d-block"><b>No</b></span>
@@ -150,7 +150,7 @@

<dt class="col-sm-5">Domain configuration
    <span class="badge bg-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
    <span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not matches the browsers location.
The domain variable does not seem to be configured correctly.
Some features may not work as expected!">No Match</span>
    <span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not match the browser location.
The domain variable does not seem to be configured correctly.
Some features may not work as expected!">No Match</span>
    <span class="badge bg-success d-none" id="https-success" title="Configured to use HTTPS">HTTPS</span>
    <span class="badge bg-danger d-none" id="https-warning" title="Not configured to use HTTPS.
Some features may not work as expected!">No HTTPS</span>
</dt>
@@ -329,7 +329,7 @@

    supportString += "* Vaultwarden version: v{{ version }}\n";
    supportString += "* Web-vault version: v{{ page_data.web_vault_version }}\n";
    supportString += "* Running within Docker: {{ page_data.running_within_docker }}\n";
    supportString += "* Running within Docker: {{ page_data.running_within_docker }} (Base: {{ page_data.docker_base_image }})\n";
    supportString += "* Environment settings overridden: ";
    {{#if page_data.overrides}}
        supportString += "true\n"
@@ -37,8 +37,8 @@
    <span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
    {{/if}}
</td>
<td class="text-end pe-2 small">
    <a class="d-block" href="#" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</a>
<td class="text-end px-0 small">
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</button>
</td>
</tr>
{{/each}}
@@ -12,9 +12,7 @@
{{#each config}}
{{#if groupdoc}}
<div class="card bg-light mb-3">
    <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">
        <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">{{groupdoc}}</button>
    </div>
    <button id="b_{{group}}" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_{{group}}" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">{{groupdoc}}</button>
    <div id="g_{{group}}" class="card-body collapse">
        {{#each elements}}
        {{#if editable}}
@@ -61,10 +59,8 @@
{{/each}}

<div class="card bg-light mb-3">
    <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_readonly">
        <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_readonly">Read-Only Config</button>
    </div>

    <button id="b_readonly" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_readonly"
        data-bs-toggle="collapse" data-bs-target="#g_readonly">Read-Only Config</button>
    <div id="g_readonly" class="card-body collapse">
        <div class="small mb-3">
            NOTE: These options can't be modified in the editor because they would require the server
@@ -109,9 +105,8 @@

{{#if can_backup}}
<div class="card bg-light mb-3">
    <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_database">
        <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_database">Backup Database</button>
    </div>
    <button id="b_database" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_database"
        data-bs-toggle="collapse" data-bs-target="#g_database">Backup Database</button>
    <div id="g_database" class="card-body collapse">
        <div class="small mb-3">
            WARNING: This function only creates a backup copy of the SQLite database.
|
||||
onChange(); // Trigger the event initially
|
||||
checkbox.addEventListener("change", onChange);
|
||||
}
|
||||
// These are formatted because otherwise the
|
||||
// VSCode formatter breaks But they still work
|
||||
// {{#each config}} {{#if grouptoggle}}
|
||||
|
||||
{{#each config}} {{#if grouptoggle}}
|
||||
masterCheck("input_{{grouptoggle}}", "#g_{{group}} input");
|
||||
// {{/if}} {{/each}}
|
||||
{{/if}} {{/each}}
|
||||
|
||||
// Two functions to help check if there were changes to the form fields
|
||||
// Useful for example during the smtp test to prevent people from clicking save before testing there new settings
|
||||
|
@@ -61,16 +61,16 @@
    {{/each}}
    </div>
</td>
<td class="text-end pe-2 small">
<td class="text-end px-0 small">
    {{#if TwoFactorEnabled}}
    <a class="d-block" href="#" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</a>
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</button>
    {{/if}}
    <a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
    <a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</button>
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</button>
    {{#if user_enabled}}
    <a class="d-block" href="#" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</a>
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</button>
    {{else}}
    <a class="d-block" href="#" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</a>
    <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</button>
    {{/if}}
</td>
</tr>
@@ -1,23 +1,22 @@
</td>
</tr>
</table>
</td>
</tr>
</td>
</tr>
</table>
<table class="footer" cellpadding="0" cellspacing="0" width="100%" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; width: 100%;">
</td>
</tr>
</table>

<table class="footer" cellpadding="0" cellspacing="0" width="100%" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; clear: both; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; width: 100%;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td class="aligncenter social-icons" align="center" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 15px 0 0 0;" valign="top">
<table cellpadding="0" cellspacing="0" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0 auto;">
<tr style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0;">
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr>
</table>
</td>
<td style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; padding: 0 10px;" valign="top"><a href="https://github.com/dani-garcia/vaultwarden" target="_blank" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; box-sizing: border-box; color: #999; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 12px; line-height: 20px; margin: 0; text-decoration: underline;"><img src="{{url}}/bwrs_static/mail-github.png" alt="GitHub" width="30" height="30" style="-webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none; border: none; box-sizing: border-box; color: #333; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; margin: 0; max-width: 100%;" /></a></td>
</tr>
</table>
</td>
</tr>
</table>

</td>
</tr>
</table>
@@ -0,0 +1,8 @@
Emergency access contact {{{grantee_email}}} accepted
<!---------------->
This email is to notify you that {{grantee_email}} has accepted your invitation to become an emergency access contact.

To confirm this user, log into the web vault ({{url}}), go to settings and confirm the user.

If you do not wish to confirm this user, you can also remove them on the same page.
{{> email/email_footer_text }}
@@ -0,0 +1,21 @@
Emergency access contact {{{grantee_email}}} accepted
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
This email is to notify you that {{grantee_email}} has accepted your invitation to become an emergency access contact.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
To confirm this user, log into the <a href="{{url}}/">web vault</a>, go to settings and confirm the user.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
If you do not wish to confirm this user, you can also remove them on the same page.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,6 @@
Emergency access contact for {{{grantor_name}}} confirmed
<!---------------->
This email is to notify you that you have been confirmed as an emergency access contact for *{{grantor_name}}*.

You can now initiate emergency access requests from the web vault ({{url}}).
{{> email/email_footer_text }}
@@ -0,0 +1,16 @@
Emergency access contact for {{{grantor_name}}} confirmed
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
This email is to notify you that you have been confirmed as an emergency access contact for <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b>.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
You can now initiate emergency access requests from the <a href="{{url}}/">web vault</a>.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,4 @@
Emergency access request for {{{grantor_name}}} approved
<!---------------->
{{grantor_name}} has approved your emergency access request. You may now log in to the web vault ({{url}}) and access their account.
{{> email/email_footer_text }}
@@ -0,0 +1,11 @@
Emergency access request for {{{grantor_name}}} approved
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b> has approved your emergency access request. You may now log in to the <a href="{{url}}/">web vault</a> and access their account.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,6 @@
Emergency access request by {{{grantee_name}}} initiated
<!---------------->
{{grantee_name}} has initiated an emergency access request to {{atype}} your account. You may log in to the web vault ({{url}}) and manually approve or reject this request.

If you do nothing, the request will automatically be approved after {{wait_time_days}} day(s).
{{> email/email_footer_text }}
Some files were not shown because too many files have changed in this diff.