Compare commits

...

36 Commits

Author SHA1 Message Date
Daniel García
9323c57f49 Remove debug print 2021-02-07 00:22:39 +01:00
Daniel García
85e3c73525 Basic experimental LDAP import support with the official directory connector 2021-02-06 20:15:42 +01:00
Daniel García
a74bc2e58f Update web vault to 2.18.1b 2021-02-06 16:49:49 +01:00
Daniel García
0680638933 Update dependencies 2021-02-06 16:49:28 +01:00
Daniel García
46d31ee5f7 Merge pull request #1356 from BlackDex/fix-config-bug
Fixed a small bug in validation
2021-02-03 23:50:49 +01:00
BlackDex
e794b397d3 Fixed a small bug in validation 2021-02-03 23:47:48 +01:00
Daniel García
d41350050b Merge pull request #1353 from BlackDex/admin-interface
Extra features for admin interface.
2021-02-03 22:50:15 +01:00
Mathijs van Veluw
4cd5b06b7f Merge branch 'master' into admin-interface 2021-02-03 22:41:59 +01:00
Daniel García
cd768439d2 Merge pull request #1329 from BlackDex/misc-updates
JSON Response updates and small fixes
2021-02-03 22:37:59 +01:00
Mathijs van Veluw
9e5fd2d576 Merge branch 'master' into admin-interface 2021-02-03 22:22:33 +01:00
Mathijs van Veluw
ecb46f591c Merge branch 'master' into misc-updates 2021-02-03 22:22:06 +01:00
Daniel García
d62d53aa8e Merge pull request #1341 from BlackDex/dep-update
Updated dependencies and small mail fixes
2021-02-03 22:19:18 +01:00
Daniel García
2c515ab13c Merge pull request #1355 from jjlin/global-domains
Sync global_domains.json with upstream
2021-02-03 22:17:57 +01:00
Jeremy Lin
83d556ff0c Sync global_domains.json to bitwarden/server@cf84453 (Disney, Sony) 2021-02-03 12:22:03 -08:00
Jeremy Lin
678d313836 global_domains.py: allow syncing to a specific Git ref 2021-02-03 12:20:44 -08:00
BlackDex
705d840ea3 Extra features for admin interface.
- Able to modify the user type per organization
- Able to remove a whole organization
- Added podman detection
- Only show web-vault update when not running a containerized
  bitwarden_rs

Solves #936
2021-02-03 18:43:54 +01:00
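
An illustrative sketch of the container detection mentioned above, matching the `/.dockerenv` and `/run/.containerenv` checks that appear in the diagnostics diff further down:

    use std::path::Path;

    // Docker creates `/.dockerenv` inside its containers; Podman creates
    // `/run/.containerenv`. Either marker means we are running containerized.
    fn running_within_container() -> bool {
        Path::new("/.dockerenv").exists() || Path::new("/run/.containerenv").exists()
    }
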
BlackDex
7dff8c01dd JSON Response updates and small fixes
Updated several JSON response models.
Also fixed a few small bugs.

ciphers.rs:
  - post_ciphers_create:
    * Prevent cipher creation in an organization without a collection.
  - update_cipher_from_data:
    * ~~Fixed removal of user_uuid, which caused user-owned shared ciphers to become non-editable when set to read-only.~~
    * Clean up the json_data by removing the `Response` key/values from several objects.
  - delete_all:
    * Do not delete all Collections during the Purge of an Organization (same as upstream).

cipher.rs:
  - Cipher::to_json:
    * Updated the JSON response to match upstream.
    * Return an empty JSON object if there is no type_data, instead of values that should not be set for it.

organizations.rs:
  * Added two new endpoints to prevent JavaScript errors regarding tax

organization.rs:
  - Organization::to_json:
    * Updated response model to match upstream
  - UserOrganization::to_json:
    * Updated response model to match upstream

collection.rs:
  - Collection::{to_json, to_json_details}:
    * Updated the JSON response model, and added a detailed version used during sync
  - hide_passwords_for_user:
    * Added this function to return whether the passwords should be hidden for the user at the specific collection (used by `to_json_details`)

Update 1: Some small changes after comments from @jjlin.
Update 2: Fixed vault purge by user to make sure the cipher is not part of an organization.

Resolves #971
Closes #990, Closes #991
2021-01-31 21:46:37 +01:00
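
A hedged illustration of the `to_json_details` change described above; the field set and the `is_writable_by_user` helper are assumptions based on the commit message and surrounding codebase, not the verbatim patch:

    impl Collection {
        // Detailed variant used during sync: includes the per-user
        // ReadOnly and HidePasswords flags alongside the base fields.
        pub fn to_json_details(&self, user_uuid: &str, conn: &DbConn) -> Value {
            json!({
                "Id": self.uuid,
                "OrganizationId": self.org_uuid,
                "Name": self.name,
                "ReadOnly": !self.is_writable_by_user(user_uuid, conn),
                "HidePasswords": self.hide_passwords_for_user(user_uuid, conn),
                "Object": "collectionDetails",
            })
        }
    }
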
BlackDex
5860679624 Updated dependencies and small mail fixes
- Updated Rust nightly
- Updated dependencies
- Removed Unicode support for regex (fewer dependencies)
- Fixed dependency and nightly changes/deprecations
- Some mail changes to trigger fewer spam points
2021-01-31 20:07:42 +01:00
Daniel García
4628e4519d Update web vault to 2.18.1 2021-01-27 16:08:11 +01:00
Mathijs van Veluw
b884fd20a1 Merge pull request #1333 from jjlin/fix-manager-access
Fix collection access issues for owner/admin users
2021-01-27 08:07:20 +01:00
Jeremy Lin
67c657003d Fix collection access issues for owner/admin users
The implementation of the `Manager` user type (#1242) introduced a regression
whereby owner/admin users are incorrectly denied access to certain collection
APIs if their access control for collections isn't set to "access all".

Owner/admin users should always have full access to collection APIs, per
https://bitwarden.com/help/article/user-types-access-control/#access-control:

> Assigning Admins and Owners to Collections via Access Control will only
> impact which Collections appear readily in the Filters section of their
> Vault. Admins and Owners will always be able to access "un-assigned"
> Collections via the Organization view.
2021-01-26 22:35:09 -08:00
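
A minimal sketch of the intended rule, assuming the `UserOrganization` model with an `access_all` flag and an ordered `atype` (names follow the codebase style; illustrative, not the exact patch):

    impl UserOrganization {
        pub fn has_full_access(&self) -> bool {
            // Owners and admins always get full collection access; per-collection
            // assignments only affect which collections appear in vault filters.
            self.access_all || self.atype >= UserOrgType::Admin
        }
    }
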
Daniel García
580c1bbc7d Update web vault to 2.18.0 2021-01-25 12:27:57 +01:00
Daniel García
2b6383d243 Merge pull request #1327 from jjlin/dockerfile-cleanup
Dockerfile.j2: clean up web-vault section
2021-01-25 12:24:04 +01:00
Daniel García
f27455a26f Merge pull request #1328 from jjlin/restore-rev-date
Add cipher response to restore operations
2021-01-25 12:23:00 +01:00
Jeremy Lin
1d4f900e48 Add cipher response to restore operations
This matches changes in the upstream Bitwarden server and clients.

Upstream PR: https://github.com/bitwarden/server/pull/1072
2021-01-24 21:57:32 -08:00
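
A minimal sketch of the handler body for such a restore endpoint, assuming the existing `Cipher` model and helpers (hypothetical, not the verbatim patch): clear the deletion timestamp, save, and return the cipher JSON so clients pick up the new revision date instead of an empty response.

    // Hypothetical restore body; helper names follow the surrounding codebase style.
    cipher.deleted_at = None;
    cipher.save(&conn)?;
    Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
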
Jeremy Lin
c5ca588a6f Dockerfile.j2: clean up web-vault section 2021-01-24 17:26:25 -08:00
Daniel García
06888251e3 Merge pull request #1326 from jjlin/personal-ownership
Add support for the Personal Ownership policy
2021-01-24 14:09:12 +01:00
Daniel García
1a6e4cf4e4 Merge pull request #1321 from mkilchhofer/feature/improve_shutdown_behavior
Improve shutdown behavior (on Kubernetes and allow Ctrl+C)
2021-01-24 14:06:15 +01:00
Jeremy Lin
9f86196a9d Add support for the Personal Ownership policy
Upstream refs:

* https://github.com/bitwarden/server/pull/1013
* https://bitwarden.com/help/article/policies/#personal-ownership
2021-01-23 20:50:06 -08:00
Marco Kilchhofer
1e31043fb3 Improve shutdown behavior (on Kubernetes) 2021-01-22 11:50:24 +01:00
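
The Dockerfile changes below pair this with `dumb-init` as PID 1 so that SIGTERM and SIGINT actually reach the server process. On the Rust side, a minimal sketch of the signal handling, assuming the `ctrlc` crate (with its `termination` feature the handler also fires on SIGTERM, which Kubernetes sends on pod shutdown):

    ctrlc::set_handler(move || {
        // Exit cleanly instead of being killed without cleanup as PID 1.
        println!("Exiting bitwarden_rs!");
        std::process::exit(0);
    })
    .expect("Error setting signal handler");
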
Daniel García
85adcf1ae5 Merge pull request #1316 from BlackDex/admin-interface
Updated the admin interface
2021-01-19 21:58:21 +01:00
Daniel García
9abb4d2873 Merge pull request #1314 from jjlin/image-labels
Add `org.opencontainers` labels to Docker images
2021-01-19 21:53:27 +01:00
BlackDex
235ff44736 Updated the admin interface
Mostly admin interface updates, plus some other small items.

- Added more diagnostic information to (hopefully) reduce issue
  reports, or at least resolve them more quickly.
- Added an option to generate a support string that can be
  copy/pasted on the forum or when creating an issue. It will
  try to hide sensitive information automatically.
- Moved the `Created At` and `Last Active` info into sortable
  columns in the users overview.
- Some small layout changes.
- Updated JavaScript and CSS files to the latest versions available.
- Decreased the PNG file sizes using `oxipng`.
- Updated target='_blank' links to have rel='noreferrer' to prevent
  JavaScript window.opener modifications.
2021-01-19 17:55:21 +01:00
Jeremy Lin
9c2d741749 Add org.opencontainers labels to Docker images 2021-01-18 01:10:41 -08:00
Daniel García
37cc0c34cf Merge pull request #1304 from jjlin/buildx
Use Docker Buildx for multi-arch builds
2021-01-12 21:51:33 +01:00
Jeremy Lin
5633b6ac94 Use Docker Buildx for multi-arch builds
The bitwarden_rs code is still cross-compiled exactly as before, but Docker
Buildx is used to rewrite the resulting Docker images with correct platform
metadata (reflecting the target platform instead of the build platform).
Buildx also now handles building and pushing the multi-arch manifest lists.
2021-01-09 02:33:36 -08:00
46 changed files with 1772 additions and 744 deletions

Cargo.lock (generated, 744 lines changed)

File diff suppressed because it is too large.

@@ -32,27 +32,26 @@ rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "0.5.0-dev"
# HTTP client
reqwest = { version = "0.10.10", features = ["blocking", "json"] }
reqwest = { version = "0.11.0", features = ["blocking", "json"] }
# multipart/form-data support
multipart = { version = "0.17.0", features = ["server"], default-features = false }
multipart = { version = "0.17.1", features = ["server"], default-features = false }
# WebSockets library
ws = { version = "0.10.0", package = "parity-ws" }
# MessagePack library
rmpv = "0.4.6"
rmpv = "0.4.7"
# Concurrent hashmap implementation
chashmap = "2.2.2"
# A generic serialization/deserialization framework
serde = "1.0.118"
serde_derive = "1.0.118"
serde_json = "1.0.60"
serde = { version = "1.0.123", features = ["derive"] }
serde_json = "1.0.62"
# Logging
log = "0.4.11"
log = "0.4.14"
fern = { version = "0.6.0", features = ["syslog-4"] }
# A safe, extensible ORM and Query builder
@@ -63,22 +62,22 @@ diesel_migrations = "1.4.0"
libsqlite3-sys = { version = "0.18.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.7.3"
ring = "0.16.19"
rand = "0.8.3"
ring = "0.16.20"
# UUID generation
uuid = { version = "0.8.1", features = ["v4"] }
uuid = { version = "0.8.2", features = ["v4"] }
# Date and time libraries
chrono = "0.4.19"
chrono-tz = "0.5.3"
time = "0.2.23"
time = "0.2.25"
# TOTP library
oath = "0.10.2"
# Data encoding library
data-encoding = "2.3.1"
data-encoding = "2.3.2"
# JWT library
jsonwebtoken = "7.2.0"
@@ -87,7 +86,7 @@ jsonwebtoken = "7.2.0"
u2f = "0.2.0"
# Yubico Library
yubico = { version = "0.9.1", features = ["online-tokio"], default-features = false }
yubico = { version = "0.9.2", features = ["online-tokio"], default-features = false }
# A `dotenv` implementation for Rust
dotenv = { version = "0.15.0", default-features = false }
@@ -100,30 +99,30 @@ num-traits = "0.2.14"
num-derive = "0.3.3"
# Email libraries
lettre = { version = "0.10.0-alpha.4", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
lettre = { version = "0.10.0-alpha.5", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
newline-converter = "0.1.0"
# Template library
handlebars = { version = "3.5.1", features = ["dir_source"] }
handlebars = { version = "3.5.2", features = ["dir_source"] }
# For favicon extraction from main website
soup = "0.5.0"
regex = "1.4.2"
regex = { version = "1.4.3", features = ["std", "perf"], default-features = false }
data-url = "0.1.0"
# Used by U2F, JWT and Postgres
openssl = "0.10.31"
openssl = "0.10.32"
# URL encoding library
percent-encoding = "2.1.0"
# Punycode conversion
idna = "0.2.0"
idna = "0.2.1"
# CLI argument parsing
structopt = "0.3.21"
# Logging panics to logfile instead stderr only
backtrace = "0.3.55"
backtrace = "0.3.56"
# Macro ident concatenation
paste = "1.0.4"

docker/Dockerfile.buildx (new file, 33 lines)

@@ -0,0 +1,33 @@
# The cross-built images have the build arch (`amd64`) embedded in the image
# manifest, rather than the target arch. For example:
#
# $ docker inspect bitwardenrs/server:latest-armv7 | jq -r '.[]|.Architecture'
# amd64
#
# Recent versions of Docker have started printing a warning when the image's
# claimed arch doesn't match the host arch. For example:
#
# WARNING: The requested image's platform (linux/amd64) does not match the
# detected host platform (linux/arm/v7) and no specific platform was requested
#
# The image still works fine, but the spurious warning creates confusion.
#
# Docker doesn't seem to provide a way to directly set the arch of an image
# at build time. To resolve the build vs. target arch discrepancy, we use
# Docker Buildx to build a new set of images with the correct target arch.
#
# Docker Buildx uses this Dockerfile to build an image for each requested
# platform. Since the Dockerfile basically consists of a single `FROM`
# instruction, we're effectively telling Buildx to build a platform-specific
# image by simply copying the existing cross-built image and setting the
# correct target arch as a side effect.
#
# References:
#
# - https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images
# - https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
# - https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
#
ARG LOCAL_REPO
ARG DOCKER_TAG
FROM ${LOCAL_REPO}:${DOCKER_TAG}-${TARGETARCH}${TARGETVARIANT}
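
An illustrative invocation of this Dockerfile, mirroring the `docker buildx build` call in `hooks/push` further down (repo name and tag are example values):

    $ docker buildx build \
        --network host \
        --build-arg LOCAL_REPO=localhost:5000/bitwardenrs/server \
        --build-arg DOCKER_TAG=testing \
        --platform linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64 \
        --tag bitwardenrs/server:testing \
        --push \
        - < docker/Dockerfile.buildx
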


@@ -1,30 +1,30 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
{% set build_stage_base_image = "rust:1.48" %}
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2020-11-22" %}
{% set build_stage_base_image = "clux/muslrust:nightly-2021-01-25" %}
{% set runtime_stage_base_image = "alpine:3.12" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "arm32v7" in target_file %}
{% elif "armv7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.12" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% endif %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %}
{% elif "arm64v8" in target_file %}
{% elif "arm64" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set package_arch_name = "arm64" %}
{% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "arm32v6" in target_file %}
{% elif "armv6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set package_arch_name = "armel" %}
{% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "arm32v7" in target_file %}
{% elif "armv7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
@@ -44,19 +44,26 @@
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_image_hash = "sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0" %}
{% raw %}
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
{% set vault_version = "2.18.1b" %}
{% set vault_image_digest = "sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb" %}
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
{% endraw %}
FROM bitwardenrs/web-vault@{{ vault_image_hash }} as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v{{ vault_version }}
# $ docker image inspect --format "{{ '{{' }}.RepoDigests}}" bitwardenrs/web-vault:v{{ vault_version }}
# [bitwardenrs/web-vault@{{ vault_image_digest }}]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" bitwardenrs/web-vault@{{ vault_image_digest }}
# [bitwardenrs/web-vault:v{{ vault_version }}]
#
FROM bitwardenrs/web-vault@{{ vault_image_digest }} as vault
########################## BUILD IMAGE ##########################
FROM {{ build_stage_base_image }} as build
@@ -178,7 +185,7 @@ RUN touch src/main.rs
# your actual source files being built
RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
{% if "arm32v7" in target_file %}
{% if "armv7" in target_file %}
RUN musl-strip target/{{ package_arch_target }}/release/bitwarden_rs
{% endif %}
{% endif %}
@@ -204,6 +211,7 @@ RUN [ "cross-build-start" ]
RUN apk add --no-cache \
openssl \
curl \
dumb-init \
{% if "sqlite" in features %}
sqlite \
{% endif %}
@@ -220,14 +228,12 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
{% if "alpine" in target_file and "arm32v7" in target_file %}
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit
{% endif %}
RUN mkdir /data
{% if "amd64" not in target_file %}
@@ -256,8 +262,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
{% if "alpine" in target_file and "arm32v7" in target_file %}
CMD ["catatonit", "/start.sh"]
{% else %}
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
{% endif %}


@@ -1,21 +1,28 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM rust:1.48 as build
@@ -78,6 +85,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -101,5 +109,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,24 +1,31 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM clux/muslrust:nightly-2020-11-22 as build
FROM clux/muslrust:nightly-2021-01-25 as build
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
@@ -74,6 +81,7 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
RUN apk add --no-cache \
openssl \
curl \
dumb-init \
sqlite \
postgresql-libs \
ca-certificates
@@ -96,5 +104,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,21 +1,28 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM rust:1.48 as build
@@ -121,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -147,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,21 +1,28 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM rust:1.48 as build
@@ -121,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -147,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,21 +1,28 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM rust:1.48 as build
@@ -121,6 +128,7 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
dumb-init \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
@@ -147,5 +155,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -1,21 +1,28 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/bitwardenrs/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull bitwardenrs/web-vault:v2.18.1b
# $ docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.18.1b
# [bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb
# [bitwardenrs/web-vault:v2.18.1b]
#
FROM bitwardenrs/web-vault@sha256:345a509dd5482343458b672dcd69203836ffac2e5181a1c99826d9695b9cb1eb as vault
########################## BUILD IMAGE ##########################
FROM messense/rust-musl-cross:armv7-musleabihf as build
@@ -77,9 +84,9 @@ RUN [ "cross-build-start" ]
RUN apk add --no-cache \
openssl \
curl \
dumb-init \
sqlite \
ca-certificates
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit
RUN mkdir /data
@@ -102,5 +109,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["catatonit", "/start.sh"]
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]


@@ -10,7 +10,7 @@ Docker Hub hooks provide these predefined [environment variables](https://docs.d
* `DOCKER_TAG`: the Docker repository tag being built.
* `IMAGE_NAME`: the name and tag of the Docker repository being built. (This variable is a combination of `DOCKER_REPO:DOCKER_TAG`.)
The current multi-arch image build relies on the original bitwarden_rs Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/database/OS combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
The current multi-arch image build relies on the original bitwarden_rs Dockerfiles, which use cross-compilation for architectures other than `amd64`, and don't yet support all arch/distro combinations. However, cross-compilation is much faster than QEMU-based builds (e.g., using `docker buildx`). This situation may need to be revisited at some point.
## References


@@ -1,19 +1,16 @@
# The default Debian-based images support these arches for all database connections
#
# Other images (Alpine-based) currently
# support only a subset of these.
# The default Debian-based images support these arches for all database backends.
arches=(
amd64
arm32v6
arm32v7
arm64v8
armv6
armv7
arm64
)
if [[ "${DOCKER_TAG}" == *alpine ]]; then
# The Alpine build currently only works for amd64.
os_suffix=.alpine
# The Alpine image build currently only works for certain arches.
distro_suffix=.alpine
arches=(
amd64
arm32v7
armv7
)
fi


@@ -4,11 +4,42 @@ echo ">>> Building images..."
source ./hooks/arches.sh
if [[ -z "${SOURCE_COMMIT}" ]]; then
# This var is typically predefined by Docker Hub, but it won't be
# when testing locally.
SOURCE_COMMIT="$(git rev-parse HEAD)"
fi
# Construct a version string in the style of `build.rs`.
GIT_EXACT_TAG="$(git describe --tags --abbrev=0 --exact-match 2>/dev/null)"
if [[ -n "${GIT_EXACT_TAG}" ]]; then
SOURCE_VERSION="${GIT_EXACT_TAG}"
else
GIT_LAST_TAG="$(git describe --tags --abbrev=0)"
SOURCE_VERSION="${GIT_LAST_TAG}-${SOURCE_COMMIT:0:8}"
fi
LABELS=(
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
org.opencontainers.image.created="$(date --utc --iso-8601=seconds)"
org.opencontainers.image.documentation="https://github.com/dani-garcia/bitwarden_rs/wiki"
org.opencontainers.image.licenses="GPL-3.0-only"
org.opencontainers.image.revision="${SOURCE_COMMIT}"
org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}"
org.opencontainers.image.url="https://hub.docker.com/r/${DOCKER_REPO#*/}"
org.opencontainers.image.version="${SOURCE_VERSION}"
)
LABEL_ARGS=()
for label in "${LABELS[@]}"; do
LABEL_ARGS+=(--label "${label}")
done
set -ex
for arch in "${arches[@]}"; do
docker build \
"${LABEL_ARGS[@]}" \
-t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-f docker/${arch}/Dockerfile${os_suffix} \
-f docker/${arch}/Dockerfile${distro_suffix} \
.
done

hooks/pre_build (new executable file, 28 lines)

@@ -0,0 +1,28 @@
#!/bin/bash
set -ex
# If requested, print some environment info for troubleshooting.
if [[ -n "${DOCKER_HUB_DEBUG}" ]]; then
id
pwd
df -h
env
docker info
docker version
fi
# Install build dependencies.
deps=(
jq
)
apt-get update
apt-get install -y "${deps[@]}"
# Docker Hub uses a shallow clone and doesn't fetch tags, which breaks some
# Git operations that we perform later, so fetch the complete history and
# tags first. Note that if the build is cached, the clone may have been
# unshallowed already; if so, unshallowing will fail, so skip it.
if [[ -f .git/shallow ]]; then
git fetch --unshallow --tags
fi


@@ -1,117 +1,138 @@
#!/bin/bash
echo ">>> Pushing images..."
export DOCKER_CLI_EXPERIMENTAL=enabled
declare -A annotations=(
[amd64]="--os linux --arch amd64"
[arm32v6]="--os linux --arch arm --variant v6"
[arm32v7]="--os linux --arch arm --variant v7"
[arm64v8]="--os linux --arch arm64 --variant v8"
)
source ./hooks/arches.sh
export DOCKER_CLI_EXPERIMENTAL=enabled
# Join a list of args with a single char.
# Ref: https://stackoverflow.com/a/17841619
join() { local IFS="$1"; shift; echo "$*"; }
set -ex
declare -A images
echo ">>> Starting local Docker registry..."
# Docker Buildx's `docker-container` driver is needed for multi-platform
# builds, but it can't access existing images on the Docker host (like the
# cross-compiled ones we just built). Those images first need to be pushed to
# a registry -- Docker Hub could be used, but since it's not trivial to clean
# up those intermediate images on Docker Hub, it's easier to just run a local
# Docker registry, which gets cleaned up automatically once the build job ends.
#
# https://docs.docker.com/registry/deploying/
# https://hub.docker.com/_/registry
#
# Use host networking so the buildx container can access the registry via
# localhost.
#
docker run -d --name registry --network host registry:2 # defaults to port 5000
# Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
# Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.
LOCAL_REGISTRY="localhost:5000"
REPO="${DOCKER_REPO#*/}"
LOCAL_REPO="${LOCAL_REGISTRY}/${REPO}"
echo ">>> Pushing images to local registry..."
for arch in ${arches[@]}; do
images[$arch]="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
docker_image="${DOCKER_REPO}:${DOCKER_TAG}-${arch}"
local_image="${LOCAL_REPO}:${DOCKER_TAG}-${arch}"
docker tag "${docker_image}" "${local_image}"
docker push "${local_image}"
done
# Push the images that were just built; manifest list creation fails if the
# images (manifests) referenced don't already exist in the Docker registry.
for image in "${images[@]}"; do
docker push "${image}"
done
echo ">>> Setting up Docker Buildx..."
manifest_lists=("${DOCKER_REPO}:${DOCKER_TAG}")
# Same as earlier, use host networking so the buildx container can access the
# registry via localhost.
#
# Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
#
docker buildx create --name builder --use --driver-opt network=host
# If the Docker tag starts with a version number, assume the latest release is
# being pushed. Add an extra manifest (`latest` or `alpine`, as appropriate)
echo ">>> Running Docker Buildx..."
tags=("${DOCKER_REPO}:${DOCKER_TAG}")
# If the Docker tag starts with a version number, assume the latest release
# is being pushed. Add an extra tag (`latest` or `alpine`, as appropriate)
# to make it easier for users to track the latest release.
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
if [[ "${DOCKER_TAG}" == *alpine ]]; then
manifest_lists+=(${DOCKER_REPO}:alpine)
tags+=(${DOCKER_REPO}:alpine)
else
manifest_lists+=(${DOCKER_REPO}:latest)
# Add an extra `latest-arm32v6` tag; Docker can't seem to properly
# auto-select that image on Armv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
#
# Add this tag only for the SQLite image, as the MySQL and PostgreSQL
# builds don't currently work on non-amd64 arches.
#
# TODO: Also add an `alpine-arm32v6` tag if multi-arch support for
# Alpine-based bitwarden_rs images is implemented before this Docker
# issue is fixed.
if [[ ${DOCKER_REPO} == *server ]]; then
docker tag "${DOCKER_REPO}:${DOCKER_TAG}-arm32v6" "${DOCKER_REPO}:latest-arm32v6"
docker push "${DOCKER_REPO}:latest-arm32v6"
fi
tags+=(${DOCKER_REPO}:latest)
fi
fi
for manifest_list in "${manifest_lists[@]}"; do
# Create the (multi-arch) manifest list of arch-specific images.
docker manifest create ${manifest_list} ${images[@]}
# Make sure each image manifest is annotated with the correct arch info.
# Docker does not auto-detect the arch of each cross-compiled image, so
# everything would appear as `linux/amd64` otherwise.
for arch in "${arches[@]}"; do
docker manifest annotate ${annotations[$arch]} ${manifest_list} ${images[$arch]}
tag_args=()
for tag in "${tags[@]}"; do
tag_args+=(--tag "${tag}")
done
# Push the manifest list.
docker manifest push --purge ${manifest_list}
done
# Avoid logging credentials and tokens.
set +ex
# Delete the arch-specific tags, if credentials for doing so are available.
# Note that `DOCKER_PASSWORD` must be the actual user password. Passing a JWT
# obtained using a personal access token results in a 403 error with
# {"detail": "access to the resource is forbidden with personal access token"}
if [[ -z "${DOCKER_USERNAME}" || -z "${DOCKER_PASSWORD}" ]]; then
exit 0
fi
# Given a JSON input on stdin, extract the string value associated with the
# specified key. This avoids an extra dependency on a tool like `jq`.
extract() {
local key="$1"
# Extract "<key>":"<val>" (assumes key/val won't contain double quotes).
# The colon may have whitespace on either side.
grep -o "\"${key}\"[[:space:]]*:[[:space:]]*\"[^\"]\+\"" |
# Extract just <val> by deleting the last '"', and then greedily deleting
# everything up to '"'.
sed -e 's/"$//' -e 's/.*"//'
}
echo ">>> Getting API token..."
jwt=$(curl -sS -X POST \
-H "Content-Type: application/json" \
-d "{\"username\":\"${DOCKER_USERNAME}\",\"password\": \"${DOCKER_PASSWORD}\"}" \
"https://hub.docker.com/v2/users/login" |
extract 'token')
# Strip the registry portion from `index.docker.io/user/repo`.
repo="${DOCKER_REPO#*/}"
# Docker Buildx takes a list of target platforms (OS/arch/variant), so map
# the arch list to a platform list (assuming the OS is always `linux`).
declare -A arch_to_platform=(
[amd64]="linux/amd64"
[armv6]="linux/arm/v6"
[armv7]="linux/arm/v7"
[arm64]="linux/arm64"
)
platforms=()
for arch in ${arches[@]}; do
# Don't delete the `arm32v6` tag; Docker can't seem to properly
# auto-select that image on Armv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
if [[ ${arch} == 'arm32v6' ]]; then
continue
fi
tag="${DOCKER_TAG}-${arch}"
echo ">>> Deleting '${repo}:${tag}'..."
curl -sS -X DELETE \
-H "Authorization: Bearer ${jwt}" \
"https://hub.docker.com/v2/repositories/${repo}/tags/${tag}/"
platforms+=("${arch_to_platform[$arch]}")
done
platforms="$(join "," "${platforms[@]}")"
# Run the build, pushing the resulting images and multi-arch manifest list to
# Docker Hub. The Dockerfile is read from stdin to avoid sending any build
# context, which isn't needed here since the actual cross-compiled images
# have already been built.
docker buildx build \
--network host \
--build-arg LOCAL_REPO="${LOCAL_REPO}" \
--build-arg DOCKER_TAG="${DOCKER_TAG}" \
--platform "${platforms}" \
"${tag_args[@]}" \
--push \
- < ./docker/Dockerfile.buildx
# Add an extra arch-specific tag for `arm32v6`; Docker can't seem to properly
# auto-select that image on ARMv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
#
# Note that we use `arm32v6` instead of `armv6` to be consistent with the
# existing bitwarden_rs tags, which adhere to the naming conventions of the
# Docker per-architecture repos (e.g., https://hub.docker.com/u/arm32v6).
# Unfortunately, these per-arch repo names aren't always consistent with the
# corresponding platform (OS/arch/variant) IDs, particularly in the case of
# 32-bit ARM arches (e.g., `linux/arm/v6` is used, not `linux/arm32/v6`).
#
# TODO: It looks like this issue should be fixed starting in Docker 20.10.0,
# so this step can be removed once fixed versions are in wider distribution.
#
# Tags:
#
# testing => testing-arm32v6
# testing-alpine => <ignored>
# x.y.z => x.y.z-arm32v6, latest-arm32v6
# x.y.z-alpine => <ignored>
#
if [[ "${DOCKER_TAG}" != *alpine ]]; then
image="${DOCKER_REPO}":"${DOCKER_TAG}"
# Fetch the multi-arch manifest list and find the digest of the armv6 image.
filter='.manifests|.[]|select(.platform.architecture=="arm" and .platform.variant=="v6")|.digest'
digest="$(docker manifest inspect "${image}" | jq -r "${filter}")"
# Pull the armv6 image by digest, retag it, and repush it.
docker pull "${DOCKER_REPO}"@"${digest}"
docker tag "${DOCKER_REPO}"@"${digest}" "${image}"-arm32v6
docker push "${image}"-arm32v6
if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
docker tag "${image}"-arm32v6 "${DOCKER_REPO}:latest"-arm32v6
docker push "${DOCKER_REPO}:latest"-arm32v6
fi
fi


@@ -1 +1 @@
nightly-2020-11-22
nightly-2021-01-25


@@ -1,8 +1,9 @@
use once_cell::sync::Lazy;
use serde::de::DeserializeOwned;
use serde_json::Value;
use std::process::Command;
use std::{env, process::Command, time::Duration};
use reqwest::{blocking::Client, header::USER_AGENT};
use rocket::{
http::{Cookie, Cookies, SameSite},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
@@ -12,13 +13,13 @@ use rocket::{
use rocket_contrib::json::Json;
use crate::{
api::{ApiResult, EmptyResult, JsonResult},
api::{ApiResult, EmptyResult, JsonResult, NumberOrString},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, models::*, DbConn, DbConnType},
error::{Error, MapResult},
mail,
util::{get_display_size, format_naive_datetime_local},
util::{format_naive_datetime_local, get_display_size},
CONFIG,
};
@@ -39,6 +40,7 @@ pub fn routes() -> Vec<Route> {
disable_user,
enable_user,
remove_2fa,
update_user_org_type,
update_revision_users,
post_config,
delete_config,
@@ -46,10 +48,22 @@ pub fn routes() -> Vec<Route> {
test_smtp,
users_overview,
organizations_overview,
delete_organization,
diagnostics,
get_diagnostics_config
]
}
static DB_TYPE: Lazy<&str> = Lazy::new(|| {
DbConnType::from_url(&CONFIG.database_url())
.map(|t| match t {
DbConnType::sqlite => "SQLite",
DbConnType::mysql => "MySQL",
DbConnType::postgresql => "PostgreSQL",
})
.unwrap_or("Unknown")
});
static CAN_BACKUP: Lazy<bool> = Lazy::new(|| {
DbConnType::from_url(&CONFIG.database_url())
.map(|t| t == DbConnType::sqlite)
@@ -307,7 +321,8 @@ fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
None => json!("Never")
};
usr
}).collect();
})
.collect();
let text = AdminTemplateData::users(users_json).render()?;
Ok(Html(text))
@@ -354,6 +369,41 @@ fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
user.save(&conn)
}
#[derive(Deserialize, Debug)]
struct UserOrgTypeData {
user_type: NumberOrString,
user_uuid: String,
org_uuid: String,
}
#[post("/users/org_type", data = "<data>")]
fn update_user_org_type(data: Json<UserOrgTypeData>, _token: AdminToken, conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
let mut user_to_edit = match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &conn) {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid type"),
};
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
// Removing owner permission; check that at least one other owner remains
let num_owners = UserOrganization::find_by_org_and_type(&data.org_uuid, UserOrgType::Owner as i32, &conn).len();
if num_owners <= 1 {
err!("Can't change the type of the last owner")
}
}
user_to_edit.atype = new_type as i32;
user_to_edit.save(&conn)
}
#[post("/users/update_revision")]
fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
User::update_all_revisions(&conn)
@@ -362,19 +412,27 @@ fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
#[get("/organizations/overview")]
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations.iter().map(|o| {
let organizations_json: Vec<Value> = organizations.iter()
.map(|o| {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org
}).collect();
})
.collect();
let text = AdminTemplateData::organizations(organizations_json).render()?;
Ok(Html(text))
}
#[post("/organizations/<uuid>/delete")]
fn delete_organization(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&uuid, &conn).map_res("Organization doesn't exist")?;
org.delete(&conn)
}
#[derive(Deserialize)]
struct WebVaultVersion {
version: String,
@@ -391,77 +449,110 @@ struct GitCommit {
}
fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
use reqwest::{blocking::Client, header::USER_AGENT};
use std::time::Duration;
let github_api = Client::builder().build()?;
Ok(
github_api.get(url)
Ok(github_api
.get(url)
.timeout(Duration::from_secs(10))
.header(USER_AGENT, "Bitwarden_RS")
.send()?
.error_for_status()?
.json::<T>()?
)
.json::<T>()?)
}
fn has_http_access() -> bool {
let http_access = Client::builder().build().unwrap();
match http_access
.head("https://github.com/dani-garcia/bitwarden_rs")
.timeout(Duration::from_secs(10))
.header(USER_AGENT, "Bitwarden_RS")
.send()
{
Ok(r) => r.status().is_success(),
_ => false,
}
}
#[get("/diagnostics")]
fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
use std::net::ToSocketAddrs;
use chrono::prelude::*;
use crate::util::read_file_string;
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let vault_version_path = format!("{}/{}", CONFIG.web_vault_folder(), "version.json");
let vault_version_str = read_file_string(&vault_version_path)?;
let web_vault_version: WebVaultVersion = serde_json::from_str(&vault_version_str)?;
let github_ips = ("github.com", 0).to_socket_addrs().map(|mut i| i.next());
let (dns_resolved, dns_ok) = match github_ips {
Ok(Some(a)) => (a.ip().to_string(), true),
_ => ("Could not resolve domain name.".to_string(), false),
// Execute some environment checks
let running_within_docker = std::path::Path::new("/.dockerenv").exists() || std::path::Path::new("/run/.containerenv").exists();
let has_http_access = has_http_access();
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
|| env::var_os("HTTPS_PROXY").is_some()
|| env::var_os("https_proxy").is_some();
// Check if we are able to resolve DNS entries
let dns_resolved = match ("github.com", 0).to_socket_addrs().map(|mut i| i.next()) {
Ok(Some(a)) => a.ip().to_string(),
_ => "Could not resolve domain name.".to_string(),
};
// If the DNS Check failed, do not even attempt to check for new versions since we were not able to resolve github.com
let (latest_release, latest_commit, latest_web_build) = if dns_ok {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
// TODO: Maybe we need to cache this using a LazyStatic or something. GitHub only allows 60 requests per hour, and we use 3 here already.
let (latest_release, latest_commit, latest_web_build) = if has_http_access {
(
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bitwarden_rs/releases/latest") {
Ok(r) => r.tag_name,
_ => "-".to_string()
_ => "-".to_string(),
},
match get_github_api::<GitCommit>("https://api.github.com/repos/dani-garcia/bitwarden_rs/commits/master") {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
}
_ => "-".to_string(),
},
_ => "-".to_string()
},
// Do not fetch the web-vault version when running within Docker.
// The web-vault version is embedded within the container itself, and should not be updated manually.
if running_within_docker {
"-".to_string()
} else {
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
Ok(r) => r.tag_name.trim_start_matches('v').to_string(),
_ => "-".to_string()
_ => "-".to_string(),
}
},
)
} else {
("-".to_string(), "-".to_string(), "-".to_string())
};
// Run the date check as the last item right before filling the json.
// This should ensure that the time difference between the browser and the server is as minimal as possible.
let dt = Utc::now();
let server_time = dt.format("%Y-%m-%d %H:%M:%S UTC").to_string();
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
"server_time": server_time,
"web_vault_version": web_vault_version.version,
"latest_release": latest_release,
"latest_commit": latest_commit,
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"has_http_access": has_http_access,
"uses_proxy": uses_proxy,
"db_type": *DB_TYPE,
"admin_url": format!("{}/diagnostics", admin_url(Referer(None))),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the date/time check as the last item to minimize the difference
});
let text = AdminTemplateData::diagnostics(diagnostics_json).render()?;
Ok(Html(text))
}
#[get("/diagnostics/config")]
fn get_diagnostics_config(_token: AdminToken) -> JsonResult {
let support_json = CONFIG.get_support_json();
Ok(Json(support_json))
}
#[post("/config", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner();


@@ -91,7 +91,9 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
let collections = Collection::find_by_user_uuid(&headers.user.uuid, &conn);
let collections_json: Vec<Value> = collections.iter().map(Collection::to_json).collect();
let collections_json: Vec<Value> = collections.iter()
.map(|c| c.to_json_details(&headers.user.uuid, &conn))
.collect();
let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
@@ -225,6 +227,17 @@ fn post_ciphers_admin(data: JsonUpcase<ShareCipherData>, headers: Headers, conn:
fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
let mut data: ShareCipherData = data.into_inner().data;
// Check if there are one or more collections selected when this cipher is part of an organization.
// Err if this is not the case, before creating an empty cipher.
if data.Cipher.OrganizationId.is_some() && data.CollectionIds.is_empty() {
err!("You must select at least one collection.");
}
// This check is usually only needed in update_cipher_from_data(), but we
// need it here as well to avoid creating an empty cipher in the call to
// cipher.save() below.
enforce_personal_ownership_policy(&data.Cipher, &headers, &conn)?;
let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
cipher.user_uuid = Some(headers.user.uuid.clone());
cipher.save(&conn)?;
@@ -251,6 +264,38 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}
/// Enforces the personal ownership policy on user-owned ciphers, if applicable.
/// A non-owner/admin user belonging to an org with the personal ownership policy
/// enabled isn't allowed to create new user-owned ciphers or modify existing ones
/// (that were created before the policy was applicable to the user). The user is
/// allowed to delete or share such ciphers to an org, however.
///
/// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
fn enforce_personal_ownership_policy(
data: &CipherData,
headers: &Headers,
conn: &DbConn
) -> EmptyResult {
if data.OrganizationId.is_none() {
let user_uuid = &headers.user.uuid;
for policy in OrgPolicy::find_by_user(user_uuid, conn) {
if policy.enabled && policy.has_type(OrgPolicyType::PersonalOwnership) {
let org_uuid = &policy.org_uuid;
match UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
Some(user) =>
if user.atype < UserOrgType::Admin &&
user.has_status(UserOrgStatus::Confirmed) {
err!("Due to an Enterprise Policy, you are restricted \
from saving items to your personal vault.")
},
None => err!("Error looking up user type"),
}
}
}
}
Ok(())
}
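
For reference, the outcome of this check for a user-owned cipher (`data.OrganizationId == None`), derived directly from the branches above; "Manager" falls under the `< Admin` comparison:

    // policy enabled | membership status | member type   | result
    // ---------------+-------------------+---------------+------------------
    // no             | any               | any           | Ok(())
    // yes            | Invited/Accepted  | any           | Ok(())
    // yes            | Confirmed         | User, Manager | err!(restricted)
    // yes            | Confirmed         | Admin, Owner  | Ok(())
    // (policy present but membership row missing)        | err!(lookup error)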
pub fn update_cipher_from_data(
cipher: &mut Cipher,
data: CipherData,
@@ -260,6 +305,8 @@ pub fn update_cipher_from_data(
nt: &Notify,
ut: UpdateType,
) -> EmptyResult {
enforce_personal_ownership_policy(&data, headers, conn)?;
// Check that the client isn't updating an existing cipher with stale data.
if let Some(dt) = data.LastKnownRevisionDate {
match NaiveDateTime::parse_from_str(&dt, "%+") { // ISO 8601 format
@@ -284,6 +331,11 @@ pub fn update_cipher_from_data(
|| cipher.is_write_accessible_to_user(&headers.user.uuid, &conn)
{
cipher.organization_uuid = Some(org_id);
// After some discussion in PR #1329, `user_uuid = None` was re-added here.
// TODO: Audit/check the whole save/update cipher chain.
// Upstream uses the user_uuid so that a cipher a user added to an org remains viewable/editable by that user,
// even when hide-passwords is configured as their policy.
// Removing the line below would fix that, but we have to check what effect this would have on the rest of the code.
cipher.user_uuid = None;
} else {
err!("You don't have permission to add cipher directly to organization")
@@ -327,6 +379,23 @@ pub fn update_cipher_from_data(
}
}
// Clean up cipher data, like removing the 'Response' key.
// This key is generated somewhere in the JavaScript client, so there is no way for us to fix this at the source.
// Also, upstream only retrieves the keys they actually want to store, and thus skips the 'Response' key.
// We do not mind which data is in it; that keeps our model more flexible when there are upstream changes.
// But we at least know we do not need to store and return this specific key.
fn _clean_cipher_data(mut json_data: Value) -> Value {
if json_data.is_array() {
json_data.as_array_mut()
.unwrap()
.iter_mut()
.for_each(|ref mut f| {
f.as_object_mut().unwrap().remove("Response");
});
};
json_data
}
let type_data_opt = match data.Type {
1 => data.Login,
2 => data.SecureNote,
@@ -335,23 +404,22 @@ pub fn update_cipher_from_data(
_ => err!("Invalid type"),
};
let mut type_data = match type_data_opt {
Some(data) => data,
let type_data = match type_data_opt {
Some(mut data) => {
// Remove the 'Response' key from the base object.
data.as_object_mut().unwrap().remove("Response");
// Remove the 'Response' key from every Uri.
if data["Uris"].is_array() {
data["Uris"] = _clean_cipher_data(data["Uris"].clone());
}
data
},
None => err!("Data missing"),
};
// TODO: ******* Backwards compat start **********
// To remove backwards compatibility, just delete this code,
// and remove the compat code from cipher::to_json
type_data["Name"] = Value::String(data.Name.clone());
type_data["Notes"] = data.Notes.clone().map(Value::String).unwrap_or(Value::Null);
type_data["Fields"] = data.Fields.clone().unwrap_or(Value::Null);
type_data["PasswordHistory"] = data.PasswordHistory.clone().unwrap_or(Value::Null);
// TODO: ******* Backwards compat end **********
cipher.name = data.Name;
cipher.notes = data.Notes;
cipher.fields = data.Fields.map(|f| f.to_string());
cipher.fields = data.Fields.map(|f| _clean_cipher_data(f).to_string() );
cipher.data = type_data.to_string();
cipher.password_history = data.PasswordHistory.map(|f| f.to_string());
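
A quick illustration of what the `Response` cleanup above does to the type data and custom fields (input values are made up):

    // Input:  [{"Uri": "https://example.com", "Match": null, "Response": null}]
    // Output: [{"Uri": "https://example.com", "Match": null}]
    let uris = json!([{ "Uri": "https://example.com", "Match": null, "Response": null }]);
    let cleaned = _clean_cipher_data(uris);
    assert_eq!(cleaned, json!([{ "Uri": "https://example.com", "Match": null }]));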
@@ -928,18 +996,18 @@ fn delete_cipher_selected_put_admin(data: JsonUpcase<Value>, headers: Headers, c
}
#[put("/ciphers/<uuid>/restore")]
fn restore_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
fn restore_cipher_put(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
}
#[put("/ciphers/<uuid>/restore-admin")]
fn restore_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
fn restore_cipher_put_admin(uuid: String, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_cipher_by_uuid(&uuid, &headers, &conn, &nt)
}
#[put("/ciphers/restore", data = "<data>")]
fn restore_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_restore_multiple_ciphers(data, headers, conn, nt)
fn restore_cipher_selected(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
_restore_multiple_ciphers(data, &headers, &conn, &nt)
}
#[derive(Deserialize)]
@@ -1025,7 +1093,6 @@ fn delete_all(
Some(user_org) => {
if user_org.atype == UserOrgType::Owner {
Cipher::delete_all_by_organization(&org_data.org_id, &conn)?;
Collection::delete_all_by_organization(&org_data.org_id, &conn)?;
nt.send_user_update(UpdateType::Vault, &user);
Ok(())
} else {
@@ -1095,7 +1162,7 @@ fn _delete_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbC
Ok(())
}
fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> EmptyResult {
fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
let mut cipher = match Cipher::find_by_uuid(&uuid, &conn) {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
@@ -1109,10 +1176,10 @@ fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, nt: &No
cipher.save(&conn)?;
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
Ok(())
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, &conn)))
}
fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: &Headers, conn: &DbConn, nt: &Notify) -> JsonResult {
let data: Value = data.into_inner().data;
let uuids = match data.get("Ids") {
@@ -1123,13 +1190,19 @@ fn _restore_multiple_ciphers(data: JsonUpcase<Value>, headers: Headers, conn: Db
None => err!("Request missing ids field"),
};
let mut ciphers: Vec<Value> = Vec::new();
for uuid in uuids {
if let error @ Err(_) = _restore_cipher_by_uuid(uuid, &headers, &conn, &nt) {
return error;
};
match _restore_cipher_by_uuid(uuid, headers, conn, nt) {
Ok(json) => ciphers.push(json.into_inner()),
err => return err
}
}
Ok(())
Ok(Json(json!({
"Data": ciphers,
"Object": "list",
"ContinuationToken": null
})))
}
fn _delete_cipher_attachment_by_id(

View File

@@ -172,7 +172,7 @@ fn hibp_breach(username: String) -> JsonResult {
"Domain": "haveibeenpwned.com",
"BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noopener\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{account}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{account}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>", account=username),
"LogoPath": "bwrs_static/hibp.png",
"PwnCount": 0,
"DataClasses": [

View File

@@ -47,7 +47,10 @@ pub fn routes() -> Vec<Route> {
list_policies_token,
get_policy,
put_policy,
get_organization_tax,
get_plans,
get_plans_tax_rates,
import,
]
}
@@ -966,7 +969,7 @@ fn list_policies_token(org_id: String, token: String, conn: DbConn) -> JsonResul
fn get_policy(org_id: String, pol_type: i32, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
let pol_type_enum = match OrgPolicyType::from_i32(pol_type) {
Some(pt) => pt,
None => err!("Invalid policy type"),
None => err!("Invalid or unsupported policy type"),
};
let policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type, &conn) {
@@ -1006,6 +1009,13 @@ fn put_policy(org_id: String, pol_type: i32, data: Json<PolicyData>, _headers: A
Ok(Json(policy.to_json()))
}
#[allow(unused_variables)]
#[get("/organizations/<org_id>/tax")]
fn get_organization_tax(org_id: String, _headers: Headers, _conn: DbConn) -> EmptyResult {
// Prevent a 404 error, which also causes Javascript errors.
err!("Only allowed when not self hosted.")
}
#[get("/plans")]
fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
Ok(Json(json!({
@@ -1057,3 +1067,110 @@ fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
"ContinuationToken": null
})))
}
#[get("/plans/sales-tax-rates")]
fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> JsonResult {
// Prevent a 404 error, which also causes Javascript errors.
Ok(Json(json!({
"Object": "list",
"Data": [],
"ContinuationToken": null
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportGroupData {
Name: String, // "GroupName"
ExternalId: String, // "cn=GroupName,ou=Groups,dc=example,dc=com"
Users: Vec<String>, // ["uid=user,ou=People,dc=example,dc=com"]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportUserData {
Email: String, // "user@maildomain.net"
ExternalId: String, // "uid=user,ou=People,dc=example,dc=com"
Deleted: bool,
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct OrgImportData {
Groups: Vec<OrgImportGroupData>,
OverwriteExisting: bool,
Users: Vec<OrgImportUserData>,
}
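
Deserialized, the body the official directory connector posts to the import endpoint would look roughly like this (an illustrative LDAP-flavoured example matching the field comments above, not captured traffic):

    json!({
        "OverwriteExisting": false,
        "Groups": [{
            "Name": "GroupName",
            "ExternalId": "cn=GroupName,ou=Groups,dc=example,dc=com",
            "Users": ["uid=user,ou=People,dc=example,dc=com"]
        }],
        "Users": [{
            "Email": "user@maildomain.net",
            "ExternalId": "uid=user,ou=People,dc=example,dc=com",
            "Deleted": false
        }]
    })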
#[post("/organizations/<org_id>/import", data = "<data>")]
fn import(org_id: String, data: JsonUpcase<OrgImportData>, headers: Headers, conn: DbConn) -> EmptyResult {
let data = data.into_inner().data;
// TODO: Currently we aren't storing the externalId's anywhere, so we also don't have a way
// to differentiate between auto-imported users and manually added ones.
// This means that this endpoint can end up removing users that were added manually by an admin,
// as opposed to upstream which only removes auto-imported users.
// The user needs to be an admin or owner to use the Directory Connector
match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
Some(user_org) if user_org.atype >= UserOrgType::Admin => { /* Okay, nothing to do */ }
Some(_) => err!("User has insufficient permissions to use Directory Connector"),
None => err!("User not part of organization"),
};
for user_data in &data.Users {
if user_data.Deleted {
// If user is marked for deletion and it exists, delete it
if let Some(user_org) = UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn) {
user_org.delete(&conn)?;
}
// If the user is not part of the organization yet, but an account with this email exists
} else if UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &conn).is_none() {
if let Some(user) = User::find_by_mail(&user_data.Email, &conn) {
let user_org_status = if CONFIG.mail_enabled() {
UserOrgStatus::Invited as i32
} else {
UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
};
let mut new_org_user = UserOrganization::new(user.uuid.clone(), org_id.clone());
new_org_user.access_all = false;
new_org_user.atype = UserOrgType::User as i32;
new_org_user.status = user_org_status;
new_org_user.save(&conn)?;
if CONFIG.mail_enabled() {
let org_name = match Organization::find_by_uuid(&org_id, &conn) {
Some(org) => org.name,
None => err!("Error looking up organization"),
};
mail::send_invite(
&user_data.Email,
&user.uuid,
Some(org_id.clone()),
Some(new_org_user.uuid),
&org_name,
Some(headers.user.email.clone()),
)?;
}
}
}
}
// If this flag is enabled, any user that isn't provided in the Users list will be removed (by default they will be kept unless they have Deleted == true)
if data.OverwriteExisting {
for user_org in UserOrganization::find_by_org_and_type(&org_id, UserOrgType::User as i32, &conn) {
if let Some(user_email) = User::find_by_uuid(&user_org.user_uuid, &conn).map(|u| u.email) {
if !data.Users.iter().any(|u| u.Email == user_email) {
user_org.delete(&conn)?;
}
}
}
}
Ok(())
}

View File

@@ -19,13 +19,12 @@ static SHOW_WEBSOCKETS_MSG: AtomicBool = AtomicBool::new(true);
#[get("/hub")]
fn websockets_err() -> EmptyResult {
if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_and_swap(true, false, Ordering::Relaxed) {
err!(
"###########################################################
if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok() {
err!("
###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work.
Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false.
###########################################################################################"
)
###########################################################################################\n")
} else {
Err(Error::empty())
}
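
`compare_and_swap` was deprecated in Rust 1.50; `compare_exchange` takes separate success/failure orderings and returns a `Result` instead of the previous value, hence the added `.is_ok()`. A minimal equivalence sketch:

    use std::sync::atomic::{AtomicBool, Ordering};

    let flag = AtomicBool::new(true);
    // Succeeds (Ok) only when the current value equals `true`; the value is then swapped to `false`.
    assert!(flag.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_ok());
    // A second attempt fails (Err) because the flag is already `false`, so the message prints only once.
    assert!(flag.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed).is_err());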

View File

@@ -330,9 +330,9 @@ pub struct OrgHeaders {
pub org_id: String,
}
// org_id is usually the second param ("/organizations/<org_id>")
// But there are cases where it is located in a query value.
// First check the param, if this is not a valid uuid, we will try the query value.
// org_id is usually the second path param ("/organizations/<org_id>"),
// but there are cases where it is a query value.
// First check the path, if this is not a valid uuid, try the query values.
fn get_org_id(request: &Request) -> Option<String> {
if let Some(Ok(org_id)) = request.get_param::<String>(1) {
if uuid::Uuid::parse_str(&org_id).is_ok() {
@@ -439,9 +439,9 @@ impl Into<Headers> for AdminHeaders {
}
}
// col_id is usually the forth param ("/organizations/<org_id>/collections/<col_id>")
// But there cloud be cases where it is located in a query value.
// First check the param, if this is not a valid uuid, we will try the query value.
// col_id is usually the fourth path param ("/organizations/<org_id>/collections/<col_id>"),
// but there could be cases where it is a query value.
// First check the path, if this is not a valid uuid, try the query values.
fn get_col_id(request: &Request) -> Option<String> {
if let Some(Ok(col_id)) = request.get_param::<String>(3) {
if uuid::Uuid::parse_str(&col_id).is_ok() {
@@ -484,7 +484,7 @@ impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
_ => err_handler!("Error getting DB"),
};
if !headers.org_user.access_all {
if !headers.org_user.has_full_access() {
match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn) {
Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"),

View File

@@ -2,6 +2,7 @@ use std::process::exit;
use std::sync::RwLock;
use once_cell::sync::Lazy;
use regex::Regex;
use reqwest::Url;
use crate::{
@@ -22,6 +23,21 @@ pub static CONFIG: Lazy<Config> = Lazy::new(|| {
})
});
static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());
const PRIVACY_CONFIG: &[&str] = &[
"allowed_iframe_ancestors",
"database_url",
"domain_origin",
"domain_path",
"domain",
"helo_name",
"org_creation_users",
"signups_domains_whitelist",
"smtp_from",
"smtp_host",
"smtp_username",
];
pub type Pass = String;
macro_rules! make_config {
@@ -52,6 +68,7 @@ macro_rules! make_config {
}
impl ConfigBuilder {
#[allow(clippy::field_reassign_with_default)]
fn from_env() -> Self {
match dotenv::from_path(".env") {
Ok(_) => (),
@@ -196,6 +213,35 @@ macro_rules! make_config {
}, )+
]}, )+ ])
}
pub fn get_support_json(&self) -> serde_json::Value {
let cfg = {
let inner = &self.inner.read().unwrap();
inner.config.clone()
};
json!({ $($(
stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
)+)+ })
}
}
};
// Support string print
( @supportstr $name:ident, $value:expr, Pass, option ) => { $value.as_ref().map(|_| String::from("***")) }; // Optional pass, we map to an Option<String> with "***"
( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { String::from("***") }; // Required pass, we return "***"
( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
if PRIVACY_CONFIG.contains(&stringify!($name)) {
json!($value.as_ref().map(|x| PRIVACY_REGEX.replace_all(&x.to_string(), "${1}*").to_string()))
} else {
json!($value)
}
};
( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
if PRIVACY_CONFIG.contains(&stringify!($name)) {
json!(PRIVACY_REGEX.replace_all(&$value.to_string(), "${1}*").to_string())
} else {
json!($value)
}
};
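
Since `PRIVACY_REGEX` has no capture group, `${1}` in the replacement expands to the empty string, so every word character becomes a single `*` while separators like `.`, `:`, `/` and `@` survive. For the keys listed in `PRIVACY_CONFIG` this yields, for example:

    // "https://bw.example.com" -> "*****://**.*******.***"
    // "admin@example.com"      -> "*****@*******.***"
    let masked = PRIVACY_REGEX.replace_all("https://bw.example.com", "${1}*");
    assert_eq!(masked, "*****://**.*******.***");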
@@ -458,7 +504,6 @@ make_config! {
}
fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
// Validate connection URL is valid and DB feature is enabled
DbConnType::from_url(&cfg.database_url)?;
@@ -472,7 +517,9 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
let dom = cfg.domain.to_lowercase();
if !dom.starts_with("http://") && !dom.starts_with("https://") {
err!("DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'");
err!(
"DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'"
);
}
let whitelist = &cfg.signups_domains_whitelist;
@@ -481,11 +528,11 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
let org_creation_users = cfg.org_creation_users.trim().to_lowercase();
if !(org_creation_users.is_empty() || org_creation_users == "all" || org_creation_users == "none") {
if org_creation_users.split(',').any(|u| !u.contains('@')) {
if !(org_creation_users.is_empty() || org_creation_users == "all" || org_creation_users == "none")
&& org_creation_users.split(',').any(|u| !u.contains('@'))
{
err!("`ORG_CREATION_USERS` contains invalid email addresses");
}
}
if let Some(ref token) = cfg.admin_token {
if token.trim().is_empty() && !cfg.disable_admin_token {
@@ -510,6 +557,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
err!("Both `SMTP_HOST` and `SMTP_FROM` need to be set for email support")
}
if cfg.smtp_host.is_some() && !cfg.smtp_from.contains('@') {
err!("SMTP_FROM does not contain a mandatory @ sign")
}
if cfg.smtp_username.is_some() != cfg.smtp_password.is_some() {
err!("Both `SMTP_USERNAME` and `SMTP_PASSWORD` need to be set to enable email authentication")
}
@@ -529,7 +580,6 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
// Check if the icon blacklist regex is valid
if let Some(ref r) = cfg.icon_blacklist_regex {
use regex::Regex;
let validate_regex = Regex::new(&r);
match validate_regex {
Ok(_) => (),
@@ -577,7 +627,12 @@ impl Config {
validate_config(&config)?;
Ok(Config {
inner: RwLock::new(Inner { templates: load_templates(&config.templates_folder), config, _env, _usr }),
inner: RwLock::new(Inner {
templates: load_templates(&config.templates_folder),
config,
_env,
_usr,
}),
})
}
@@ -650,7 +705,7 @@ impl Config {
/// Tests whether the specified user is allowed to create an organization.
pub fn is_org_creation_allowed(&self, email: &str) -> bool {
let users = self.org_creation_users();
if users == "" || users == "all" {
if users.is_empty() || users == "all" {
true
} else if users == "none" {
false
@@ -704,8 +759,10 @@ impl Config {
let akey_s = data_encoding::BASE64.encode(&akey);
// Save the new value
let mut builder = ConfigBuilder::default();
builder._duo_akey = Some(akey_s.clone());
let builder = ConfigBuilder {
_duo_akey: Some(akey_s.clone()),
..Default::default()
};
self.update_config_partial(builder).ok();
akey_s
@@ -819,14 +876,20 @@ fn js_escape_helper<'reg, 'rc>(
.param(0)
.ok_or_else(|| RenderError::new("Param not found for helper \"js_escape\""))?;
let no_quote = h
.param(1)
.is_some();
let value = param
.value()
.as_str()
.ok_or_else(|| RenderError::new("Param for helper \"js_escape\" is not a String"))?;
let escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
let quoted_value = format!("&quot;{}&quot;", escaped_value);
let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
if ! no_quote {
escaped_value = format!("&quot;{}&quot;", escaped_value);
}
out.write(&quoted_value)?;
out.write(&escaped_value)?;
Ok(())
}

View File

@@ -67,7 +67,7 @@ pub fn generate_token(token_size: u32) -> Result<String, Error> {
// token of fixed width, left-padding with 0 as needed.
use rand::{thread_rng, Rng};
let mut rng = thread_rng();
let number: u64 = rng.gen_range(low, high);
let number: u64 = rng.gen_range(low..high);
let token = format!("{:0size$}", number, size = token_size as usize);
Ok(token)
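
rand 0.8 replaced the two-argument `gen_range(low, high)` with a single range parameter; both forms are half-open, so the upper bound stays exclusive:

    use rand::{thread_rng, Rng};

    let mut rng = thread_rng();
    // rand 0.7: rng.gen_range(0, 1_000_000);
    let number: u64 = rng.gen_range(0..1_000_000); // rand 0.8: range syntax, 1_000_000 excluded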

View File

@@ -83,7 +83,12 @@ impl Cipher {
use crate::util::format_date;
let attachments = Attachment::find_by_cipher(&self.uuid, conn);
let attachments_json: Vec<Value> = attachments.iter().map(|c| c.to_json(host)).collect();
// When there are no attachments use null instead of an empty array
let attachments_json = if attachments.is_empty() {
Value::Null
} else {
attachments.iter().map(|c| c.to_json(host)).collect()
};
let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let password_history_json = self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
@@ -97,28 +102,31 @@ impl Cipher {
},
};
// Get the data or a default empty value to avoid issues with the mobile apps
let mut data_json: Value = serde_json::from_str(&self.data).unwrap_or_else(|_| json!({
"Fields":null,
"Name": self.name,
"Notes":null,
"Password":null,
"PasswordHistory":null,
"PasswordRevisionDate":null,
"Response":null,
"Totp":null,
"Uris":null,
"Username":null
}));
// Get the type_data or a default to an empty json object '{}'.
// If not passing an empty object, mobile clients will crash.
let mut type_data_json: Value = serde_json::from_str(&self.data).unwrap_or(json!({}));
// TODO: ******* Backwards compat start **********
// To remove backwards compatibility, just remove this entire section
// and remove the compat code from ciphers::update_cipher_from_data
if self.atype == 1 && data_json["Uris"].is_array() {
let uri = data_json["Uris"][0]["Uri"].clone();
data_json["Uri"] = uri;
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// Set the first element of the Uris array as Uri, this is needed by several (mobile) clients.
if self.atype == 1 {
if type_data_json["Uris"].is_array() {
let uri = type_data_json["Uris"][0]["Uri"].clone();
type_data_json["Uri"] = uri;
} else {
// Upstream always has an Uri key/value
type_data_json["Uri"] = Value::Null;
}
// TODO: ******* Backwards compat end **********
}
// Clone the type_data and add some default value.
let mut data_json = type_data_json.clone();
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// data_json should always contain the following keys with every atype
data_json["Fields"] = json!(fields_json);
data_json["Name"] = json!(self.name);
data_json["Notes"] = json!(self.notes);
data_json["PasswordHistory"] = json!(password_history_json);
// There are three types of cipher response models in upstream
// Bitwarden: "cipherMini", "cipher", and "cipherDetails" (in order
@@ -137,6 +145,8 @@ impl Cipher {
"Favorite": self.is_favorite(&user_uuid, conn),
"OrganizationId": self.organization_uuid,
"Attachments": attachments_json,
// We have UseTotp set to true by default within the Organization model.
// This variable together with UsersGetPremium is used to show or hide the TOTP counter.
"OrganizationUseTotp": true,
// This field is specific to the cipherDetails type.
@@ -155,6 +165,12 @@ impl Cipher {
"ViewPassword": !hide_passwords,
"PasswordHistory": password_history_json,
// All Cipher types are included by default as null, but only the matching one will be populated
"Login": null,
"SecureNote": null,
"Card": null,
"Identity": null,
});
let key = match self.atype {
@@ -165,7 +181,7 @@ impl Cipher {
_ => panic!("Wrong type"),
};
json_object[key] = data_json;
json_object[key] = type_data_json;
json_object
}
@@ -448,7 +464,10 @@ impl Cipher {
pub fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
.filter(
ciphers::user_uuid.eq(user_uuid)
.and(ciphers::organization_uuid.is_null())
)
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
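
The added filter keeps "owned by the user" from matching ciphers that were shared into an organization; approximately, the generated SQL changes like this:

    // Before: SELECT * FROM ciphers WHERE user_uuid = ?
    // After:  SELECT * FROM ciphers WHERE user_uuid = ? AND organization_uuid IS NULL
    // This is what lets the per-user purge in delete_all() leave org ciphers untouched.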

View File

@@ -49,12 +49,21 @@ impl Collection {
pub fn to_json(&self) -> Value {
json!({
"ExternalId": null, // Not support by us
"Id": self.uuid,
"OrganizationId": self.org_uuid,
"Name": self.name,
"Object": "collection",
})
}
pub fn to_json_details(&self, user_uuid: &str, conn: &DbConn) -> Value {
let mut json_object = self.to_json();
json_object["Object"] = json!("collectionDetails");
json_object["ReadOnly"] = json!(!self.is_writable_by_user(user_uuid, conn));
json_object["HidePasswords"] = json!(self.hide_passwords_for_user(user_uuid, conn));
json_object
}
}
use crate::db::DbConn;
@@ -236,6 +245,28 @@ impl Collection {
}
}
}
pub fn hide_passwords_for_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(&user_uuid, &self.org_uuid, &conn) {
None => true, // Not in Org
Some(user_org) => {
if user_org.has_full_access() {
return false;
}
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(user_uuid))
.filter(users_collections::hide_passwords.eq(true))
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
}
}
}
}
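
Summarizing the branches above, a collection's passwords are hidden for a user exactly when one of these holds (derived from the function, not new behaviour):

    // hide_passwords_for_user() outcomes:
    //   not a member of the owning org                                 -> true  (hide)
    //   member with full access (access_all, Admin or Owner)           -> false (show)
    //   member whose users_collections row sets hide_passwords = true  -> true  (hide)
    //   member without such a row                                      -> false (show)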
/// Database methods
@@ -364,7 +395,6 @@ impl CollectionUser {
diesel::delete(users_collections::table.filter(
users_collections::user_uuid.eq(user_uuid)
.and(users_collections::collection_uuid.eq(user.collection_uuid))
))
.execute(conn)
.map_res("Error removing user from collections")?;

View File

@@ -26,6 +26,9 @@ pub enum OrgPolicyType {
TwoFactorAuthentication = 0,
MasterPassword = 1,
PasswordGenerator = 2,
// SingleOrg = 3, // Not currently supported.
// RequireSso = 4, // Not currently supported.
PersonalOwnership = 5,
}
/// Local methods
@@ -40,6 +43,10 @@ impl OrgPolicy {
}
}
pub fn has_type(&self, policy_type: OrgPolicyType) -> bool {
self.atype == policy_type as i32
}
pub fn to_json(&self) -> Value {
let data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
json!({

View File

@@ -147,9 +147,10 @@ impl Organization {
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
"Identifier": null, // not supported by us
"Name": self.name,
"Seats": 10,
"MaxCollections": 10,
"Seats": 10, // The value doesn't matter, we don't check server-side
"MaxCollections": 10, // The value doesn't matter, we don't check server-side
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
"Use2fa": true,
"UseDirectory": false,
@@ -157,6 +158,9 @@ impl Organization {
"UseGroups": false,
"UseTotp": true,
"UsePolicies": true,
"UseSso": false, // We do not support SSO
"SelfHost": true,
"UseApi": false, // not supported by us
"BusinessName": null,
"BusinessAddress1": null,
@@ -274,9 +278,10 @@ impl UserOrganization {
json!({
"Id": self.org_uuid,
"Identifier": null, // not supported by us
"Name": org.name,
"Seats": 10,
"MaxCollections": 10,
"Seats": 10, // The value doesn't matter, we don't check server-side
"MaxCollections": 10, // The value doesn't matter, we don't check server-side
"UsersGetPremium": true,
"Use2fa": true,
@@ -285,8 +290,30 @@ impl UserOrganization {
"UseGroups": false,
"UseTotp": true,
"UsePolicies": true,
"UseApi": false,
"UseApi": false, // not supported by us
"SelfHost": true,
"SsoBound": false, // We do not support SSO
"UseSso": false, // We do not support SSO
// TODO: Add support for Business Portal
// Upstream is moving Policies and SSO management outside of the web-vault to /portal
// For now they still have that code also in the web-vault, but they will remove it at some point.
// https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
"UseBusinessPortal": false, // Disable BusinessPortal Button
// TODO: Add support for Custom User Roles
// See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
// "Permissions": {
// "AccessBusinessPortal": false,
// "AccessEventLogs": false,
// "AccessImportExport": false,
// "AccessReports": false,
// "ManageAllCollections": false,
// "ManageAssignedCollections": false,
// "ManageGroups": false,
// "ManagePolicies": false,
// "ManageSso": false,
// "ManageUsers": false
// },
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@@ -412,11 +439,25 @@ impl UserOrganization {
Ok(())
}
pub fn has_status(self, status: UserOrgStatus) -> bool {
pub fn find_by_email_and_org(email: &str, org_id: &str, conn: &DbConn) -> Option<UserOrganization> {
if let Some(user) = super::User::find_by_mail(email, conn) {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user.uuid, org_id, &conn) {
return Some(user_org);
}
}
None
}
pub fn has_status(&self, status: UserOrgStatus) -> bool {
self.status == status as i32
}
pub fn has_full_access(self) -> bool {
pub fn has_type(&self, user_type: UserOrgType) -> bool {
self.atype == user_type as i32
}
pub fn has_full_access(&self) -> bool {
(self.access_all || self.atype >= UserOrgType::Admin) &&
self.has_status(UserOrgStatus::Confirmed)
}

View File

@@ -302,30 +302,32 @@ fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) ->
let address = format!("{}@{}", address_split[1], domain_puny);
let html = SinglePart::base64()
let html = SinglePart::builder()
// We force Base64 encoding because in the past we had issues with different encodings.
.header(header::ContentTransferEncoding::Base64)
.header(header::ContentType("text/html; charset=utf-8".parse()?))
.body(body_html);
.body(String::from(body_html));
let text = SinglePart::base64()
let text = SinglePart::builder()
// We force Base64 encoding because in the past we had issues with different encodings.
.header(header::ContentTransferEncoding::Base64)
.header(header::ContentType("text/plain; charset=utf-8".parse()?))
.body(body_text);
.body(String::from(body_text));
// The boundary generated by Lettre itself is usually too long according to RFC 822, so we generate one ourselves.
use uuid::Uuid;
let unique_id = Uuid::new_v4().to_simple();
let boundary = format!("_Part_{}_", unique_id);
let alternative = MultiPart::alternative().boundary(boundary).singlepart(text).singlepart(html);
let smtp_from = &CONFIG.smtp_from();
let email = Message::builder()
.message_id(Some(format!("<{}.{}>", unique_id, smtp_from)))
.message_id(Some(format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1] )))
.to(Mailbox::new(None, Address::from_str(&address)?))
.from(Mailbox::new(
Some(CONFIG.smtp_from_name()),
Address::from_str(smtp_from)?,
))
.subject(subject)
.multipart(alternative)?;
.multipart(
MultiPart::alternative()
.singlepart(text)
.singlepart(html)
)?;
match mailer().send(&email) {
Ok(_) => Ok(()),
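
With the new format the Message-ID becomes a fresh UUID at the sender's domain, a conventionally shaped RFC 5322 msg-id (id-left "@" id-right). Illustratively (all values made up):

    // Old: <0f8c3c2a41f54b239c87d1e6a5b40f71.admin@example.com>  (uuid "." full from-address)
    // New: <5f2f7b2a-9c6e-4d1b-8a3c-0e7f6d5b4a39@example.com>    (uuid "@" from-domain only)
    let smtp_from = "admin@example.com"; // illustrative
    let message_id = format!("<{}@{}>", crate::util::get_uuid(), smtp_from.split('@').collect::<Vec<&str>>()[1]);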

View File

@@ -1,12 +1,12 @@
#![forbid(unsafe_code)]
#![cfg_attr(feature = "unstable", feature(ip))]
#![recursion_limit = "256"]
#![recursion_limit = "512"]
extern crate openssl;
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate serde_derive;
extern crate serde;
#[macro_use]
extern crate serde_json;
#[macro_use]

View File

@@ -508,7 +508,8 @@
"disneymoviesanywhere.com",
"go.com",
"disney.com",
"dadt.com"
"dadt.com",
"disneyplus.com"
],
"Excluded": false
},
@@ -885,5 +886,13 @@
"yandex.uz"
],
"Excluded": false
},
{
"Type": 84,
"Domains": [
"sonyentertainmentnetwork.com",
"sony.com"
],
"Excluded": false
}
]

Binary file not shown. (Before: 9.7 KiB | After: 9.2 KiB)

Binary file not shown. (Before: 5.8 KiB | After: 5.3 KiB)

Binary file not shown. (Before: 1.3 KiB | After: 1.3 KiB)

Binary file not shown. (Before: 1.9 KiB | After: 1.8 KiB)

View File

@@ -1,6 +1,6 @@
/*!
* Native JavaScript for Bootstrap v3.0.10 (https://thednp.github.io/bootstrap.native/)
* Copyright 2015-2020 © dnp_theme
* Native JavaScript for Bootstrap v3.0.15 (https://thednp.github.io/bootstrap.native/)
* Copyright 2015-2021 © dnp_theme
* Licensed under MIT (https://github.com/thednp/bootstrap.native/blob/master/LICENSE)
*/
(function (global, factory) {
@@ -15,10 +15,14 @@
var transitionDuration = 'webkitTransition' in document.head.style ? 'webkitTransitionDuration' : 'transitionDuration';
var transitionProperty = 'webkitTransition' in document.head.style ? 'webkitTransitionProperty' : 'transitionProperty';
function getElementTransitionDuration(element) {
var duration = supportTransition ? parseFloat(getComputedStyle(element)[transitionDuration]) : 0;
duration = typeof duration === 'number' && !isNaN(duration) ? duration * 1000 : 0;
return duration;
var computedStyle = getComputedStyle(element),
property = computedStyle[transitionProperty],
duration = supportTransition && property && property !== 'none'
? parseFloat(computedStyle[transitionDuration]) : 0;
return !isNaN(duration) ? duration * 1000 : 0;
}
function emulateTransitionEnd(element,handler){
@@ -35,9 +39,15 @@
return selector instanceof Element ? selector : lookUp.querySelector(selector);
}
function bootstrapCustomEvent(eventName, componentName, related) {
function bootstrapCustomEvent(eventName, componentName, eventProperties) {
var OriginalCustomEvent = new CustomEvent( eventName + '.bs.' + componentName, {cancelable: true});
OriginalCustomEvent.relatedTarget = related;
if (typeof eventProperties !== 'undefined') {
Object.keys(eventProperties).forEach(function (key) {
Object.defineProperty(OriginalCustomEvent, key, {
value: eventProperties[key]
});
});
}
return OriginalCustomEvent;
}
@@ -352,7 +362,7 @@
};
self.slideTo = function (next) {
if (vars.isSliding) { return; }
var activeItem = self.getActiveIndex(), orientation;
var activeItem = self.getActiveIndex(), orientation, eventProperties;
if ( activeItem === next ) {
return;
} else if ( (activeItem < next ) || (activeItem === 0 && next === slides.length -1 ) ) {
@@ -363,8 +373,9 @@
if ( next < 0 ) { next = slides.length - 1; }
else if ( next >= slides.length ){ next = 0; }
orientation = vars.direction === 'left' ? 'next' : 'prev';
slideCustomEvent = bootstrapCustomEvent('slide', 'carousel', slides[next]);
slidCustomEvent = bootstrapCustomEvent('slid', 'carousel', slides[next]);
eventProperties = { relatedTarget: slides[next], direction: vars.direction, from: activeItem, to: next };
slideCustomEvent = bootstrapCustomEvent('slide', 'carousel', eventProperties);
slidCustomEvent = bootstrapCustomEvent('slid', 'carousel', eventProperties);
dispatchCustomEvent.call(element, slideCustomEvent);
if (slideCustomEvent.defaultPrevented) { return; }
vars.index = next;
@@ -615,7 +626,7 @@
}
}
self.show = function () {
showCustomEvent = bootstrapCustomEvent('show', 'dropdown', relatedTarget);
showCustomEvent = bootstrapCustomEvent('show', 'dropdown', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(parent, showCustomEvent);
if ( showCustomEvent.defaultPrevented ) { return; }
menu.classList.add('show');
@@ -626,12 +637,12 @@
setTimeout(function () {
setFocus( menu.getElementsByTagName('INPUT')[0] || element );
toggleDismiss();
shownCustomEvent = bootstrapCustomEvent( 'shown', 'dropdown', relatedTarget);
shownCustomEvent = bootstrapCustomEvent('shown', 'dropdown', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(parent, shownCustomEvent);
},1);
};
self.hide = function () {
hideCustomEvent = bootstrapCustomEvent('hide', 'dropdown', relatedTarget);
hideCustomEvent = bootstrapCustomEvent('hide', 'dropdown', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(parent, hideCustomEvent);
if ( hideCustomEvent.defaultPrevented ) { return; }
menu.classList.remove('show');
@@ -643,7 +654,7 @@
setTimeout(function () {
element.Dropdown && element.addEventListener('click',clickHandler,false);
},1);
hiddenCustomEvent = bootstrapCustomEvent('hidden', 'dropdown', relatedTarget);
hiddenCustomEvent = bootstrapCustomEvent('hidden', 'dropdown', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(parent, hiddenCustomEvent);
};
self.toggle = function () {
@@ -749,7 +760,7 @@
setFocus(modal);
modal.isAnimating = false;
toggleEvents(1);
shownCustomEvent = bootstrapCustomEvent('shown', 'modal', relatedTarget);
shownCustomEvent = bootstrapCustomEvent('shown', 'modal', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(modal, shownCustomEvent);
}
function triggerHide(force) {
@@ -804,7 +815,7 @@
};
self.show = function () {
if (modal.classList.contains('show') && !!modal.isAnimating ) {return}
showCustomEvent = bootstrapCustomEvent('show', 'modal', relatedTarget);
showCustomEvent = bootstrapCustomEvent('show', 'modal', { relatedTarget: relatedTarget });
dispatchCustomEvent.call(modal, showCustomEvent);
if ( showCustomEvent.defaultPrevented ) { return; }
modal.isAnimating = true;
@@ -1193,7 +1204,7 @@
if (dropLink && !dropLink.classList.contains('active') ) {
dropLink.classList.add('active');
}
dispatchCustomEvent.call(element, bootstrapCustomEvent( 'activate', 'scrollspy', vars.items[index]));
dispatchCustomEvent.call(element, bootstrapCustomEvent( 'activate', 'scrollspy', { relatedTarget: vars.items[index] }));
} else if ( isActive && !inside ) {
item.classList.remove('active');
if (dropLink && dropLink.classList.contains('active') && !item.parentNode.getElementsByClassName('active').length ) {
@@ -1278,7 +1289,7 @@
} else {
tabs.isAnimating = false;
}
shownCustomEvent = bootstrapCustomEvent('shown', 'tab', activeTab);
shownCustomEvent = bootstrapCustomEvent('shown', 'tab', { relatedTarget: activeTab });
dispatchCustomEvent.call(next, shownCustomEvent);
}
function triggerHide() {
@@ -1287,8 +1298,8 @@
nextContent.style.float = 'left';
containerHeight = activeContent.scrollHeight;
}
showCustomEvent = bootstrapCustomEvent('show', 'tab', activeTab);
hiddenCustomEvent = bootstrapCustomEvent('hidden', 'tab', next);
showCustomEvent = bootstrapCustomEvent('show', 'tab', { relatedTarget: activeTab });
hiddenCustomEvent = bootstrapCustomEvent('hidden', 'tab', { relatedTarget: next });
dispatchCustomEvent.call(next, showCustomEvent);
if ( showCustomEvent.defaultPrevented ) { return; }
nextContent.classList.add('active');
@@ -1331,7 +1342,7 @@
nextContent = queryElement(next.getAttribute('href'));
activeTab = getActiveTab();
activeContent = getActiveContent();
hideCustomEvent = bootstrapCustomEvent( 'hide', 'tab', next);
hideCustomEvent = bootstrapCustomEvent( 'hide', 'tab', { relatedTarget: next });
dispatchCustomEvent.call(activeTab, hideCustomEvent);
if (hideCustomEvent.defaultPrevented) { return; }
tabs.isAnimating = true;
@@ -1637,7 +1648,7 @@
}
}
var version = "3.0.10";
var version = "3.0.15";
var index = {
Alert: Alert,

View File

@@ -1,5 +1,5 @@
/*!
* Bootstrap v4.5.2 (https://getbootstrap.com/)
* Bootstrap v4.5.3 (https://getbootstrap.com/)
* Copyright 2011-2020 The Bootstrap Authors
* Copyright 2011-2020 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
@@ -216,6 +216,7 @@ caption {
th {
text-align: inherit;
text-align: -webkit-match-parent;
}
label {
@@ -3750,6 +3751,8 @@ input[type="button"].btn-block {
display: block;
min-height: 1.5rem;
padding-left: 1.5rem;
-webkit-print-color-adjust: exact;
color-adjust: exact;
}
.custom-control-inline {
@@ -5289,6 +5292,7 @@ a.badge-dark:focus, a.badge-dark.focus {
position: absolute;
top: 0;
right: 0;
z-index: 2;
padding: 0.75rem 1.25rem;
color: inherit;
}
@@ -10163,7 +10167,7 @@ a.text-dark:hover, a.text-dark:focus {
.text-break {
word-break: break-word !important;
overflow-wrap: break-word !important;
word-wrap: break-word !important;
}
.text-reset {
@@ -10256,3 +10260,4 @@ a.text-dark:hover, a.text-dark:focus {
border-color: #dee2e6;
}
}
/*# sourceMappingURL=bootstrap.css.map */

View File

@@ -4,12 +4,13 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs4/dt-1.10.22
* https://datatables.net/download/#bs4/dt-1.10.23
*
* Included libraries:
* DataTables 1.10.22
* DataTables 1.10.23
*/
@charset "UTF-8";
table.dataTable {
clear: both;
margin-top: 6px !important;
@@ -114,7 +115,7 @@ table.dataTable > thead .sorting_desc:before,
table.dataTable > thead .sorting_asc_disabled:before,
table.dataTable > thead .sorting_desc_disabled:before {
right: 1em;
content: "\2191";
content: "";
}
table.dataTable > thead .sorting:after,
table.dataTable > thead .sorting_asc:after,
@@ -122,7 +123,7 @@ table.dataTable > thead .sorting_desc:after,
table.dataTable > thead .sorting_asc_disabled:after,
table.dataTable > thead .sorting_desc_disabled:after {
right: 0.5em;
content: "\2193";
content: "";
}
table.dataTable > thead .sorting_asc:before,
table.dataTable > thead .sorting_desc:after {
@@ -213,10 +214,10 @@ div.dataTables_scrollHead table.table-bordered {
div.table-responsive > div.dataTables_wrapper > div.row {
margin: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:first-child {
div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:first-child {
padding-left: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:last-child {
div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:last-child {
padding-right: 0;
}

View File

@@ -4,20 +4,20 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs4/dt-1.10.22
* https://datatables.net/download/#bs4/dt-1.10.23
*
* Included libraries:
* DataTables 1.10.22
* DataTables 1.10.23
*/
/*! DataTables 1.10.22
/*! DataTables 1.10.23
* ©2008-2020 SpryMedia Ltd - datatables.net/license
*/
/**
* @summary DataTables
* @description Paginate, search and order HTML tables
* @version 1.10.22
* @version 1.10.23
* @file jquery.dataTables.js
* @author SpryMedia Ltd
* @contact www.datatables.net
@@ -2775,7 +2775,7 @@
for ( var i=0, iLen=a.length-1 ; i<iLen ; i++ )
{
// Protect against prototype pollution
if (a[i] === '__proto__') {
if (a[i] === '__proto__' || a[i] === 'constructor') {
throw new Error('Cannot set prototype values');
}
@@ -3157,7 +3157,7 @@
cells.push( nTd );
// Need to create the HTML if new, or if a rendering function is defined
if ( create || ((!nTrIn || oCol.mRender || oCol.mData !== i) &&
if ( create || ((oCol.mRender || oCol.mData !== i) &&
(!$.isPlainObject(oCol.mData) || oCol.mData._ !== i+'.display')
)) {
nTd.innerHTML = _fnGetCellData( oSettings, iRow, i, 'display' );
@@ -3189,10 +3189,6 @@
_fnCallbackFire( oSettings, 'aoRowCreatedCallback', null, [nTr, rowData, iRow, cells] );
}
// Remove once webkit bug 131819 and Chromium bug 365619 have been resolved
// and deployed
row.nTr.setAttribute( 'role', 'row' );
}
@@ -9546,7 +9542,7 @@
* @type string
* @default Version number
*/
DataTable.version = "1.10.22";
DataTable.version = "1.10.23";
/**
* Private data store, containing all of the settings objects that are
@@ -13970,7 +13966,7 @@
*
* @type string
*/
build:"bs4/dt-1.10.22",
build:"bs4/dt-1.10.23",
/**

View File

@@ -4,6 +4,7 @@
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="robots" content="noindex,nofollow" />
<link rel="icon" type="image/png" href="{{urlpath}}/bwrs_static/shield-white.png">
<title>Bitwarden_rs Admin Panel</title>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/bootstrap.css" />
<style>
@@ -73,7 +74,7 @@
<body class="bg-light">
<nav class="navbar navbar-expand-md navbar-dark bg-dark mb-4 shadow fixed-top">
<div class="container">
<div class="container-xl">
<a class="navbar-brand" href="{{urlpath}}/admin"><img class="pr-1" src="{{urlpath}}/bwrs_static/shield-white.png">Bitwarden_rs Admin</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse"
aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation">
@@ -96,7 +97,7 @@
</li>
{{/if}}
<li class="nav-item">
<a class="nav-link" href="{{urlpath}}/">Vault</a>
<a class="nav-link" href="{{urlpath}}/" target="_blank" rel="noreferrer">Vault</a>
</li>
</ul>

View File

@@ -1,4 +1,4 @@
<main class="container">
<main class="container-xl">
<div id="diagnostics-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-2">Diagnostics</h6>
@@ -15,7 +15,7 @@
<span id="server-installed">{{version}}</span>
</dd>
<dt class="col-sm-5">Server Latest
<span class="badge badge-danger d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
<span class="badge badge-secondary d-none" id="server-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="server-latest">{{diagnostics.latest_release}}<span id="server-latest-commit" class="d-none">-{{diagnostics.latest_commit}}</span></span>
@@ -27,12 +27,14 @@
<dd class="col-sm-7">
<span id="web-installed">{{diagnostics.web_vault_version}}</span>
</dd>
{{#unless diagnostics.running_within_docker}}
<dt class="col-sm-5">Web Latest
<span class="badge badge-danger d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
<span class="badge badge-secondary d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
<dd class="col-sm-7">
<span id="web-latest">{{diagnostics.latest_web_build}}</span>
</dd>
{{/unless}}
</dl>
</div>
</div>
@@ -41,6 +43,40 @@
<div class="row">
<div class="col-md">
<dl class="row">
<dt class="col-sm-5">Running within Docker</dt>
<dd class="col-sm-7">
{{#if diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.running_within_docker}}
<span id="running-docker" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Uses a proxy</dt>
<dd class="col-sm-7">
{{#if diagnostics.uses_proxy}}
<span id="running-docker" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.uses_proxy}}
<span id="running-docker" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">Internet access
{{#if diagnostics.has_http_access}}
<span class="badge badge-success" id="internet-success" title="We have internet access!">Ok</span>
{{/if}}
{{#unless diagnostics.has_http_access}}
<span class="badge badge-danger" id="internet-warning" title="There seems to be no internet access. Please fix.">Error</span>
{{/unless}}
</dt>
<dd class="col-sm-7">
{{#if diagnostics.has_http_access}}
<span id="running-docker" class="d-block"><b>Yes</b></span>
{{/if}}
{{#unless diagnostics.has_http_access}}
<span id="running-docker" class="d-block"><b>No</b></span>
{{/unless}}
</dd>
<dt class="col-sm-5">DNS (github.com)
<span class="badge badge-success d-none" id="dns-success" title="DNS Resolving works!">Ok</span>
<span class="badge badge-danger d-none" id="dns-warning" title="DNS Resolving failed. Please fix.">Error</span>
@@ -57,6 +93,46 @@
<span id="time-server" class="d-block"><b>Server:</b> <span id="time-server-string">{{diagnostics.server_time}}</span></span>
<span id="time-browser" class="d-block"><b>Browser:</b> <span id="time-browser-string"></span></span>
</dd>
<dt class="col-sm-5">Domain configuration
<span class="badge badge-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
<span class="badge badge-danger d-none" id="domain-warning" title="The domain variable does not matches the browsers location.&#013;&#010;The domain variable does not seem to be configured correctly.&#013;&#010;Some features may not work as expected!">No Match</span>
<span class="badge badge-success d-none" id="https-success" title="Configurued to use HTTPS">HTTPS</span>
<span class="badge badge-danger d-none" id="https-warning" title="Not configured to use HTTPS.&#013;&#010;Some features may not work as expected!">No HTTPS</span>
</dt>
<dd class="col-sm-7">
<span id="domain-server" class="d-block"><b>Server:</b> <span id="domain-server-string">{{diagnostics.admin_url}}</span></span>
<span id="domain-browser" class="d-block"><b>Browser:</b> <span id="domain-browser-string"></span></span>
</dd>
</dl>
</div>
</div>
<h3>Support</h3>
<div class="row">
<div class="col-md">
<dl class="row">
<dd class="col-sm-12">
If you need support, please check the following links first before you create a new issue:
<a href="https://bitwardenrs.discourse.group/" target="_blank" rel="noreferrer">Bitwarden_RS Forum</a>
| <a href="https://github.com/dani-garcia/bitwarden_rs/discussions" target="_blank" rel="noreferrer">Github Discussions</a>
</dd>
</dl>
<dl class="row">
<dd class="col-sm-12">
You can use the button below to pre-generate a string which you can copy/paste on the forum or into a new GitHub issue.<br>
We try to hide the most sensitive values from the generated support string by default, but please verify that there is nothing in it which you want to hide!<br>
</dd>
</dl>
<dl class="row">
<dt class="col-sm-3">
<button type="button" id="gen-support" class="btn btn-primary" onclick="generateSupportString(); return false;">Generate Support String</button>
<br><br>
<button type="button" id="copy-support" class="btn btn-info d-none" onclick="copyToClipboard(); return false;">Copy To Clipboard</button>
</dt>
<dd class="col-sm-9">
<pre id="support-string" class="pre-scrollable d-none" style="width: 100%; height: 16em; size: 0.6em; border: 1px solid; padding: 4px;"></pre>
</dd>
</dl>
</div>
</div>
@@ -64,7 +140,13 @@
</main>
<script>
dnsCheck = false;
timeCheck = false;
domainCheck = false;
httpsCheck = false;
(() => {
// ================================
// Date & Time Check
const d = new Date();
const year = d.getUTCFullYear();
const month = String(d.getUTCMonth()+1).padStart(2, '0');
@@ -81,16 +163,21 @@
document.getElementById('time-warning').classList.remove('d-none');
} else {
document.getElementById('time-success').classList.remove('d-none');
timeCheck = true;
}
// ================================
// Check if the output is a valid IP
const isValidIp = value => (/^(?:(?:^|\.)(?:2(?:5[0-5]|[0-4]\d)|1?\d?\d)){4}$/.test(value) ? true : false);
if (isValidIp(document.getElementById('dns-resolved').innerText)) {
document.getElementById('dns-success').classList.remove('d-none');
dnsCheck = true;
} else {
document.getElementById('dns-warning').classList.remove('d-none');
}
// ================================
// Version check for both bitwarden_rs and web-vault
let serverInstalled = document.getElementById('server-installed').innerText;
let serverLatest = document.getElementById('server-latest').innerText;
let serverLatestCommit = document.getElementById('server-latest-commit').innerText.replace('-', '');
@@ -99,10 +186,12 @@
}
const webInstalled = document.getElementById('web-installed').innerText;
const webLatest = document.getElementById('web-latest').innerText;
checkVersions('server', serverInstalled, serverLatest, serverLatestCommit);
{{#unless diagnostics.running_within_docker}}
const webLatest = document.getElementById('web-latest').innerText;
checkVersions('web', webInstalled, webLatest);
{{/unless}}
function checkVersions(platform, installed, latest, commit=null) {
if (installed === '-' || latest === '-') {
@@ -146,5 +235,68 @@
}
}
}
// ================================
// Check valid DOMAIN configuration
document.getElementById('domain-browser-string').innerText = location.href.toLowerCase();
if (document.getElementById('domain-server-string').innerText.toLowerCase() == location.href.toLowerCase()) {
document.getElementById('domain-success').classList.remove('d-none');
domainCheck = true;
} else {
document.getElementById('domain-warning').classList.remove('d-none');
}
// Check for HTTPS at domain-server-string
if (document.getElementById('domain-server-string').innerText.toLowerCase().startsWith('https://') ) {
document.getElementById('https-success').classList.remove('d-none');
httpsCheck = true;
} else {
document.getElementById('https-warning').classList.remove('d-none');
}
})();
// ================================
// Generate support string to be pasted on github or the forum
async function generateSupportString() {
supportString = "### Your environment (Generated via diagnostics page)\n";
supportString += "* Bitwarden_rs version: v{{ version }}\n";
supportString += "* Web-vault version: v{{ diagnostics.web_vault_version }}\n";
supportString += "* Running within Docker: {{ diagnostics.running_within_docker }}\n";
supportString += "* Internet access: {{ diagnostics.has_http_access }}\n";
supportString += "* Uses a proxy: {{ diagnostics.uses_proxy }}\n";
supportString += "* DNS Check: " + dnsCheck + "\n";
supportString += "* Time Check: " + timeCheck + "\n";
supportString += "* Domain Configuration Check: " + domainCheck + "\n";
supportString += "* HTTPS Check: " + httpsCheck + "\n";
supportString += "* Database type: {{ diagnostics.db_type }}\n";
{{#case diagnostics.db_type "MySQL" "PostgreSQL"}}
supportString += "* Database version: [PLEASE PROVIDE DATABASE VERSION]\n";
{{/case}}
supportString += "* Clients used: \n";
supportString += "* Reverse proxy and version: \n";
supportString += "* Other relevant information: \n";
jsonResponse = await fetch('{{urlpath}}/admin/diagnostics/config');
configJson = await jsonResponse.json();
supportString += "\n### Config (Generated via diagnostics page)\n```json\n" + JSON.stringify(configJson, undefined, 2) + "\n```\n";
document.getElementById('support-string').innerText = supportString;
document.getElementById('support-string').classList.remove('d-none');
document.getElementById('copy-support').classList.remove('d-none');
}
function copyToClipboard() {
const str = document.getElementById('support-string').innerText;
const el = document.createElement('textarea');
el.value = str;
el.setAttribute('readonly', '');
el.style.position = 'absolute';
el.style.left = '-9999px';
document.body.appendChild(el);
el.select();
document.execCommand('copy');
document.body.removeChild(el);
}
</script>

View File

@@ -1,4 +1,4 @@
<main class="container">
<main class="container-xl">
{{#if error}}
<div class="align-items-center p-3 mb-3 text-white-50 bg-warning rounded shadow">
<div>

View File

@@ -1,4 +1,4 @@
<main class="container">
<main class="container-xl">
<div id="organizations-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Organizations</h6>
@@ -10,6 +10,7 @@
<th>Users</th>
<th>Items</th>
<th>Attachments</th>
<th style="width: 120px; min-width: 120px;">Actions</th>
</tr>
</thead>
<tbody>
@@ -37,6 +38,9 @@
<span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
{{/if}}
</td>
<td style="font-size: 90%; text-align: right; padding-right: 15px">
<a class="d-block" href="#" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</a>
</td>
</tr>
{{/each}}
</tbody>
@@ -50,6 +54,25 @@
<script src="{{urlpath}}/bwrs_static/jquery-3.5.1.slim.js"></script>
<script src="{{urlpath}}/bwrs_static/datatables.js"></script>
<script>
function deleteOrganization(id, name, billing_email) {
// First make sure the user really wants to delete this organization
var continueDelete = confirm("WARNING: All data of this organization (" + name + ") will be lost!\nMake sure you have a backup; this cannot be undone!");
if (continueDelete) {
var input_org_uuid = prompt("To delete the organization '" + name + "' (" + billing_email + "), please type the organization UUID below.");
if (input_org_uuid != null) {
if (input_org_uuid == id) {
_post("{{urlpath}}/admin/organizations/" + id + "/delete",
"Organization deleted successfully",
"Error deleting organization");
} else {
alert("Wrong organization UUID; please try again.");
}
}
}
return false;
}
document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src);
});
@@ -59,6 +82,9 @@
"responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all
"columnDefs": [
{ "targets": 4, "searchable": false, "orderable": false }
]
});
});
</script>
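For context, both this template and the users template below call a shared _post() helper that is defined elsewhere in the admin pages and is not part of this diff. A minimal sketch of its assumed behavior (signature inferred from the call sites above; error handling and reload behavior are assumptions, not the actual implementation):

// Hypothetical sketch of the shared _post() helper: POST an optional JSON
// body to an admin endpoint and report the outcome to the user.
function _post(url, successMsg, errorMsg, body = null) {
fetch(url, {
method: 'POST',
body: body, // JSON string or null
credentials: 'same-origin',
headers: { 'Content-Type': 'application/json' }
}).then(function(resp) {
if (resp.ok) {
alert(successMsg);
window.location.reload();
} else {
alert(errorMsg);
}
}).catch(function(e) {
alert(errorMsg + ': ' + e);
});
}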

View File

@@ -1,4 +1,4 @@
<main class="container">
<main class="container-xl">
<div id="config-block" class="align-items-center p-3 mb-3 bg-secondary rounded shadow">
<div>
<h6 class="text-white mb-3">Configuration</h6>

View File

@@ -1,4 +1,4 @@
<main class="container">
<main class="container-xl">
<div id="users-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-3">Registered Users</h6>
@@ -7,10 +7,12 @@
<thead>
<tr>
<th>User</th>
<th style="width:60px; min-width: 60px;">Items</th>
<th style="width:65px; min-width: 65px;">Created at</th>
<th style="width:70px; min-width: 65px;">Last Active</th>
<th style="width:35px; min-width: 35px;">Items</th>
<th>Attachments</th>
<th style="min-width: 120px;">Organizations</th>
<th style="width: 140px; min-width: 140px;">Actions</th>
<th style="width: 120px; min-width: 120px;">Actions</th>
</tr>
</thead>
<tbody>
@@ -21,8 +23,6 @@
<div class="float-left">
<strong>{{Name}}</strong>
<span class="d-block">{{Email}}</span>
<span class="d-block">Created at: {{created_at}}</span>
<span class="d-block">Last active: {{last_active}}</span>
<span class="d-block">
{{#unless user_enabled}}
<span class="badge badge-danger mr-2" title="User is disabled">Disabled</span>
@@ -39,6 +39,12 @@
</span>
</div>
</td>
<td>
<span class="d-block">{{created_at}}</span>
</td>
<td>
<span class="d-block">{{last_active}}</span>
</td>
<td>
<span class="d-block">{{cipher_count}}</span>
</td>
@@ -49,9 +55,11 @@
{{/if}}
</td>
<td>
<div class="overflow-auto" style="max-height: 120px;">
{{#each Organizations}}
<span class="badge badge-primary" data-orgtype="{{Type}}">{{Name}}</span>
<button class="badge badge-primary" data-toggle="modal" data-target="#userOrgTypeDialog" data-orgtype="{{Type}}" data-orguuid="{{jsesc Id no_quote}}" data-orgname="{{jsesc Name no_quote}}" data-useremail="{{jsesc ../Email no_quote}}" data-useruuid="{{jsesc ../Id no_quote}}">{{Name}}</button>
{{/each}}
</div>
</td>
<td style="font-size: 90%; text-align: right; padding-right: 15px">
{{#if TwoFactorEnabled}}
@@ -92,6 +100,41 @@
</form>
</div>
</div>
<div id="userOrgTypeDialog" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered modal-sm">
<div class="modal-content">
<div class="modal-header">
<h6 class="modal-title" id="userOrgTypeDialogTitle"></h6>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">&times;</span>
</button>
</div>
<form class="form" id="userOrgTypeForm" onsubmit="updateUserOrgType(); return false;">
<input type="hidden" name="user_uuid" id="userOrgTypeUserUuid" value="">
<input type="hidden" name="org_uuid" id="userOrgTypeOrgUuid" value="">
<div class="modal-body">
<div class="radio">
<label><input type="radio" value="2" class="form-radio-input" name="user_type" id="userOrgTypeUser">&nbsp;User</label>
</div>
<div class="radio">
<label><input type="radio" value="3" class="form-radio-input" name="user_type" id="userOrgTypeManager">&nbsp;Manager</label>
</div>
<div class="radio">
<label><input type="radio" value="1" class="form-radio-input" name="user_type" id="userOrgTypeAdmin">&nbsp;Admin</label>
</div>
<div class="radio">
<label><input type="radio" value="0" class="form-radio-input" name="user_type" id="userOrgTypeOwner">&nbsp;Owner</label>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-sm btn-secondary" data-dismiss="modal">Cancel</button>
<button type="submit" class="btn btn-sm btn-primary">Change Role</button>
</div>
</form>
</div>
</div>
</div>
</main>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/datatables.css" />
@@ -173,18 +216,76 @@
e.title = orgtype.name;
});
// Special sort function to sort dates in ISO format
jQuery.extend( jQuery.fn.dataTableExt.oSort, {
"date-iso-pre": function ( a ) {
let x;
let sortDate = a.replace(/(<([^>]+)>)/gi, "").trim();
if ( sortDate !== '' ) {
let dtParts = sortDate.split(' ');
var timeParts = (undefined != dtParts[1]) ? dtParts[1].split(':') : ['00', '00', '00'];
var dateParts = dtParts[0].split('-');
// Pad missing time components with '00' so every row yields a 14-digit
// number; mixing unpadded numeric zeros in would make date-only rows
// compare incorrectly against rows that include a time.
x = (dateParts[0] + dateParts[1] + dateParts[2] + timeParts[0] + timeParts[1] + ((undefined != timeParts[2]) ? timeParts[2] : '00')) * 1;
if ( isNaN(x) ) {
x = 0;
}
} else {
x = Infinity;
}
return x;
},
"date-iso-asc": function ( a, b ) {
return a - b;
},
"date-iso-desc": function ( a, b ) {
return b - a;
}
});
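// Worked example (illustrative): for a cell containing "2021-02-03 22:19:18",
// date-iso-pre strips any HTML tags, splits on the space into date and time
// parts, and concatenates them into "20210203221918", which the trailing *1
// coerces to the sortable number 20210203221918. Empty cells map to Infinity
// and therefore sort last in ascending order.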
document.addEventListener("DOMContentLoaded", function(event) {
$('#users-table').DataTable({
"responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all
"columns": [
null, // Userdata
null, // Items
null, // Attachments
null, // Organizations
{ "searchable": false, "orderable": false }, // Actions
],
"columnDefs": [
{ "targets": [1,2], "type": "date-iso" },
{ "targets": 6, "searchable": false, "orderable": false }
]
});
});
var userOrgTypeDialog = document.getElementById('userOrgTypeDialog');
// Fill the form and title
userOrgTypeDialog.addEventListener('show.bs.modal', function(event){
let userOrgType = event.relatedTarget.getAttribute("data-orgtype");
let userOrgTypeName = OrgTypes[userOrgType]["name"];
let orgName = event.relatedTarget.getAttribute("data-orgname");
let userEmail = event.relatedTarget.getAttribute("data-useremail");
let orgUuid = event.relatedTarget.getAttribute("data-orguuid");
let userUuid = event.relatedTarget.getAttribute("data-useruuid");
document.getElementById("userOrgTypeDialogTitle").innerHTML = "<b>Update User Type:</b><br><b>Organization:</b> " + orgName + "<br><b>User:</b> " + userEmail;
document.getElementById("userOrgTypeUserUuid").value = userUuid;
document.getElementById("userOrgTypeOrgUuid").value = orgUuid;
document.getElementById("userOrgType"+userOrgTypeName).checked = true;
}, false);
// Clear the form when the modal is hidden to prevent accidentally re-submitting stale values.
userOrgTypeDialog.addEventListener('hide.bs.modal', function(event){
document.getElementById("userOrgTypeDialogTitle").innerHTML = '';
document.getElementById("userOrgTypeUserUuid").value = '';
document.getElementById("userOrgTypeOrgUuid").value = '';
}, false);
function updateUserOrgType() {
let orgForm = document.getElementById("userOrgTypeForm");
const data = JSON.stringify(Object.fromEntries(new FormData(orgForm).entries()));
_post("{{urlpath}}/admin/users/org_type",
"Updated organization type of the user successfully",
"Error updating organization type of the user", data);
return false;
}
</script>
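For reference, updateUserOrgType() serializes the three form fields of the modal above into a JSON body like the following before posting it to {{urlpath}}/admin/users/org_type (UUID values are illustrative; note that FormData values are always strings, so user_type arrives as a string, not a number):

{
"user_uuid": "00000000-0000-0000-0000-000000000001",
"org_uuid": "00000000-0000-0000-0000-000000000002",
"user_type": "2"
}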

View File

@@ -10,16 +10,17 @@ import urllib.request
from collections import OrderedDict
if len(sys.argv) != 2:
print("usage: %s <OUTPUT-FILE>" % sys.argv[0])
if not (2 <= len(sys.argv) <= 3):
print("usage: %s <OUTPUT-FILE> [GIT-REF]" % sys.argv[0])
print()
print("This script generates a global equivalent domains JSON file from")
print("the upstream Bitwarden source repo.")
sys.exit(1)
OUTPUT_FILE = sys.argv[1]
GIT_REF = 'master' if len(sys.argv) == 2 else sys.argv[2]
BASE_URL = 'https://github.com/bitwarden/server/raw/master'
BASE_URL = 'https://github.com/bitwarden/server/raw/%s' % GIT_REF
ENUMS_URL = '%s/src/Core/Enums/GlobalEquivalentDomainsType.cs' % BASE_URL
DOMAIN_LISTS_URL = '%s/src/Core/Utilities/StaticStore.cs' % BASE_URL
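
Usage sketch for the updated script (output filename is illustrative; per the argument parsing above, GIT-REF defaults to master when omitted):

python global_domains.py global_domains.json            # sync from master
python global_domains.py global_domains.json <GIT-REF>  # sync from a specific branch, tag, or commit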