@@ -29,6 +29,7 @@ Stump is a free and open source comics, manga and digital book server with OPDS
+
Table of Contents
@@ -39,14 +40,13 @@ Stump is a free and open source comics, manga and digital book server with OPDS
- [Where to start?](#where-to-start)
- [Project Structure 📦](#project-structure-)
- [/apps](#apps)
- - [/common](#common)
+ - [/packages](#packages)
- [/core](#core)
- [Similar Projects 👯](#similar-projects-)
- [Acknowledgements 🙏](#acknowledgements-)
-
-
+
-> **🚧 Disclaimer 🚧**: Stump is _very much_ an ongoing **WIP**, under active development. Anyone is welcome to try it out, but please keep in mind that installation and general usage at this point should be for **testing purposes only**. Do **not** expect a fully featured, bug-free experience if you spin up a development environment or use a testing Docker image. Before the first release, I will likely flatten the migrations anyways, which would break anyone's Stump installations. If you'd like to contribute and help expedite Stump's first release, please see the [contributing guide](https://www.stumpapp.dev/contributing) for more information on how you can help. Otherwise, stay tuned for the first release!
+> **🚧 Disclaimer 🚧**: Stump is _very much_ an ongoing **WIP**, under active development. Anyone is welcome to try it out, but please keep in mind that installation and general usage at this point should be for **testing purposes only**. Do **not** expect a fully featured, bug-free experience if you spin up a development environment or use a testing Docker image. Before the first release, I will likely flatten the migrations anyway, which would break any existing Stump installations. If you'd like to contribute and help expedite Stump's first release, please review the [developer guide](#developer-guide-). Otherwise, stay tuned for the first release!
## Roadmap 🗺
@@ -65,9 +65,12 @@ The following items are the major targets for Stump's first release:
Things you can expect to see after the first release:
-- 🖥️ Desktop app ([Tauri](https://tauri.app/))
+- 🖥️ Desktop app ([Tauri](https://github.com/aaronleopold/stump/tree/main/apps/desktop))
- 📱 Mobile app ([Tachiyomi](https://github.com/aaronleopold/tachiyomi-extensions) and/or [custom application](https://github.com/aaronleopold/stump/tree/main/apps/mobile))
-- 📺 A utility [TUI](https://github.com/aaronleopold/stump/tree/main/apps/tui) for managing a Stump instance from the command line
+
+Things you might see in the future:
+
+- 📺 A utility [TUI](https://github.com/aaronleopold/stump/tree/main/apps/tui) for managing Stump instance(s) from the command line
I am very open to suggestions and ideas, so feel free to reach out if you have anything you'd like to see!
@@ -77,13 +80,13 @@ I am very open to suggestions and ideas, so feel free to reach out if you have a
Stump isn't ready for normal, non-development usage yet. Once a release has been made, this will be updated. For now, follow the [Developer Guide](#developer-guide-) section to build from source and run locally.
-There is a [docker image](https://hub.docker.com/repository/docker/aaronleopold/stump-preview) available for those interested. However, **this is only meant for testing purposes and will not be updated frequently**, so do not expect a fully featured, bug-free experience if you spin up a container.
+There is a [docker image](https://hub.docker.com/repository/docker/aaronleopold/stump) available for those interested. However, **this is only meant for testing purposes and will not be updated frequently**, so do not expect a fully featured, bug-free experience if you spin up a container. Also keep in mind that migrations won't be stacked until a release, so each update until then might require wiping the database file.
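
For anyone who does want to poke at that image, a minimal invocation might look like the sketch below. The image name comes from the link above and the port (10801) is the server default elsewhere in this changeset; the volume paths and container mount points are assumptions for illustration, not documented defaults.

```bash
# Hedged sketch only; the /config and /books mount points are assumed, not documented.
docker run -d \
  --name stump \
  -p 10801:10801 \
  -v "$HOME/stump/config:/config" \
  -v "$HOME/books:/books" \
  aaronleopold/stump
```
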
For more information about getting started, how Stump works and how it manages your library, and much more, please visit [stumpapp.dev](https://stumpapp.dev/guides).
## Developer Guide 💻
-Contributions are very **encouraged** and **welcome**! Please review the [contributing guide](https://www.stumpapp.dev/contributing) for more thorough information on how to get started.
+Contributions are very **encouraged** and **welcome**! Please review the [CONTRIBUTING.md](https://github.com/aaronleopold/stump/tree/develop/.github/CONTRIBUTING.md) before getting started.
A quick summary of the steps required to get going:
@@ -99,9 +102,13 @@ pnpm run setup
4. Start one of the apps:
+I use [moonrepo](https://moonrepo.dev/) for Stump's repository management:
+
```bash
-pnpm dev:web # Web app
-pnpm dev:desktop # Desktop app
+# webapp + server
+moon run :dev
+# desktop app + server
+moon run server:start desktop:desktop-dev
```
And that's it!
@@ -116,15 +123,12 @@ Some other good places to start:
- Translation, so Stump is accessible to non-English speakers.
- An automated translation system would be immensely helpful! If you're knowledgeable in this area, please reach out!
-- Writing comprehensive integration tests.
- - [look here](https://github.com/aaronleopold/stump/tree/develop/core/integration-tests)
+- Writing comprehensive [integration tests](https://github.com/aaronleopold/stump/tree/develop/core/integration-tests).
- Designing and/or implementing UI elements.
- Docker build optimizations (it is currently _horrendously_ slow).
-- CI pipelines / workflows (given above issue with Docker is resolved).
+- CI pipelines / workflows.
- And lots more!
-I keep track of all non-code contributions in the [CONTRIBUTORS.md](https://github.com/aaronleopold/stump/tree/develop/.github/CONTRIBUTORS.md) file. If you contribute in that manner, please add yourself to the list!
-
[![Run in Postman](https://run.pstmn.io/button.svg)](https://app.getpostman.com/run-collection/6434946-9cf51d71-d680-46f5-89da-7b6cf7213a20?action=collection%2Ffork&collection-url=entityId%3D6434946-9cf51d71-d680-46f5-89da-7b6cf7213a20%26entityType%3Dcollection%26workspaceId%3D722014ea-55eb-4a49-b29d-814300c1016d)
## Project Structure 📦
@@ -137,15 +141,16 @@ Stump has a monorepo structure that follows a similar pattern to that of [Spaced
- `server`: An [Axum](https://github.com/tokio-rs/axum) server.
- `web`: The React application that is served by the Axum server.
-### /common
+### /packages
- `client`: Everything needed to create a react-based client for Stump. Contains Zustand and React Query configuration, used by the `interface` package, as well as the generated TypeScript types.
- `config`: Configuration files for the project, e.g. `tsconfig.json`, etc.
- `interface`: Stump's main React-based interface, shared between the web and desktop applications.
+- `prisma-cli`: A small Rust app to run the Prisma CLI (for generating the Prisma client).
### /core
-- `core`: Stump's 'core' functionality is located here, written in Rust. The `server` was previously part of the core, but was extracted to support integration testing.
+- `core`: Stump's 'core' functionality is located here, written in Rust. The `server` was previously part of the core, but was extracted for better isolation.
## Similar Projects 👯
diff --git a/apps/desktop/dist/.placeholder b/apps/desktop/dist/.placeholder
new file mode 100644
index 000000000..e69de29bb
diff --git a/apps/desktop/moon.yml b/apps/desktop/moon.yml
new file mode 100644
index 000000000..27ae2bb82
--- /dev/null
+++ b/apps/desktop/moon.yml
@@ -0,0 +1,43 @@
+type: 'application'
+
+workspace:
+ inheritedTasks:
+ exclude: ['buildPackage']
+
+fileGroups:
+ app:
+ - 'src/**/*'
+ - 'src-tauri/**/*'
+
+language: 'rust'
+
+tasks:
+ # Note: intentionally not named 'dev' so that `moon run :dev` only picks up web + server
+ desktop-dev:
+ command: 'pnpm tauri dev'
+ local: true
+
+ lint:
+ command: 'cargo clippy --package stump_desktop -- -D warnings'
+ options:
+ mergeArgs: 'replace'
+ mergeDeps: 'replace'
+ mergeInputs: 'replace'
+
+ format:
+ command: 'cargo fmt --package stump_desktop'
+ options:
+ mergeArgs: 'replace'
+ mergeDeps: 'replace'
+ mergeInputs: 'replace'
+
+ # # TODO: need to have more targets.
+ # build:
+ # # tauri build --target universal-apple-darwin
+ # command: 'pnpm tauri build'
+ # local: true
+ # deps:
+ # - '~:build-webapp'
+
+ # build-webapp:
+ # command: 'pnpm vite build'
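
For reference, the tasks above would be driven through moon with the `project:task` form the README already uses; the `desktop` project id matches the `moon run server:start desktop:desktop-dev` example there, so only the task names below are taken from this file.

```bash
# Run the desktop app in dev mode alongside the server (as in the README)
moon run server:start desktop:desktop-dev

# Lint and format just the desktop crate, using the tasks defined above
moon run desktop:lint
moon run desktop:format
```
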
diff --git a/apps/desktop/package.json b/apps/desktop/package.json
index 5424a2b54..4070fd707 100644
--- a/apps/desktop/package.json
+++ b/apps/desktop/package.json
@@ -7,31 +7,31 @@
"tauri": "tauri",
"vite": "vite --",
"dev": "tauri dev",
- "build": "tauri build",
- "build:mac": "tauri build --target universal-apple-darwin",
+ "build": "pnpm build:web && tauri build",
+ "build:mac": "pnpm build:web && tauri build --target universal-apple-darwin",
"build:web": "vite build"
},
"dependencies": {
"@stump/client": "workspace:*",
"@stump/interface": "workspace:*",
- "@tanstack/react-query": "^4.10.3",
- "@tauri-apps/api": "^1.1.0",
+ "@tanstack/react-query": "^4.20.4",
+ "@tauri-apps/api": "^1.2.0",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
- "@tailwindcss/typography": "^0.5.7",
- "@tauri-apps/cli": "^1.1.1",
- "@types/react": "^18.0.21",
- "@types/react-dom": "^18.0.6",
+ "@tailwindcss/typography": "^0.5.9",
+ "@tauri-apps/cli": "^1.2.3",
+ "@types/react": "^18.0.28",
+ "@types/react-dom": "^18.0.11",
"@vitejs/plugin-react": "^2.0.0",
"autoprefixer": "^10.4.12",
- "postcss": "^8.4.17",
+ "postcss": "^8.4.21",
"tailwind": "^4.0.0",
"tailwind-scrollbar-hide": "^1.1.7",
- "tailwindcss": "^3.1.8",
- "typescript": "^4.8.4",
- "vite": "^3.1.6",
+ "tailwindcss": "^3.2.7",
+ "typescript": "^4.9.5",
+ "vite": "^3.2.5",
"vite-plugin-tsconfig-paths": "^1.1.0"
}
}
\ No newline at end of file
diff --git a/apps/desktop/postcss.config.js b/apps/desktop/postcss.config.js
index e873f1a4f..65994d328 100644
--- a/apps/desktop/postcss.config.js
+++ b/apps/desktop/postcss.config.js
@@ -1,6 +1,6 @@
module.exports = {
plugins: {
- tailwindcss: {},
autoprefixer: {},
+ tailwindcss: {},
},
-};
+}
diff --git a/apps/desktop/src-tauri/Cargo.toml b/apps/desktop/src-tauri/Cargo.toml
index 85a04c966..12c18fafb 100644
--- a/apps/desktop/src-tauri/Cargo.toml
+++ b/apps/desktop/src-tauri/Cargo.toml
@@ -7,12 +7,12 @@ license = "MIT"
edition = "2021"
[build-dependencies]
-tauri-build = { version = "1.0.4", features = [] }
+tauri-build = { version = "1.1.1", features = [] }
[dependencies]
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
-tauri = { version = "1.0.5", features = ["api-all", "devtools"] }
+tauri = { version = "1.1.1", features = ["api-all", "devtools"] }
### MISC ###
discord-rich-presence = "0.2.3"
diff --git a/apps/desktop/src-tauri/tauri.conf.json b/apps/desktop/src-tauri/tauri.conf.json
index 75aca7823..406ebc18d 100644
--- a/apps/desktop/src-tauri/tauri.conf.json
+++ b/apps/desktop/src-tauri/tauri.conf.json
@@ -1,69 +1,68 @@
{
- "$schema": "../node_modules/@tauri-apps/cli/schema.json",
- "build": {
- "beforeBuildCommand": "pnpm build:web",
- "beforeDevCommand": "pnpm vite --clearScreen=false",
- "devPath": "http://localhost:3000",
- "distDir": "../dist"
- },
- "package": {
- "productName": "Stump",
- "version": "0.0.0"
- },
- "tauri": {
- "allowlist": {
- "all": true
- },
- "bundle": {
- "active": true,
- "category": "DeveloperTool",
- "copyright": "",
- "deb": {
- "depends": []
- },
- "externalBin": [],
- "icon": [
- "icons/32x32.png",
- "icons/128x128.png",
- "icons/128x128@2x.png",
- "icons/icon.icns",
- "icons/icon.ico"
- ],
- "identifier": "com.oromei.stump",
- "longDescription": "",
- "macOS": {
- "entitlements": null,
- "exceptionDomain": "",
- "frameworks": [],
- "providerShortName": null,
- "signingIdentity": null
- },
- "resources": [],
- "shortDescription": "",
- "targets": "all",
- "windows": {
- "certificateThumbprint": null,
- "digestAlgorithm": "sha256",
- "timestampUrl": ""
- }
- },
- "security": {
- "csp": null
- },
- "updater": {
- "active": false
- },
- "windows": [
- {
- "fullscreen": false,
- "height": 700,
- "resizable": true,
- "title": "Stump",
- "width": 1200,
- "decorations": true,
- "transparent": false,
- "center": true
- }
- ]
- }
+ "$schema": "../node_modules/@tauri-apps/cli/schema.json",
+ "build": {
+ "beforeDevCommand": "pnpm vite --clearScreen=false",
+ "devPath": "http://localhost:3000",
+ "distDir": "../../web/dist"
+ },
+ "package": {
+ "productName": "Stump",
+ "version": "0.0.0"
+ },
+ "tauri": {
+ "allowlist": {
+ "all": true
+ },
+ "bundle": {
+ "active": true,
+ "category": "DeveloperTool",
+ "copyright": "",
+ "deb": {
+ "depends": []
+ },
+ "externalBin": [],
+ "icon": [
+ "icons/32x32.png",
+ "icons/128x128.png",
+ "icons/128x128@2x.png",
+ "icons/icon.icns",
+ "icons/icon.ico"
+ ],
+ "identifier": "com.oromei.stump",
+ "longDescription": "",
+ "macOS": {
+ "entitlements": null,
+ "exceptionDomain": "",
+ "frameworks": [],
+ "providerShortName": null,
+ "signingIdentity": null
+ },
+ "resources": [],
+ "shortDescription": "",
+ "targets": "all",
+ "windows": {
+ "certificateThumbprint": null,
+ "digestAlgorithm": "sha256",
+ "timestampUrl": ""
+ }
+ },
+ "security": {
+ "csp": null
+ },
+ "updater": {
+ "active": false
+ },
+ "windows": [
+ {
+ "fullscreen": false,
+ "height": 700,
+ "resizable": true,
+ "title": "Stump",
+ "width": 1200,
+ "decorations": true,
+ "transparent": false,
+ "center": true
+ }
+ ]
+ }
}
\ No newline at end of file
diff --git a/apps/desktop/src/App.tsx b/apps/desktop/src/App.tsx
index ba538ea56..0f7ba85fe 100644
--- a/apps/desktop/src/App.tsx
+++ b/apps/desktop/src/App.tsx
@@ -1,58 +1,52 @@
-import { useEffect, useState } from 'react';
-
-import { Platform, StumpQueryProvider } from '@stump/client';
-import { os, invoke } from '@tauri-apps/api';
-
-import StumpInterface from '@stump/interface';
-
-import '@stump/interface/styles';
+import { Platform } from '@stump/client'
+import StumpInterface from '@stump/interface'
+import { invoke, os } from '@tauri-apps/api'
+import { useEffect, useState } from 'react'
export default function App() {
function getPlatform(platform: string): Platform {
switch (platform) {
case 'darwin':
- return 'macOS';
+ return 'macOS'
case 'win32':
- return 'windows';
+ return 'windows'
case 'linux':
- return 'linux';
+ return 'linux'
default:
- return 'browser';
+ return 'browser'
}
}
const setDiscordPresence = (status?: string, details?: string) =>
- invoke('set_discord_presence', { status, details });
+ invoke('set_discord_presence', { details, status })
const setUseDiscordPresence = (connect: boolean) =>
- invoke('set_use_discord_connection', { connect });
+ invoke('set_use_discord_connection', { connect })
- const [platform, setPlatform] = useState('unknown');
- const [mounted, setMounted] = useState(false);
+ const [platform, setPlatform] = useState('unknown')
+ const [mounted, setMounted] = useState(false)
useEffect(() => {
os.platform().then((platform) => {
- setPlatform(getPlatform(platform));
+ setPlatform(getPlatform(platform))
// TODO: remove this, should be handled in the interface :D
- setUseDiscordPresence(true);
- setDiscordPresence();
+ setUseDiscordPresence(true)
+ setDiscordPresence()
// ^^
- setMounted(true);
- });
- }, []);
+ setMounted(true)
+ })
+ }, [])
// I want to wait until platform is properly set before rendering the interface
if (!mounted) {
- return null;
+ return null
}
return (
-
-
-
- );
+
+ )
}
diff --git a/apps/desktop/tailwind.config.js b/apps/desktop/tailwind.config.js
index b902b45ed..6ccd50bf8 100644
--- a/apps/desktop/tailwind.config.js
+++ b/apps/desktop/tailwind.config.js
@@ -1 +1 @@
-module.exports = require('../../common/config/tailwind.js')('desktop');
+module.exports = require('../../packages/components/tailwind.js')('desktop')
diff --git a/apps/desktop/tsconfig.json b/apps/desktop/tsconfig.json
index a7354a4e4..e528e9f08 100644
--- a/apps/desktop/tsconfig.json
+++ b/apps/desktop/tsconfig.json
@@ -1,7 +1,34 @@
{
- "extends": "../../common/config/base.tsconfig.json",
- "compilerOptions": {
- "types": ["vite/client"]
- },
- "include": ["src"]
+ "extends": "../../tsconfig.json",
+ "compilerOptions": {
+ "types": [
+ "vite/client"
+ ],
+ "outDir": "../../.moon/cache/types/apps/desktop",
+ "paths": {
+ "@stump/client": [
+ "../../packages/client/src/index.ts"
+ ],
+ "@stump/client/*": [
+ "../../packages/client/src/*"
+ ],
+ "@stump/interface": [
+ "../../packages/interface/src/index.ts"
+ ],
+ "@stump/interface/*": [
+ "../../packages/interface/src/*"
+ ]
+ }
+ },
+ "include": [
+ "src"
+ ],
+ "references": [
+ {
+ "path": "../../packages/client"
+ },
+ {
+ "path": "../../packages/interface"
+ }
+ ]
}
diff --git a/apps/desktop/vite.config.ts b/apps/desktop/vite.config.ts
index 9851602b6..724934fa6 100644
--- a/apps/desktop/vite.config.ts
+++ b/apps/desktop/vite.config.ts
@@ -1,26 +1,25 @@
+import react from '@vitejs/plugin-react';
import { defineConfig } from 'vite';
import tsconfigPaths from 'vite-plugin-tsconfig-paths';
-import react from '@vitejs/plugin-react';
-
import { name, version } from './package.json';
// TODO: move this to common/config?
// https://vitejs.dev/config/
export default defineConfig({
- server: {
- port: 3000,
- },
- plugins: [react(), tsconfigPaths()],
- root: 'src',
- publicDir: '../../../common/interface/public',
base: '/',
- define: {
- pkgJson: { name, version },
- },
build: {
- outDir: '../dist',
assetsDir: './assets',
manifest: true,
+ outDir: '../dist',
+ },
+ define: {
+ pkgJson: { name, version },
+ },
+ plugins: [react(), tsconfigPaths()],
+ publicDir: '../../../packages/interface/public',
+ root: 'src',
+ server: {
+ port: 3000,
},
});
diff --git a/apps/mobile/package.json b/apps/mobile/package.json
index f190c32c4..f1b26fee9 100644
--- a/apps/mobile/package.json
+++ b/apps/mobile/package.json
@@ -1,8 +1,8 @@
{
- "name": "@stump/mobile",
- "version": "0.0.0",
- "description": "",
- "license": "MIT",
- "scripts": {},
- "keywords": []
-}
\ No newline at end of file
+ "name": "@stump/mobile",
+ "version": "0.0.0",
+ "description": "",
+ "license": "MIT",
+ "scripts": {},
+ "keywords": []
+}
diff --git a/apps/mobile/tsconfig.json b/apps/mobile/tsconfig.json
new file mode 100644
index 000000000..9811ff19f
--- /dev/null
+++ b/apps/mobile/tsconfig.json
@@ -0,0 +1,10 @@
+{
+ "extends": "../../tsconfig.options.json",
+ "include": [
+ "**/*"
+ ],
+ "references": [],
+ "compilerOptions": {
+ "outDir": "../../.moon/cache/types/apps/mobile"
+ }
+}
diff --git a/apps/server/Cargo.toml b/apps/server/Cargo.toml
index 1a9f4d537..b5174a2e6 100644
--- a/apps/server/Cargo.toml
+++ b/apps/server/Cargo.toml
@@ -1,42 +1,50 @@
[package]
name = "stump_server"
-version.workspace = true
+version = { workspace = true }
edition = "2021"
default-run = "stump_server"
[dependencies]
stump_core = { path = "../../core" }
-prisma-client-rust.workspace = true
-axum = { version = "0.5.16", features = ["ws"] }
-axum-macros = "0.2.3"
-axum-extra = { version = "0.3.7", features = [
+prisma-client-rust = { workspace = true }
+axum = { version = "0.6.1", features = ["ws"] }
+axum-macros = "0.3.0"
+axum-extra = { version = "0.4.2", features = [
"spa",
# "cookie"
+ "query"
] }
-tower-http = { version = "0.3.4", features = [
+tower-http = { version = "0.3.5", features = [
"fs",
"cors",
"set-header"
] }
hyper = "0.14.20"
serde_json = "1.0.85"
+serde_with = "2.1.0"
# used for the ws stuff
futures-util = "0.3.24"
# axum-typed-websockets = "0.4.0"
-tokio.workspace = true
+tokio = { workspace = true }
tokio-util = "0.7.4"
-serde.workspace = true
-axum-sessions = "0.3.1"
+serde = { workspace = true }
+axum-sessions = "0.4.1"
async-trait = "0.1.53"
-async-stream = "0.3.3"
+async-stream = { workspace = true }
+# TODO: figure out this super fucking annoying cargo dependency resolution issue. This is the second time
+# cargo, in docker, has ignored the workspace version of this dep and instead used the latest version from crates.io
+# local-ip-address = "0.5.1"
+local-ip-address = { git = "https://github.com/EstebanBorai/local-ip-address.git", tag = "v0.5.1" }
### Dev Utils ###
rand = "0.8.5"
+utoipa = { version = "3.0.3", features = ["axum_extras"] }
+utoipa-swagger-ui = { version = "3.0.2", features = ["axum"] }
### Error Handling + Logging ###
-tracing.workspace = true
-thiserror.workspace = true
+tracing = { workspace = true }
+thiserror = { workspace = true }
### Auth ###
bcrypt = "0.10.1"
@@ -53,4 +61,4 @@ openssl = { version = "0.10.40", features = ["vendored"] }
openssl = { version = "0.10.40", features = ["vendored"] }
[build-dependencies]
-chrono = "0.4.19"
\ No newline at end of file
+chrono = "0.4.19"
diff --git a/apps/server/build.rs b/apps/server/build.rs
index cd1b98f2f..3fa46e9f4 100644
--- a/apps/server/build.rs
+++ b/apps/server/build.rs
@@ -1,9 +1,9 @@
use chrono::prelude::{DateTime, Utc};
-use std::process::Command;
+use std::{env, process::Command};
fn get_git_rev() -> Option<String> {
let output = Command::new("git")
- .args(&["rev-parse", "--short", "HEAD"])
+ .args(["rev-parse", "--short", "HEAD"])
.output()
.ok()?;
@@ -23,7 +23,12 @@ fn get_compile_date() -> String {
fn main() {
println!("cargo:rustc-env=STATIC_BUILD_DATE={}", get_compile_date());
- if let Some(rev) = get_git_rev() {
+ let maybe_rev = match env::var("GIT_REV") {
+ Ok(rev) => Some(rev),
+ _ => get_git_rev(),
+ };
+
+ if let Some(rev) = maybe_rev {
println!("cargo:rustc-env=GIT_REV={}", rev);
}
}
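
As a small illustration of the fallback above: when a `.git` directory isn't available (for example in a container build context), the short revision can be supplied through the environment instead. The value here is a placeholder.

```bash
# Without GIT_REV set, build.rs shells out to `git rev-parse --short HEAD`;
# with it set, the environment value is used instead.
GIT_REV=abc1234 cargo build --package stump_server
```
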
diff --git a/apps/server/moon.yml b/apps/server/moon.yml
new file mode 100644
index 000000000..2a7d799a7
--- /dev/null
+++ b/apps/server/moon.yml
@@ -0,0 +1,56 @@
+type: 'application'
+
+workspace:
+ inheritedTasks:
+ exclude: ['buildPackage']
+
+fileGroups:
+ app:
+ - 'src/**/*'
+
+language: 'rust'
+
+tasks:
+ dev:
+ command: 'cargo watch --ignore packages -x "run --manifest-path=apps/server/Cargo.toml --package stump_server"'
+ local: true
+ options:
+ runFromWorkspaceRoot: true
+
+ start:
+ command: 'cargo run --release --package stump_server'
+ local: true
+
+ build:
+ command: 'cargo build --release --package stump_server'
+ local: true
+ deps:
+ - 'web:build'
+ - '~:get-webapp'
+
+ lint:
+ command: 'cargo clippy --package stump_server -- -D warnings'
+ options:
+ mergeArgs: 'replace'
+ mergeDeps: 'replace'
+ mergeInputs: 'replace'
+
+ format:
+ command: 'cargo fmt --package stump_server'
+ options:
+ mergeArgs: 'replace'
+ mergeDeps: 'replace'
+ mergeInputs: 'replace'
+
+ clean:
+ command: 'cargo clean'
+
+ delete-webapp:
+ command: 'rm -rf ./dist'
+ platform: 'system'
+
+ get-webapp:
+ command: 'cp -r ../web/dist ./dist'
+ platform: 'system'
+ deps:
+ - '~:delete-webapp'
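
For illustration, these server tasks are invoked the same way; note that `build` declares deps on `web:build` and the local `get-webapp` task, so the web bundle is produced and copied into `./dist` before the release build runs.

```bash
# Watch-mode dev server (cargo watch, run from the workspace root per the task options)
moon run server:dev

# Release build; moon runs web:build and get-webapp first because of the deps above
moon run server:build
```
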
diff --git a/apps/server/package.json b/apps/server/package.json
index 02006a075..a122cdf55 100644
--- a/apps/server/package.json
+++ b/apps/server/package.json
@@ -3,14 +3,8 @@
"private": true,
"version": "0.0.0",
"scripts": {
- "check": "cargo check",
- "start": "cargo run --release",
- "dev": "cargo watch -x run",
"build": "pnpm get-client && cargo build --release && pnpm move-client",
"get-client": "trash \"dist/*\" \"!dist/.placeholder\" && cpy \"../web/dist/**/*\" ./dist/",
- "move-client": "trash ../../target/release/dist && cp -r ./dist ../../target/release/dist",
- "fmt": "cargo fmt --all --manifest-path=./Cargo.toml --",
- "benchmarks": "cargo test --benches",
- "test": "cargo test"
+ "move-client": "trash ../../target/release/dist && cp -r ./dist ../../target/release/dist"
}
}
\ No newline at end of file
diff --git a/apps/server/src/config/cors.rs b/apps/server/src/config/cors.rs
index 8089ae143..4bc1cc149 100644
--- a/apps/server/src/config/cors.rs
+++ b/apps/server/src/config/cors.rs
@@ -4,15 +4,34 @@ use axum::http::{
header::{ACCEPT, AUTHORIZATION, CONTENT_TYPE},
HeaderValue, Method,
};
+use local_ip_address::local_ip;
use tower_http::cors::{AllowOrigin, CorsLayer};
-use tracing::error;
+use tracing::{error, trace};
-const DEBUG_ALLOWED_ORIGINS: &[&str] = &["http://localhost:3000", "http://0.0.0.0:3000"];
+use crate::config::utils::is_debug;
const DEFAULT_ALLOWED_ORIGINS: &[&str] =
&["tauri://localhost", "https://tauri.localhost"];
+const DEBUG_ALLOWED_ORIGINS: &[&str] = &[
+ "tauri://localhost",
+ "https://tauri.localhost",
+ "http://localhost:3000",
+ "http://0.0.0.0:3000",
+];
+
+fn merge_origins(origins: &[&str], local_origins: Vec<String>) -> Vec<HeaderValue> {
+ origins
+ .iter()
+ .map(|origin| origin.to_string())
+ .chain(local_origins.into_iter())
+ .map(|origin| origin.parse())
+ .filter_map(|res| res.ok())
+ .collect::<Vec<HeaderValue>>()
+}
+
+pub fn get_cors_layer(port: u16) -> CorsLayer {
+ let is_debug = is_debug();
-pub fn get_cors_layer() -> CorsLayer {
let allowed_origins = match env::var("STUMP_ALLOWED_ORIGINS") {
Ok(val) => {
if val.is_empty() {
@@ -37,31 +56,49 @@ pub fn get_cors_layer() -> CorsLayer {
Err(_) => None,
};
+ let local_ip = local_ip()
+ .map_err(|e| {
+ error!("Failed to get local ip: {:?}", e);
+ e
+ })
+ .map(|ip| ip.to_string())
+ .unwrap_or_default();
+
+ // Format the local IP with both http and https, and the port. If is_debug is true,
+ // then also add port 3000.
+ let local_origins = if !local_ip.is_empty() {
+ let mut base = vec![
+ format!("http://{local_ip}:{port}"),
+ format!("https://{local_ip}:{port}"),
+ ];
+
+ if is_debug {
+ base.append(&mut vec![
+ format!("http://{local_ip}:3000",),
+ format!("https://{local_ip}:3000"),
+ ]);
+ }
+
+ base
+ } else {
+ vec![]
+ };
+
let mut cors_layer = CorsLayer::new();
if let Some(origins_list) = allowed_origins {
+ // TODO: consider adding some config to allow for this list to be appended to defaults, rather than
+ // completely overriding them.
cors_layer = cors_layer.allow_origin(AllowOrigin::list(origins_list));
- } else if env::var("STUMP_PROFILE").unwrap_or_else(|_| "release".into()) == "debug" {
- cors_layer = cors_layer.allow_origin(
- DEBUG_ALLOWED_ORIGINS
- .iter()
- .map(|origin| origin.parse())
- .filter_map(|res| res.ok())
- .collect::<Vec<HeaderValue>>(),
- );
+ } else if is_debug {
+ let debug_origins = merge_origins(DEBUG_ALLOWED_ORIGINS, local_origins);
+ cors_layer = cors_layer.allow_origin(debug_origins);
} else {
- cors_layer = cors_layer.allow_origin(
- DEFAULT_ALLOWED_ORIGINS
- .iter()
- .map(|origin| origin.parse())
- .filter_map(|res| res.ok())
- .collect::<Vec<HeaderValue>>(),
- );
+ let release_origins = merge_origins(DEFAULT_ALLOWED_ORIGINS, local_origins);
+ cors_layer = cors_layer.allow_origin(release_origins);
}
- // TODO: finalize what cors should be... fucking hate cors lmao
- cors_layer
- // .allow_methods(Any)
+ cors_layer = cors_layer
.allow_methods([
Method::GET,
Method::PUT,
@@ -71,5 +108,10 @@ pub fn get_cors_layer() -> CorsLayer {
Method::CONNECT,
])
.allow_headers([ACCEPT, AUTHORIZATION, CONTENT_TYPE])
- .allow_credentials(true)
+ .allow_credentials(true);
+
+ #[cfg(debug_assertions)]
+ trace!(?cors_layer, "Cors configuration complete");
+
+ cors_layer
}
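
As a sketch of how the override above would be exercised: setting `STUMP_ALLOWED_ORIGINS` replaces the default/debug origin lists entirely (per the TODO in the code). The comma-separated format shown here is an assumption, since the parsing of that variable sits outside this hunk.

```bash
# Assumed format: a comma-separated origin list; verify against the env parsing, which this diff does not show.
STUMP_ALLOWED_ORIGINS="https://stump.example.com,http://192.168.1.50:10801" \
  cargo run --release --package stump_server
```
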
diff --git a/apps/server/src/config/session.rs b/apps/server/src/config/session.rs
index 4a51b827d..ed8cb0cff 100644
--- a/apps/server/src/config/session.rs
+++ b/apps/server/src/config/session.rs
@@ -25,13 +25,20 @@ pub fn get_session_layer() -> SessionLayer {
.with_session_ttl(Some(Duration::from_secs(3600 * 24 * 3)))
.with_cookie_path("/");
- if env::var("STUMP_PROFILE").unwrap_or_else(|_| "release".into()) == "release" {
- sesssion_layer
- .with_same_site_policy(SameSite::None)
- .with_secure(true)
- } else {
- sesssion_layer
- .with_same_site_policy(SameSite::Lax)
- .with_secure(false)
- }
+ sesssion_layer
+ .with_same_site_policy(SameSite::Lax)
+ .with_secure(false)
+
+ // FIXME: I think this can be configurable, but most people are going to be insecurely
+ // running this, which means `secure` needs to be false otherwise the cookie won't
+ // be sent.
+ // if env::var("STUMP_PROFILE").unwrap_or_else(|_| "release".into()) == "release" {
+ // sesssion_layer
+ // .with_same_site_policy(SameSite::None)
+ // .with_secure(true)
+ // } else {
+ // sesssion_layer
+ // .with_same_site_policy(SameSite::Lax)
+ // .with_secure(false)
+ // }
}
diff --git a/apps/server/src/config/state.rs b/apps/server/src/config/state.rs
index 7b4866c08..2c46888f7 100644
--- a/apps/server/src/config/state.rs
+++ b/apps/server/src/config/state.rs
@@ -1,8 +1,15 @@
use std::sync::Arc;
-use axum::Extension;
-use stump_core::config::Ctx;
+use axum::extract::State;
+use axum_macros::FromRequestParts;
+use stump_core::prelude::Ctx;
// TODO: I don't feel like I need this module... Unless I add things to it..
+pub type AppState = Arc<Ctx>;
-pub type State = Extension<Arc<Ctx>>;
+// TODO: is this how to fix the FIXME note in auth extractor?
+#[derive(FromRequestParts, Clone)]
+pub struct _AppState {
+ #[allow(unused)]
+ core_ctx: State<AppState>,
+}
diff --git a/apps/server/src/config/utils.rs b/apps/server/src/config/utils.rs
index 89f287b7d..d7d64539a 100644
--- a/apps/server/src/config/utils.rs
+++ b/apps/server/src/config/utils.rs
@@ -3,3 +3,7 @@ use std::env;
pub(crate) fn get_client_dir() -> String {
env::var("STUMP_CLIENT_DIR").unwrap_or_else(|_| "./dist".to_string())
}
+
+pub(crate) fn is_debug() -> bool {
+ env::var("STUMP_PROFILE").unwrap_or_else(|_| "release".into()) == "debug"
+}
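
A quick illustration of the two environment variables this module reads; the names come from the code above, and the values are examples only.

```bash
# STUMP_PROFILE=debug makes is_debug() return true (used above for the extra debug CORS origins);
# STUMP_CLIENT_DIR overrides where the bundled web app is served from (default "./dist").
STUMP_PROFILE=debug STUMP_CLIENT_DIR=../web/dist cargo run --package stump_server
```
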
diff --git a/apps/server/src/errors.rs b/apps/server/src/errors.rs
index f196cc7cb..f7f292639 100644
--- a/apps/server/src/errors.rs
+++ b/apps/server/src/errors.rs
@@ -8,9 +8,10 @@ use prisma_client_rust::{
};
use stump_core::{
event::InternalCoreTask,
- types::{errors::ProcessFileError, CoreError},
+ prelude::{CoreError, ProcessFileError},
};
use tokio::sync::mpsc;
+use utoipa::ToSchema;
use std::net;
use thiserror::Error;
@@ -71,7 +72,7 @@ impl IntoResponse for AuthError {
}
#[allow(unused)]
-#[derive(Debug, Error)]
+#[derive(Debug, Error, ToSchema)]
pub enum ApiError {
#[error("{0}")]
BadRequest(String),
@@ -94,9 +95,18 @@ pub enum ApiError {
#[error("{0}")]
Redirect(String),
#[error("{0}")]
+ #[schema(value_type = String)]
PrismaError(#[from] QueryError),
}
+impl ApiError {
+ pub fn forbidden_discreet() -> ApiError {
+ ApiError::Forbidden(String::from(
+ "You do not have permission to access this resource.",
+ ))
+ }
+}
+
impl From<CoreError> for ApiError {
fn from(err: CoreError) -> Self {
match err {
diff --git a/apps/server/src/main.rs b/apps/server/src/main.rs
index 361a5251f..548ae982b 100644
--- a/apps/server/src/main.rs
+++ b/apps/server/src/main.rs
@@ -1,6 +1,6 @@
use std::net::SocketAddr;
-use axum::{Extension, Router};
+use axum::Router;
use errors::{ServerError, ServerResult};
use stump_core::{config::logging::init_tracing, StumpCore};
use tracing::{error, info, trace};
@@ -34,6 +34,7 @@ async fn main() -> ServerResult<()> {
return Err(ServerError::ServerStartError(err.to_string()));
}
let stump_environment = stump_environment.unwrap();
+ let port = stump_environment.port.unwrap_or(10801);
// Note: init_tracing after loading the environment so the correct verbosity
// level is used for logging.
@@ -50,16 +51,18 @@ async fn main() -> ServerResult<()> {
}
let server_ctx = core.get_context();
+ let app_state = server_ctx.arced();
+ let cors_layer = cors::get_cors_layer(port);
info!("{}", core.get_shadow_text());
let app = Router::new()
- .merge(routers::mount())
- .layer(Extension(server_ctx.arced()))
+ .merge(routers::mount(app_state.clone()))
+ .with_state(app_state.clone())
.layer(session::get_session_layer())
- .layer(cors::get_cors_layer());
+ .layer(cors_layer);
- let addr = SocketAddr::from(([0, 0, 0, 0], stump_environment.port.unwrap_or(10801)));
+ let addr = SocketAddr::from(([0, 0, 0, 0], port));
info!("⚡️ Stump HTTP server starting on http://{}", addr);
axum::Server::bind(&addr)
diff --git a/apps/server/src/middleware/auth.rs b/apps/server/src/middleware/auth.rs
index 9e187e6e1..4eef02336 100644
--- a/apps/server/src/middleware/auth.rs
+++ b/apps/server/src/middleware/auth.rs
@@ -1,10 +1,8 @@
-use std::sync::Arc;
-
use async_trait::async_trait;
use axum::{
body::BoxBody,
- extract::{FromRequest, RequestParts},
- http::{header, Method, StatusCode},
+ extract::{FromRef, FromRequestParts},
+ http::{header, request::Parts, Method, StatusCode},
response::{IntoResponse, Response},
};
use axum_sessions::SessionHandle;
@@ -12,28 +10,43 @@ use prisma_client_rust::{
prisma_errors::query_engine::{RecordNotFound, UniqueKeyViolation},
QueryError,
};
-use stump_core::{config::Ctx, prisma::user, types::User};
+use stump_core::{db::models::User, prisma::user};
use tracing::{error, trace};
-use crate::utils::{decode_base64_credentials, verify_password};
+use crate::{
+ config::state::AppState,
+ utils::{decode_base64_credentials, verify_password},
+};
pub struct Auth;
#[async_trait]
-impl<B> FromRequest<B> for Auth
+impl<S> FromRequestParts<S> for Auth
where
- B: Send,
+ AppState: FromRef<S>,
+ S: Send + Sync,
{
type Rejection = Response;
- async fn from_request(req: &mut RequestParts<B>) -> Result<Self, Self::Rejection> {
+ async fn from_request_parts(
+ parts: &mut Parts,
+ state: &S,
+ ) -> Result<Self, Self::Rejection> {
// Note: this is fine, right? I mean, it's not like we're doing anything
// on a OPTIONS request, right? Right? 👀
- if req.method() == Method::OPTIONS {
+ if parts.method == Method::OPTIONS {
return Ok(Self);
}
- let session_handle = req.extensions().get::<SessionHandle>().unwrap();
+ let state = AppState::from_ref(state);
+ let session_handle =
+ parts.extensions.get::<SessionHandle>().ok_or_else(|| {
+ (
+ StatusCode::INTERNAL_SERVER_ERROR,
+ "Failed to extract session handle",
+ )
+ .into_response()
+ })?;
let session = session_handle.read().await;
if let Some(user) = session.get::<User>("user") {
@@ -44,23 +57,16 @@ where
// drop so we don't deadlock when writing to the session lol oy vey
drop(session);
- let ctx = req.extensions().get::<Arc<Ctx>>().unwrap();
-
- // TODO: figure me out plz
- // let cookie_jar = req.extensions().get::().unwrap();
-
- // if let Some(cookie) = cookie_jar.get("stump_session") {
- // println!("cookie: {:?}", cookie);
- // }
-
- let auth_header = req
- .headers()
+ let auth_header = parts
+ .headers
.get(header::AUTHORIZATION)
.and_then(|value| value.to_str().ok());
- let is_opds = req.uri().path().starts_with("/opds");
+ let is_opds = parts.uri.path().starts_with("/opds");
+ let has_auth_header = auth_header.is_some();
+ trace!(is_opds, has_auth_header, uri = ?parts.uri, "Checking auth header");
- if auth_header.is_none() {
+ if !has_auth_header {
if is_opds {
return Err(BasicAuth.into_response());
}
@@ -69,7 +75,6 @@ where
}
let auth_header = auth_header.unwrap();
-
if !auth_header.starts_with("Basic ") || auth_header.len() <= 6 {
return Err((StatusCode::UNAUTHORIZED).into_response());
}
@@ -83,7 +88,7 @@ where
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response()
})?;
- let user = ctx
+ let user = state
.db
.user()
.find_unique(user::username::equals(decoded_credentials.username.clone()))
@@ -136,23 +141,26 @@ where
///
/// Router::new()
/// .layer(from_extractor::<AdminGuard>())
-/// .layer(from_extractor::<Auth>());
+/// .layer(from_extractor_with_state::<Auth, AppState>(app_state));
/// ```
pub struct AdminGuard;
#[async_trait]
-impl<B> FromRequest<B> for AdminGuard
+impl<S> FromRequestParts<S> for AdminGuard
where
- B: Send,
+ S: Send + Sync,
{
type Rejection = StatusCode;
- async fn from_request(req: &mut RequestParts<B>) -> Result<Self, Self::Rejection> {
- if req.method() == Method::OPTIONS {
+ async fn from_request_parts(
+ parts: &mut Parts,
+ _: &S,
+ ) -> Result<Self, Self::Rejection> {
+ if parts.method == Method::OPTIONS {
return Ok(Self);
}
- let session_handle = req.extensions().get::<SessionHandle>().unwrap();
+ let session_handle = parts.extensions.get::<SessionHandle>().unwrap();
let session = session_handle.read().await;
if let Some(user) = session.get::<User>("user") {
diff --git a/apps/server/src/routers/api/auth.rs b/apps/server/src/routers/api/auth.rs
deleted file mode 100644
index f8f1cf922..000000000
--- a/apps/server/src/routers/api/auth.rs
+++ /dev/null
@@ -1,132 +0,0 @@
-use axum::{
- routing::{get, post},
- Extension, Json, Router,
-};
-use axum_sessions::extractors::{ReadableSession, WritableSession};
-use stump_core::{
- prisma::{user, user_preferences},
- types::{enums::UserRole, LoginOrRegisterArgs, User},
-};
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- utils::{self, verify_password},
-};
-
-pub(crate) fn mount() -> Router {
- Router::new().nest(
- "/auth",
- Router::new()
- .route("/me", get(viewer))
- .route("/login", post(login))
- .route("/logout", post(logout))
- .route("/register", post(register)),
- )
-}
-
-async fn viewer(session: ReadableSession) -> ApiResult<Json<User>> {
- if let Some(user) = session.get::<User>("user") {
- Ok(Json(user))
- } else {
- Err(ApiError::Unauthorized)
- }
-}
-
-// Wow, this is really ugly syntax for state extraction imo...
-async fn login(
- Json(input): Json<LoginOrRegisterArgs>,
- Extension(ctx): State,
- mut session: WritableSession,
-) -> ApiResult<Json<User>> {
- let db = ctx.get_db();
-
- if let Some(user) = session.get::<User>("user") {
- if input.username == user.username {
- return Ok(Json(user));
- }
- }
-
- let fetched_user = db
- .user()
- .find_unique(user::username::equals(input.username.to_owned()))
- .with(user::user_preferences::fetch())
- .exec()
- .await?;
-
- if let Some(db_user) = fetched_user {
- let matches = verify_password(&db_user.hashed_password, &input.password)?;
- if !matches {
- return Err(ApiError::Unauthorized);
- }
-
- let user: User = db_user.into();
- session.insert("user", user.clone()).unwrap();
-
- return Ok(Json(user));
- }
-
- Err(ApiError::Unauthorized)
-}
-
-async fn logout(mut session: WritableSession) -> ApiResult<()> {
- session.destroy();
- Ok(())
-}
-
-pub async fn register(
- Json(input): Json<LoginOrRegisterArgs>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<User>> {
- let db = ctx.get_db();
-
- let has_users = db.user().find_first(vec![]).exec().await?.is_some();
-
- let mut user_role = UserRole::default();
-
- // server owners must register member accounts
- if session.get::<User>("user").is_none() && has_users {
- return Err(ApiError::Forbidden(
- "Must be server owner to register member accounts".to_string(),
- ));
- } else if !has_users {
- // register the user as owner
- user_role = UserRole::ServerOwner;
- }
-
- let hashed_password = bcrypt::hash(&input.password, utils::get_hash_cost())?;
-
- let created_user = db
- .user()
- .create(
- input.username.to_owned(),
- hashed_password,
- vec![user::role::set(user_role.into())],
- )
- .exec()
- .await?;
-
- // FIXME: these next two queries will be removed once nested create statements are
- // supported on the prisma client. Until then, this ugly mess is necessary.
- let _user_preferences = db
- .user_preferences()
- .create(vec![user_preferences::user::connect(user::id::equals(
- created_user.id.clone(),
- ))])
- .exec()
- .await?;
-
- // This *really* shouldn't fail, so I am using unwrap here. It also doesn't
- // matter too much in the long run since this query will go away once above fixme
- // is resolved.
- let user = db
- .user()
- .find_unique(user::id::equals(created_user.id))
- .with(user::user_preferences::fetch())
- .exec()
- .await?
- .unwrap();
-
- Ok(Json(user.into()))
-}
diff --git a/apps/server/src/routers/api/job.rs b/apps/server/src/routers/api/job.rs
deleted file mode 100644
index a7de614c0..000000000
--- a/apps/server/src/routers/api/job.rs
+++ /dev/null
@@ -1,68 +0,0 @@
-use axum::{
- extract::Path,
- middleware::from_extractor,
- routing::{delete, get},
- Extension, Json, Router,
-};
-use stump_core::{event::InternalCoreTask, job::JobReport};
-use tokio::sync::oneshot;
-use tracing::debug;
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- middleware::auth::{AdminGuard, Auth},
-};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .nest(
- "/jobs",
- Router::new()
- .route("/", get(get_job_reports).delete(delete_job_reports))
- .route("/:id/cancel", delete(cancel_job)),
- )
- .layer(from_extractor::<AdminGuard>())
- .layer(from_extractor::<Auth>())
-}
-
-/// Get all running/pending jobs.
-async fn get_job_reports(Extension(ctx): State) -> ApiResult<Json<Vec<JobReport>>> {
- let (task_tx, task_rx) = oneshot::channel();
-
- ctx.internal_task(InternalCoreTask::GetJobReports(task_tx))
- .map_err(|e| {
- ApiError::InternalServerError(format!(
- "Failed to submit internal task: {}",
- e
- ))
- })?;
-
- let res = task_rx.await.map_err(|e| {
- ApiError::InternalServerError(format!("Failed to get job report: {}", e))
- })??;
-
- Ok(Json(res))
-}
-
-async fn delete_job_reports(Extension(ctx): State) -> ApiResult<()> {
- let result = ctx.db.job().delete_many(vec![]).exec().await?;
- debug!("Deleted {} job reports", result);
- Ok(())
-}
-
-async fn cancel_job(Extension(ctx): State, Path(job_id): Path<String>) -> ApiResult<()> {
- let (task_tx, task_rx) = oneshot::channel();
-
- ctx.internal_task(InternalCoreTask::CancelJob {
- job_id,
- return_sender: task_tx,
- })
- .map_err(|e| {
- ApiError::InternalServerError(format!("Failed to submit internal task: {}", e))
- })?;
-
- Ok(task_rx.await.map_err(|e| {
- ApiError::InternalServerError(format!("Failed to cancel job: {}", e))
- })??)
-}
diff --git a/apps/server/src/routers/api/library.rs b/apps/server/src/routers/api/library.rs
deleted file mode 100644
index 3aaeeb19c..000000000
--- a/apps/server/src/routers/api/library.rs
+++ /dev/null
@@ -1,483 +0,0 @@
-use axum::{
- extract::{Path, Query},
- middleware::from_extractor,
- routing::get,
- Extension, Json, Router,
-};
-use axum_sessions::extractors::ReadableSession;
-use prisma_client_rust::{raw, Direction};
-use serde::Deserialize;
-use std::{path, str::FromStr};
-use tracing::{debug, error, trace};
-
-use stump_core::{
- db::utils::PrismaCountTrait,
- fs::{image, media_file},
- job::LibraryScanJob,
- prisma::{
- library, library_options, media,
- series::{self, OrderByParam as SeriesOrderByParam},
- tag,
- },
- types::{
- CreateLibraryArgs, FindManyTrait, LibrariesStats, Library, LibraryScanMode,
- Pageable, PagedRequestParams, QueryOrder, Series, UpdateLibraryArgs,
- },
-};
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- middleware::auth::Auth,
- utils::{
- get_session_admin_user,
- http::{ImageResponse, PageableTrait},
- },
-};
-
-// TODO: .layer(from_extractor::()) where needed. Might need to remove some
-// of the nesting
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/libraries", get(get_libraries).post(create_library))
- .route("/libraries/stats", get(get_libraries_stats))
- .nest(
- "/libraries/:id",
- Router::new()
- .route(
- "/",
- get(get_library_by_id)
- .put(update_library)
- .delete(delete_library),
- )
- .route("/scan", get(scan_library))
- .route("/series", get(get_library_series))
- .route("/thumbnail", get(get_library_thumbnail)),
- )
- .layer(from_extractor::<Auth>())
-}
-
-/// Get all libraries
-async fn get_libraries(
- Extension(ctx): State,
- pagination: Query<PagedRequestParams>,
-) -> ApiResult<Json<Pageable<Vec<Library>>>> {
- let libraries = ctx
- .db
- .library()
- .find_many(vec![])
- .with(library::tags::fetch(vec![]))
- .with(library::library_options::fetch())
- .order_by(library::name::order(Direction::Asc))
- .exec()
- .await?
- .into_iter()
- .map(|l| l.into())
- .collect::<Vec<Library>>();
-
- let unpaged = pagination.unpaged.unwrap_or(false);
-
- if unpaged {
- return Ok(Json(libraries.into()));
- }
-
- Ok(Json((libraries, pagination.page_params()).into()))
-}
-
-/// Get stats for all libraries
-async fn get_libraries_stats(Extension(ctx): State) -> ApiResult<Json<LibrariesStats>> {
- let db = ctx.get_db();
-
- // TODO: maybe add more, like missingBooks, idk
- let stats = db
- ._query_raw::<LibrariesStats>(raw!(
- "SELECT COUNT(*) as book_count, IFNULL(SUM(media.size),0) as total_bytes, IFNULL(series_count,0) as series_count FROM media INNER JOIN (SELECT COUNT(*) as series_count FROM series)"
- ))
- .exec()
- .await?
- .into_iter()
- .next();
-
- if stats.is_none() {
- return Err(ApiError::InternalServerError(
- "Failed to compute stats for libraries".to_string(),
- ));
- }
-
- Ok(Json(stats.unwrap()))
-}
-
-/// Get a library by id, if the current user has access to it. Library `series`, `media`
-/// and `tags` relations are loaded on this route.
-async fn get_library_by_id(
- Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<Json<Library>> {
- let db = ctx.get_db();
-
- // FIXME: this query is a pain to add series->media relation counts.
- // This should be much better in https://github.com/Brendonovich/prisma-client-rust/issues/24
- // but for now I kinda have to load all the media...
- let library = db
- .library()
- .find_unique(library::id::equals(id.clone()))
- .with(library::series::fetch(vec![]))
- .with(library::library_options::fetch())
- .with(library::tags::fetch(vec![]))
- .exec()
- .await?;
-
- if library.is_none() {
- return Err(ApiError::NotFound(format!(
- "Library with id {} not found",
- id
- )));
- }
-
- let library = library.unwrap();
-
- Ok(Json(library.into()))
-}
-
-// FIXME: this is absolutely atrocious...
-// This should be much better once https://github.com/Brendonovich/prisma-client-rust/issues/24 is added
-// but for now I will have this disgustingly gross and ugly work around...
-///Returns the series in a given library. Will *not* load the media relation.
-async fn get_library_series(
- Path(id): Path<String>,
- pagination: Query<PagedRequestParams>,
- Extension(ctx): State,
-) -> ApiResult<Json<Pageable<Vec<Series>>>> {
- let db = ctx.get_db();
-
- let unpaged = pagination.unpaged.unwrap_or(false);
- let page_params = pagination.page_params();
- let order_by_param: SeriesOrderByParam =
- QueryOrder::from(page_params.clone()).try_into()?;
-
- let base_query = db
- .series()
- // TODO: add media relation count....
- .find_many(vec![series::library_id::equals(Some(id.clone()))])
- .order_by(order_by_param);
-
- let series = match unpaged {
- true => base_query.exec().await?,
- false => base_query.paginated(page_params.clone()).exec().await?,
- };
-
- let series_ids = series.iter().map(|s| s.id.clone()).collect();
-
- let media_counts = db.series_media_count(series_ids).await?;
-
- let series = series
- .iter()
- .map(|s| {
- let media_count = match media_counts.get(&s.id) {
- Some(count) => count.to_owned(),
- _ => 0,
- } as i64;
-
- (s.to_owned(), media_count).into()
- })
- .collect::<Vec<Series>>();
-
- if unpaged {
- return Ok(Json(series.into()));
- }
-
- let series_count = db.series_count(id).await?;
-
- Ok(Json((series, series_count, page_params).into()))
-}
-
-// /// Get the thumbnail image for a library by id, if the current user has access to it.
-async fn get_library_thumbnail(
- Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<ImageResponse> {
- let db = ctx.get_db();
-
- let library_series = db
- .series()
- .find_many(vec![series::library_id::equals(Some(id.clone()))])
- .with(series::media::fetch(vec![]).order_by(media::name::order(Direction::Asc)))
- .exec()
- .await?;
-
- // TODO: error handling
-
- let series = library_series.first().unwrap();
-
- let media = series.media()?.first().unwrap();
-
- Ok(media_file::get_page(media.path.as_str(), 1)?.into())
-}
-
-#[derive(Deserialize)]
-struct ScanQueryParam {
- scan_mode: Option<String>,
-}
-
-/// Queue a ScannerJob to scan the library by id. The job, when started, is
-/// executed in a separate thread.
-async fn scan_library(
- Path(id): Path<String>,
- Extension(ctx): State,
- query: Query<ScanQueryParam>,
- session: ReadableSession, // TODO: admin middleware
-) -> Result<(), ApiError> {
- let db = ctx.get_db();
- let _user = get_session_admin_user(&session)?;
-
- let lib = db
- .library()
- .find_unique(library::id::equals(id.clone()))
- .exec()
- .await?;
-
- if lib.is_none() {
- return Err(ApiError::NotFound(format!(
- "Library with id {} not found",
- id
- )));
- }
-
- let lib = lib.unwrap();
-
- let scan_mode = query.scan_mode.to_owned().unwrap_or_default();
- let scan_mode = LibraryScanMode::from_str(&scan_mode)
- .map_err(|e| ApiError::BadRequest(format!("Invalid scan mode: {}", e)))?;
-
- // TODO: should this just be an error?
- if scan_mode != LibraryScanMode::None {
- let job = LibraryScanJob {
- path: lib.path,
- scan_mode,
- };
-
- return Ok(ctx.spawn_job(Box::new(job))?);
- }
-
- Ok(())
-}
-
-// /// Create a new library. Will queue a ScannerJob to scan the library, and return the library
-async fn create_library(
- Json(input): Json<CreateLibraryArgs>,
- Extension(ctx): State,
-) -> ApiResult<Json<Library>> {
- let db = ctx.get_db();
-
- // TODO: check library is not a parent of another library
- if !path::Path::new(&input.path).exists() {
- return Err(ApiError::BadRequest(format!(
- "The library directory does not exist: {}",
- input.path
- )));
- }
-
- // TODO: refactor once nested create is supported
- // https://github.com/Brendonovich/prisma-client-rust/issues/44
-
- let library_options_arg = input.library_options.to_owned().unwrap_or_default();
-
- // FIXME: until nested create, library_options.library_id will be NULL in the database... unless I run ANOTHER
- // update. Which I am not doing lol.
- let library_options = db
- .library_options()
- .create(vec![
- library_options::convert_rar_to_zip::set(
- library_options_arg.convert_rar_to_zip,
- ),
- library_options::hard_delete_conversions::set(
- library_options_arg.hard_delete_conversions,
- ),
- library_options::create_webp_thumbnails::set(
- library_options_arg.create_webp_thumbnails,
- ),
- library_options::library_pattern::set(
- library_options_arg.library_pattern.to_string(),
- ),
- ])
- .exec()
- .await?;
-
- let lib = db
- .library()
- .create(
- input.name.to_owned(),
- input.path.to_owned(),
- library_options::id::equals(library_options.id),
- vec![library::description::set(input.description.to_owned())],
- )
- .exec()
- .await?;
-
- // FIXME: try and do multiple connects again soon, batching is WAY better than
- // previous solution but still...
- if let Some(tags) = input.tags.to_owned() {
- let tag_connects = tags.into_iter().map(|tag| {
- db.library().update(
- library::id::equals(lib.id.clone()),
- vec![library::tags::connect(vec![tag::id::equals(tag.id)])],
- )
- });
-
- db._batch(tag_connects).await?;
- }
-
- let scan_mode = input.scan_mode.unwrap_or_default();
-
- // `scan` is not a required field, however it will default to BATCHED if not provided
- if scan_mode != LibraryScanMode::None {
- ctx.spawn_job(Box::new(LibraryScanJob {
- path: lib.path.clone(),
- scan_mode,
- }))?;
- }
-
- Ok(Json(lib.into()))
-}
-
-/// Update a library by id, if the current user is a SERVER_OWNER.
-async fn update_library(
- Extension(ctx): State,
- Path(id): Path<String>,
- Json(input): Json<UpdateLibraryArgs>,
-) -> ApiResult<Json<Library>> {
- let db = ctx.get_db();
-
- if !path::Path::new(&input.path).exists() {
- return Err(ApiError::BadRequest(format!(
- "Updated path does not exist: {}",
- input.path
- )));
- }
-
- let library_options = input.library_options.to_owned();
-
- db.library_options()
- .update(
- library_options::id::equals(library_options.id.unwrap_or_default()),
- vec![
- library_options::convert_rar_to_zip::set(
- library_options.convert_rar_to_zip,
- ),
- library_options::hard_delete_conversions::set(
- library_options.hard_delete_conversions,
- ),
- library_options::create_webp_thumbnails::set(
- library_options.create_webp_thumbnails,
- ),
- ],
- )
- .exec()
- .await?;
-
- let mut batches = vec![];
-
- // FIXME: this is disgusting. I don't understand why the library::tag::connect doesn't
- // work with multiple tags, nor why providing multiple library::tag::connect params
- // doesn't work. Regardless, absolutely do NOT keep this. Correction required,
- // highly inefficient queries.
-
- if let Some(tags) = input.tags.to_owned() {
- for tag in tags {
- batches.push(db.library().update(
- library::id::equals(id.clone()),
- vec![library::tags::connect(vec![tag::id::equals(
- tag.id.to_owned(),
- )])],
- ));
- }
- }
-
- if let Some(removed_tags) = input.removed_tags.to_owned() {
- for tag in removed_tags {
- batches.push(db.library().update(
- library::id::equals(id.clone()),
- vec![library::tags::disconnect(vec![tag::id::equals(
- tag.id.to_owned(),
- )])],
- ));
- }
- }
-
- if !batches.is_empty() {
- db._batch(batches).await?;
- }
-
- let updated = db
- .library()
- .update(
- library::id::equals(id),
- vec![
- library::name::set(input.name.to_owned()),
- library::path::set(input.path.to_owned()),
- library::description::set(input.description.to_owned()),
- ],
- )
- .with(library::tags::fetch(vec![]))
- .exec()
- .await?;
-
- let scan_mode = input.scan_mode.unwrap_or_default();
-
- // `scan` is not a required field, however it will default to BATCHED if not provided
- if scan_mode != LibraryScanMode::None {
- ctx.spawn_job(Box::new(LibraryScanJob {
- path: updated.path.clone(),
- scan_mode,
- }))?;
- }
-
- Ok(Json(updated.into()))
-}
-
-/// Delete a library by id, if the current user is a SERVER_OWNER.
-async fn delete_library(
- Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<Json<String>> {
- let db = ctx.get_db();
-
- trace!("Attempting to delete library with ID {}", &id);
-
- let deleted = db
- .library()
- .delete(library::id::equals(id.clone()))
- .include(library::include!({
- series: include {
- media: select {
- id
- }
- }
- }))
- .exec()
- .await?;
-
- let media_ids = deleted
- .series
- .into_iter()
- .flat_map(|series| series.media)
- .map(|media| media.id)
- .collect::<Vec<String>>();
-
- if !media_ids.is_empty() {
- trace!("List of deleted media IDs: {:?}", media_ids);
-
- debug!(
- "Attempting to delete {} media thumbnails (if present)",
- media_ids.len()
- );
-
- if let Err(err) = image::remove_thumbnails(&media_ids) {
- error!("Failed to remove thumbnails for library media: {:?}", err);
- } else {
- debug!("Removed thumbnails for library media (if present)");
- }
- }
-
- Ok(Json(deleted.id))
-}
diff --git a/apps/server/src/routers/api/media.rs b/apps/server/src/routers/api/media.rs
deleted file mode 100644
index 8a9bf7857..000000000
--- a/apps/server/src/routers/api/media.rs
+++ /dev/null
@@ -1,361 +0,0 @@
-use axum::{
- extract::{Path, Query},
- middleware::from_extractor,
- routing::{get, put},
- Extension, Json, Router,
-};
-use axum_sessions::extractors::ReadableSession;
-use prisma_client_rust::{raw, Direction};
-use stump_core::{
- config::get_config_dir,
- db::utils::PrismaCountTrait,
- fs::{image, media_file},
- prisma::{
- media::{self, OrderByParam as MediaOrderByParam},
- read_progress, user,
- },
- types::{
- ContentType, FindManyTrait, Media, Pageable, PagedRequestParams, QueryOrder,
- ReadProgress,
- },
-};
-use tracing::trace;
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- middleware::auth::Auth,
- utils::{
- get_session_user,
- http::{ImageResponse, NamedFile, PageableTrait},
- },
-};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/media", get(get_media))
- .route("/media/duplicates", get(get_duplicate_media))
- .route("/media/keep-reading", get(get_reading_media))
- .nest(
- "/media/:id",
- Router::new()
- .route("/", get(get_media_by_id))
- .route("/file", get(get_media_file))
- .route("/convert", get(convert_media))
- .route("/thumbnail", get(get_media_thumbnail))
- .route("/page/:page", get(get_media_page))
- .route("/progress/:page", put(update_media_progress)),
- )
- .layer(from_extractor::<Auth>())
-}
-
-/// Get all media accessible to the requester. This is a paginated request, and
-/// has various pagination params available.
-async fn get_media(
- pagination: Query<PagedRequestParams>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Pageable<Vec<Media>>>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let unpaged = pagination.unpaged.unwrap_or(false);
- let page_params = pagination.page_params();
- let order_by_param: MediaOrderByParam =
- QueryOrder::from(page_params.clone()).try_into()?;
-
- let base_query = db
- .media()
- .find_many(vec![])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .order_by(order_by_param);
-
- if unpaged {
- return Ok(Json(
- base_query
- .exec()
- .await?
- .into_iter()
- .map(|m| m.into())
- .collect::<Vec<Media>>()
- .into(),
- ));
- }
-
- let count = db.media_count().await?;
-
- let media = base_query
- .paginated(page_params.clone())
- .exec()
- .await?
- .into_iter()
- .map(|m| m.into())
- .collect::<Vec<Media>>();
-
- Ok(Json((media, count, page_params).into()))
-}
-
-/// Get all media with identical checksums. This heavily implies duplicate files.
-/// This is a paginated request, and has various pagination params available.
-async fn get_duplicate_media(
- pagination: Query<PagedRequestParams>,
- Extension(ctx): State,
- _session: ReadableSession,
-) -> ApiResult<Json<Pageable<Vec<Media>>>> {
- let db = ctx.get_db();
-
- let media: Vec<Media> = db
- ._query_raw(raw!("SELECT * FROM media WHERE checksum IN (SELECT checksum FROM media GROUP BY checksum HAVING COUNT(*) > 1)"))
- .exec()
- .await?;
-
- let unpaged = pagination.unpaged.unwrap_or(false);
-
- if unpaged {
- return Ok(Json(media.into()));
- }
-
- Ok(Json((media, pagination.page_params()).into()))
-}
-
-// TODO: I will need to add epub progress in here SOMEHOW... this will be rather
-// difficult...
-// TODO: paginate?
-/// Get all media which the requester has progress for that is less than the
-/// total number of pages available (i.e not completed).
-async fn get_reading_media(
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Vec<Media>>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- Ok(Json(
- db.media()
- .find_many(vec![media::read_progresses::some(vec![
- read_progress::user_id::equals(user_id.clone()),
- read_progress::page::gt(0),
- ])])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- read_progress::page::gt(0),
- ]))
- .order_by(media::updated_at::order(Direction::Desc))
- .exec()
- .await?
- .into_iter()
- .filter(|m| match m.read_progresses() {
- // Read progresses relation on media is one to many, there is a dual key
- // on read_progresses table linking a user and media. Therefore, there should
- // only be 1 item in this vec for each media resulting from the query.
- Ok(progresses) => {
- if progresses.len() != 1 {
- return false;
- }
-
- let progress = &progresses[0];
-
- if let Some(_epubcfi) = progress.epubcfi.as_ref() {
- // TODO: figure something out... might just need a `completed` field in progress TBH.
- false
- } else {
- progress.page < m.pages
- }
- },
- _ => false,
- })
- .map(|m| m.into())
- .collect(),
- ))
-}
-
-async fn get_media_by_id(
-	Path(id): Path<String>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Media>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let book = db
- .media()
- .find_unique(media::id::equals(id.clone()))
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .exec()
- .await?;
-
- if book.is_none() {
- return Err(ApiError::NotFound(format!(
- "Media with id {} not found",
- id
- )));
- }
-
- Ok(Json(book.unwrap().into()))
-}
-
-async fn get_media_file(
-	Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<NamedFile> {
- let db = ctx.get_db();
-
- let media = db
- .media()
- .find_unique(media::id::equals(id.clone()))
- .exec()
- .await?;
-
- if media.is_none() {
- return Err(ApiError::NotFound(format!(
- "Media with id {} not found",
- id
- )));
- }
-
- let media = media.unwrap();
-
- Ok(NamedFile::open(media.path.clone()).await?)
-}
-
-// TODO: remove this, implement it? maybe?
-async fn convert_media(
-	Path(id): Path<String>,
- Extension(ctx): State,
-) -> Result<(), ApiError> {
- let db = ctx.get_db();
-
- let media = db
- .media()
- .find_unique(media::id::equals(id.clone()))
- .exec()
- .await?;
-
- if media.is_none() {
- return Err(ApiError::NotFound(format!(
- "Media with id {} not found",
- id
- )));
- }
-
- let media = media.unwrap();
-
-	if media.extension != "cbr" && media.extension != "rar" {
- return Err(ApiError::BadRequest(format!(
- "Media with id {} is not a rar file. Stump only supports converting rar/cbr files to zip/cbz.",
- id
- )));
- }
-
- // TODO: write me...
- unimplemented!()
-}
-
-async fn get_media_page(
- Path((id, page)): Path<(String, i32)>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<ImageResponse> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let book = db
- .media()
- .find_unique(media::id::equals(id.clone()))
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .exec()
- .await?;
-
- match book {
- Some(book) => {
- if page > book.pages {
- // FIXME: probably won't work lol
- Err(ApiError::Redirect(format!(
- "/book/{}/read?page={}",
- id, book.pages
- )))
- } else {
- Ok(media_file::get_page(&book.path, page)?.into())
- }
- },
- None => Err(ApiError::NotFound(format!(
- "Media with id {} not found",
- id
- ))),
- }
-}
-
-async fn get_media_thumbnail(
-	Path(id): Path<String>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<ImageResponse> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let webp_path = get_config_dir()
- .join("thumbnails")
- .join(format!("{}.webp", id));
-
- if webp_path.exists() {
- trace!("Found webp thumbnail for media {}", id);
- return Ok((ContentType::WEBP, image::get_image_bytes(webp_path)?).into());
- }
-
- let book = db
- .media()
- .find_unique(media::id::equals(id.clone()))
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .exec()
- .await?;
-
- if book.is_none() {
- return Err(ApiError::NotFound(format!(
- "Media with id {} not found",
- id
- )));
- }
-
- let book = book.unwrap();
-
- Ok(media_file::get_page(book.path.as_str(), 1)?.into())
-}
-
-// FIXME: this doesn't really handle certain errors correctly, e.g. media/user not found
-async fn update_media_progress(
- Path((id, page)): Path<(String, i32)>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<ReadProgress>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- // update the progress, otherwise create it
- Ok(Json(
- db.read_progress()
- .upsert(
- read_progress::UniqueWhereParam::UserIdMediaIdEquals(
- user_id.clone(),
- id.clone(),
- ),
- (
- page,
- media::id::equals(id.clone()),
- user::id::equals(user_id.clone()),
- vec![],
- ),
- vec![read_progress::page::set(page)],
- )
- .exec()
- .await?
- .into(),
- ))
-}
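// A minimal, self-contained sketch of what the raw SQL in `get_duplicate_media`
// above computes: the rows whose checksum appears more than once. The `MediaRow`
// struct below is an illustrative stand-in for the media table, not Stump's
// actual model; it is here only to make the grouping logic explicit.
use std::collections::HashMap;

#[derive(Debug, Clone)]
struct MediaRow {
	id: String,
	checksum: Option<String>,
}

fn duplicate_media(rows: &[MediaRow]) -> Vec<MediaRow> {
	// Count how many rows share each checksum (GROUP BY checksum ... COUNT(*)).
	let mut counts: HashMap<&str, usize> = HashMap::new();
	for row in rows {
		if let Some(checksum) = row.checksum.as_deref() {
			*counts.entry(checksum).or_insert(0) += 1;
		}
	}

	// Keep every row whose checksum occurs more than once (HAVING COUNT(*) > 1).
	rows.iter()
		.filter(|row| {
			row.checksum
				.as_deref()
				.map(|c| counts.get(c).copied().unwrap_or(0) > 1)
				.unwrap_or(false)
		})
		.cloned()
		.collect()
}

fn main() {
	let rows = vec![
		MediaRow { id: "a".into(), checksum: Some("abc123".into()) },
		MediaRow { id: "b".into(), checksum: Some("abc123".into()) },
		MediaRow { id: "c".into(), checksum: Some("ffee00".into()) },
	];
	// Prints rows "a" and "b", which share a checksum.
	println!("{:?}", duplicate_media(&rows));
}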
diff --git a/apps/server/src/routers/api/mod.rs b/apps/server/src/routers/api/mod.rs
index 5d8220274..64ec1115f 100644
--- a/apps/server/src/routers/api/mod.rs
+++ b/apps/server/src/routers/api/mod.rs
@@ -1,60 +1,9 @@
-use axum::{
- routing::{get, post},
- Extension, Json, Router,
-};
-use stump_core::types::{ClaimResponse, StumpVersion};
+use axum::Router;
-use crate::{config::state::State, errors::ApiResult};
+use crate::config::state::AppState;
-mod auth;
-mod epub;
-mod filesystem;
-mod job;
-mod library;
-mod log;
-mod media;
-mod series;
-mod tag;
-mod user;
-mod reading_list;
+pub(crate) mod v1;
-pub(crate) fn mount() -> Router {
- Router::new().nest(
- "/api",
- Router::new()
- .merge(auth::mount())
- .merge(epub::mount())
- .merge(library::mount())
- .merge(media::mount())
- .merge(filesystem::mount())
- .merge(job::mount())
- .merge(log::mount())
- .merge(series::mount())
- .merge(tag::mount())
- .merge(user::mount())
- .merge(reading_list::mount())
- .route("/claim", get(claim))
- .route("/ping", get(ping))
- .route("/version", post(version)),
- )
-}
-
-async fn claim(Extension(ctx): State) -> ApiResult<Json<ClaimResponse>> {
- let db = ctx.get_db();
-
- Ok(Json(ClaimResponse {
- is_claimed: db.user().find_first(vec![]).exec().await?.is_some(),
- }))
-}
-
-async fn ping() -> ApiResult<String> {
- Ok("pong".to_string())
-}
-
-async fn version() -> ApiResult<Json<StumpVersion>> {
- Ok(Json(StumpVersion {
- semver: env!("CARGO_PKG_VERSION").to_string(),
- rev: std::env::var("GIT_REV").ok(),
- compile_time: env!("STATIC_BUILD_DATE").to_string(),
- }))
+pub(crate) fn mount(app_state: AppState) -> Router {
+ Router::new().nest("/api", Router::new().nest("/v1", v1::mount(app_state)))
}
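// A minimal sketch (assuming axum 0.6 and tokio) of the nesting pattern the new
// `mount` above relies on: every route registered under the inner router ends up
// prefixed with `/api/v1`. The `ping` handler, `mount_v1` helper, and port below
// are illustrative placeholders, not Stump's actual code.
use axum::{routing::get, Router};

async fn ping() -> &'static str {
	"pong"
}

fn mount_v1() -> Router {
	Router::new().route("/ping", get(ping))
}

fn mount_api() -> Router {
	// Equivalent to exposing GET /api/v1/ping.
	Router::new().nest("/api", Router::new().nest("/v1", mount_v1()))
}

#[tokio::main]
async fn main() {
	let app = mount_api();
	// Bind to a placeholder address and serve the nested router.
	axum::Server::bind(&"127.0.0.1:10801".parse().unwrap())
		.serve(app.into_make_service())
		.await
		.unwrap();
}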
diff --git a/apps/server/src/routers/api/reading_list.rs b/apps/server/src/routers/api/reading_list.rs
deleted file mode 100644
index e0a34f8bf..000000000
--- a/apps/server/src/routers/api/reading_list.rs
+++ /dev/null
@@ -1,126 +0,0 @@
-use axum::{
- routing::{get, post, put, delete},
- extract::Path,
- Extension, Json, Router,
-};
-use axum_sessions::extractors::{ReadableSession, WritableSession};
-use stump_core::{
- prisma::{reading_list, media, user},
- types::{User, readinglist::ReadingList, Media, readinglist::CreateReadingList},
-};
-use tracing::log::trace;
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- utils::{get_session_user},
-};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/reading-list", get(get_reading_list).post(create_reading_list))
- .nest(
- "/reading-list/:id",
- Router::new()
- .route("/", get(get_reading_list_by_id).put(update_reading_list).delete(delete_reading_list_by_id)),
- )
-}
-
-async fn get_reading_list(
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Vec<ReadingList>>> {
- let user_id = get_session_user(&session)?.id;
-
- Ok(Json(
- ctx.db
- .reading_list()
- .find_many(vec![
- reading_list::creating_user_id::equals(user_id),
- ])
- .exec()
- .await?
- .into_iter()
- .map(|u| u.into())
-			.collect::<Vec<ReadingList>>(),
- ))
-}
-
-async fn create_reading_list(
- Extension(ctx): State,
-	Json(input): Json<CreateReadingList>,
- session: ReadableSession,
-) -> ApiResult<Json<ReadingList>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let created_reading_list = db
- .reading_list()
- .create(
- input.id.to_owned(),
- user::id::equals(user_id.clone()),
- vec![reading_list::media::connect(input.media_ids.iter().map(|id| media::id::equals(id.to_string())).collect())]
- )
- .exec()
- .await?;
-
- Ok(Json(created_reading_list.into()))
-}
-
-async fn get_reading_list_by_id(
-	Path(id): Path<String>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<ReadingList>> {
- let user_id = get_session_user(&session)?.id;
- let db = ctx.get_db();
-
- let reading_list_id = db
- .reading_list()
- .find_unique(reading_list::id::equals(id.clone()))
- .exec()
- .await?;
-
- if reading_list_id.is_none() {
- return Err(ApiError::NotFound(format!(
- "Reading List with id {} not found",
- id
- )));
- }
-
- Ok(Json(reading_list_id.unwrap().into()))
-}
-
-async fn update_reading_list(
-	Path(id): Path<String>,
- Extension(ctx): State,
-	Json(input): Json<CreateReadingList>,
-) -> ApiResult<Json<ReadingList>> {
- let db = ctx.get_db();
-
- let created_reading_list: _ = db
- .reading_list()
- .update(reading_list::id::equals(id.clone()), vec![
- reading_list::media::connect(input.media_ids.iter().map(|id| media::id::equals(id.to_string())).collect())
- ])
- .exec()
- .await?;
-
- Ok(Json(created_reading_list.into()))
-}
-
-async fn delete_reading_list_by_id(
-	Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<Json<String>> {
- let db = ctx.get_db();
-
- trace!("Attempting to delete reading list with ID {}", &id);
-
- let deleted = db
- .reading_list()
- .delete(reading_list::id::equals(id.clone()))
- .exec()
- .await?;
-
- Ok(Json(deleted.id))
-}
\ No newline at end of file
diff --git a/apps/server/src/routers/api/series.rs b/apps/server/src/routers/api/series.rs
deleted file mode 100644
index 3549d3b87..000000000
--- a/apps/server/src/routers/api/series.rs
+++ /dev/null
@@ -1,301 +0,0 @@
-use axum::{
- extract::{Path, Query},
- middleware::from_extractor,
- routing::get,
- Extension, Json, Router,
-};
-use axum_sessions::extractors::ReadableSession;
-use prisma_client_rust::Direction;
-use serde::Deserialize;
-use stump_core::{
- db::utils::PrismaCountTrait,
- fs::{image, media_file},
- prisma::{
- media::{self, OrderByParam as MediaOrderByParam},
- read_progress, series,
- },
- types::{
- ContentType, FindManyTrait, Media, Pageable, PagedRequestParams, QueryOrder,
- Series,
- },
-};
-use tracing::trace;
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- middleware::auth::Auth,
- utils::{
- get_session_user,
- http::{ImageResponse, PageableTrait},
- },
-};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/series", get(get_series))
- .nest(
- "/series/:id",
- Router::new()
- .route("/", get(get_series_by_id))
- .route("/media", get(get_series_media))
- .route("/media/next", get(get_next_in_series))
- .route("/thumbnail", get(get_series_thumbnail)),
- )
-		.layer(from_extractor::<Auth>())
-}
-
-#[derive(Deserialize)]
-struct LoadMedia {
-	load_media: Option<bool>,
-}
-
-/// Get all series accessible by the user. This is a paginated response, and
-/// accepts various paginated request params.
-async fn get_series(
-	load: Query<LoadMedia>,
-	pagination: Query<PagedRequestParams>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Pageable<Vec<Series>>>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let load_media = load.load_media.unwrap_or(false);
-
- let action = db.series();
- let action = action.find_many(vec![]);
-
- let query = match load_media {
- true => action.with(
- series::media::fetch(vec![])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .order_by(media::name::order(Direction::Asc)),
- ),
- false => action,
- };
-
- let series = query
- .exec()
- .await?
- .into_iter()
- .map(|s| s.into())
-		.collect::<Vec<Series>>();
-
- let unpaged = pagination.unpaged.unwrap_or(false);
- if unpaged {
- return Ok(Json(series.into()));
- }
-
- Ok(Json((series, pagination.page_params()).into()))
-}
-
-/// Get a series by ID. Accepts an optional query param `load_media` that will load the
-/// media relation (i.e. the media entities will be loaded and sent with the response).
-// #[get("/series/?")]
-async fn get_series_by_id(
-	Path(id): Path<String>,
- Extension(ctx): State,
-	load_media: Query<LoadMedia>,
- session: ReadableSession,
-) -> ApiResult<Json<Series>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let load_media = load_media.load_media.unwrap_or(false);
- let mut query = db.series().find_unique(series::id::equals(id.clone()));
-
- if load_media {
- query = query.with(
- series::media::fetch(vec![])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .order_by(media::name::order(Direction::Asc)),
- );
- }
-
- let series = query.exec().await?;
-
- if series.is_none() {
- return Err(ApiError::NotFound(format!(
- "Series with id {} not found",
- id
- )));
- }
-
- if !load_media {
- // FIXME: PCR doesn't support relation counts yet!
- // let media_count = db
- // .media()
- // .count(vec![media::series_id::equals(Some(id.clone()))])
- // .exec()
- // .await?;
- let series_media_count = db.media_in_series_count(id).await?;
-
- return Ok(Json((series.unwrap(), series_media_count).into()));
- }
-
- Ok(Json(series.unwrap().into()))
-}
-
-/// Returns the thumbnail image for a series
-// #[get("/series//thumbnail")]
-async fn get_series_thumbnail(
-	Path(id): Path<String>,
- Extension(ctx): State,
-) -> ApiResult<ImageResponse> {
- let db = ctx.get_db();
-
- let media = db
- .media()
- .find_first(vec![media::series_id::equals(Some(id.clone()))])
- .order_by(media::name::order(Direction::Asc))
- .exec()
- .await?;
-
- if media.is_none() {
- return Err(ApiError::NotFound(format!(
- "Series with id {} not found",
- id
- )));
- }
-
- let media = media.unwrap();
-
- if let Some(webp_path) = image::get_thumbnail_path(&media.id) {
- trace!("Found webp thumbnail for series {}", &id);
- return Ok((ContentType::WEBP, image::get_image_bytes(webp_path)?).into());
- }
-
- Ok(media_file::get_page(media.path.as_str(), 1)?.into())
-}
-
-/// Returns the media in a given series. This is a paginated response, and
-/// accepts various paginated request params.
-// #[get("/series//media?&")]
-async fn get_series_media(
-	Path(id): Path<String>,
- Extension(ctx): State,
-	pagination: Query<PagedRequestParams>,
- session: ReadableSession,
-) -> ApiResult<Json<Pageable<Vec<Media>>>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let unpaged = pagination.unpaged.unwrap_or(false);
- let page_params = pagination.page_params();
- let order_by_param: MediaOrderByParam =
- QueryOrder::from(page_params.clone()).try_into()?;
-
- let base_query = db
- .media()
- .find_many(vec![media::series_id::equals(Some(id.clone()))])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .order_by(order_by_param);
-
- let media = if unpaged {
- base_query.exec().await?
- } else {
- base_query.paginated(page_params.clone()).exec().await?
- };
-
-	let media = media.into_iter().map(|m| m.into()).collect::<Vec<Media>>();
-
- if unpaged {
- return Ok(Json(media.into()));
- }
-
- // TODO: investigate this, I am getting incorrect counts here...
- // FIXME: AHAHAHAHAHAHA, PCR doesn't support relation counts! I legit thought I was
- // going OUTSIDE my fuckin mind
- // FIXME: PCR doesn't support relation counts yet!
- // let series_media_count = db
- // .media()
- // .count(vec![media::series_id::equals(Some(id))])
- // .exec()
- // .await?;
- let series_media_count = db.media_in_series_count(id).await?;
-
- Ok(Json((media, series_media_count, page_params).into()))
-}
-
-// TODO: Should I support epub here too?? Not sure, I have separate routes for epub,
-// but until I actually implement progress tracking for epub I don't think I can really
-// give a hard answer on what is best...
-/// Get the next media in a series, based on the read progress for the requesting user.
-/// Stump will return the first book in the series without progress, or return the first
-/// with partial progress. E.g. if a user has read pages 32/32 of book 3, then book 4 is
-/// next. If a user has read pages 31/32 of book 4, then book 4 is still next.
-// #[get("/series//media/next")]
-async fn get_next_in_series(
-	Path(id): Path<String>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Option<Media>>> {
- let db = ctx.get_db();
- let user_id = get_session_user(&session)?.id;
-
- let series = db
- .series()
- .find_unique(series::id::equals(id.clone()))
- .with(
- series::media::fetch(vec![])
- .with(media::read_progresses::fetch(vec![
- read_progress::user_id::equals(user_id),
- ]))
- .order_by(media::name::order(Direction::Asc)),
- )
- .exec()
- .await?;
-
- if series.is_none() {
- return Err(ApiError::NotFound(format!(
-			"Series with id {} not found.",
- id
- )));
- }
-
- let series = series.unwrap();
-
- let media = series.media();
-
- if media.is_err() {
- return Ok(Json(None));
- }
-
- let media = media.unwrap();
-
- Ok(Json(
- media
- .iter()
- .find(|m| {
- // I don't really know that this is valid... When I load in the
- // relation, this will NEVER be None. It will default to an empty
- // vector. But, for safety I guess I will leave this for now.
- if m.read_progresses.is_none() {
- return true;
- }
-
- let progresses = m.read_progresses.as_ref().unwrap();
-
- // No progress means it is up next (for this user)!
- if progresses.is_empty() {
- true
- } else {
- // Note: this should never really exceed len == 1, but :shrug:
- let progress = progresses.get(0).unwrap();
-
- progress.page < m.pages && progress.page > 0
- }
- })
- .or_else(|| media.get(0))
- .map(|data| data.to_owned().into()),
- ))
-}
-
-// async fn download_series()
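// A standalone sketch of the "next in series" selection described in the doc
// comment on `get_next_in_series` above: take the first book (in series order)
// with no progress or with partial progress, otherwise fall back to the first
// book. The structs are simplified stand-ins for illustration, not Stump's
// real models.
#[derive(Debug, Clone)]
struct BookProgress {
	page: i32,
}

#[derive(Debug, Clone)]
struct Book {
	name: String,
	pages: i32,
	progress: Option<BookProgress>,
}

fn next_in_series(books: &[Book]) -> Option<&Book> {
	books
		.iter()
		.find(|book| match &book.progress {
			// No recorded progress means the book is up next.
			None => true,
			// Partial progress (started but unfinished) also means it is next.
			Some(p) => p.page > 0 && p.page < book.pages,
		})
		// If every book is finished, fall back to the first one.
		.or_else(|| books.first())
}

fn main() {
	let books = vec![
		Book { name: "Vol. 1".into(), pages: 32, progress: Some(BookProgress { page: 32 }) },
		Book { name: "Vol. 2".into(), pages: 32, progress: Some(BookProgress { page: 31 }) },
		Book { name: "Vol. 3".into(), pages: 32, progress: None },
	];
	// Prints Vol. 2: it has partial progress (31/32), so it is next.
	println!("{:?}", next_in_series(&books).map(|b| &b.name));
}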
diff --git a/apps/server/src/routers/api/tag.rs b/apps/server/src/routers/api/tag.rs
deleted file mode 100644
index ffb6456bc..000000000
--- a/apps/server/src/routers/api/tag.rs
+++ /dev/null
@@ -1,63 +0,0 @@
-use axum::{middleware::from_extractor, routing::get, Extension, Json, Router};
-use serde::Deserialize;
-use stump_core::types::Tag;
-
-use crate::{config::state::State, errors::ApiResult, middleware::auth::Auth};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/tags", get(get_tags).post(create_tags))
-		.layer(from_extractor::<Auth>())
-}
-
-/// Get all tags for all items in the database. Tags are returned in a flat list,
-/// not grouped by the items which they belong to.
-async fn get_tags(Extension(ctx): State) -> ApiResult<Json<Vec<Tag>>> {
- let db = ctx.get_db();
-
- Ok(Json(
- db.tag()
- .find_many(vec![])
- .exec()
- .await?
- .into_iter()
- .map(|t| t.into())
- .collect(),
- ))
-}
-
-#[derive(Deserialize)]
-pub struct CreateTags {
-	pub tags: Vec<String>,
-}
-
-async fn create_tags(
-	Json(input): Json<CreateTags>,
- Extension(ctx): State,
-) -> ApiResult<Json<Vec<Tag>>> {
- let db = ctx.get_db();
-
- let tags = input.tags.to_owned();
-
- let mut created_tags = vec![];
-
- // FIXME: bulk insert not yet supported. Also transactions, as an alternative,
- // not yet supported.
- for tag in tags {
- match db.tag().create(tag, vec![]).exec().await {
- Ok(new_tag) => {
- created_tags.push(new_tag.into());
- },
- Err(e) => {
- // TODO: check if duplicate tag error, in which case I don't care and
- // will ignore the error, otherwise throw the error.
-				// Alternatively, I could upsert? This way an error is always an error,
-				// and if there's a duplicate tag it will be "updated", but really nothing
-				// will happen since the name is the same?
- println!("{}", e);
- },
- }
- }
-
- Ok(Json(created_tags))
-}
diff --git a/apps/server/src/routers/api/user.rs b/apps/server/src/routers/api/user.rs
deleted file mode 100644
index 6da6fc7ef..000000000
--- a/apps/server/src/routers/api/user.rs
+++ /dev/null
@@ -1,185 +0,0 @@
-use axum::{
- extract::Path, middleware::from_extractor, routing::get, Extension, Json, Router,
-};
-use axum_sessions::extractors::ReadableSession;
-use stump_core::{
- prisma::{user, user_preferences},
- types::{LoginOrRegisterArgs, User, UserPreferences, UserPreferencesUpdate},
-};
-
-use crate::{
- config::state::State,
- errors::{ApiError, ApiResult},
- middleware::auth::{AdminGuard, Auth},
- utils::{get_hash_cost, get_session_user},
-};
-
-pub(crate) fn mount() -> Router {
- Router::new()
- .route("/users", get(get_users).post(create_user))
-		.layer(from_extractor::<AdminGuard>())
- .nest(
- "/users/:id",
- Router::new()
- .route("/", get(get_user_by_id).put(update_user))
- .route(
- "/preferences",
- get(get_user_preferences).put(update_user_preferences),
- ),
- )
-		.layer(from_extractor::<Auth>())
-}
-
-async fn get_users(
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<Vec<User>>> {
- let user = get_session_user(&session)?;
-
- // FIXME: admin middleware
- if !user.is_admin() {
- return Err(ApiError::Forbidden(
- "You do not have permission to access this resource.".into(),
- ));
- }
-
- Ok(Json(
- ctx.db
- .user()
- .find_many(vec![])
- .exec()
- .await?
- .into_iter()
- .map(|u| u.into())
-			.collect::<Vec<User>>(),
- ))
-}
-
-async fn create_user(
- Extension(ctx): State,
-	Json(input): Json<LoginOrRegisterArgs>,
- session: ReadableSession,
-) -> ApiResult<Json<User>> {
- let db = ctx.get_db();
- let user = get_session_user(&session)?;
-
- // FIXME: admin middleware
- if !user.is_admin() {
- return Err(ApiError::Forbidden(
- "You do not have permission to access this resource.".into(),
- ));
- }
- let hashed_password = bcrypt::hash(&input.password, get_hash_cost())?;
-
- let created_user = db
- .user()
- .create(input.username.to_owned(), hashed_password, vec![])
- .exec()
- .await?;
-
- // FIXME: these next two queries will be removed once nested create statements are
- // supported on the prisma client. Until then, this ugly mess is necessary.
- // https://github.com/Brendonovich/prisma-client-rust/issues/44
- let _user_preferences = db
- .user_preferences()
- .create(vec![user_preferences::user::connect(user::id::equals(
- created_user.id.clone(),
- ))])
- .exec()
- .await?;
-
- // This *really* shouldn't fail, so I am using unwrap here. It also doesn't
- // matter too much in the long run since this query will go away once above fixme
- // is resolved.
- let user = db
- .user()
- .find_unique(user::id::equals(created_user.id))
- .with(user::user_preferences::fetch())
- .exec()
- .await?
- .unwrap();
-
- Ok(Json(user.into()))
-}
-
-async fn get_user_by_id() -> ApiResult<()> {
- Err(ApiError::NotImplemented)
-}
-
-// TODO: figure out what operations are allowed here, and by whom. E.g. can a server
-// owner update user details of another managed account after they've been created?
-// or update another user's preferences? I don't like that last one, unsure about
-// the first. In general, after creation, I think a user has sole control over their account.
-// The server owner should be able to remove them, but I don't think they should be able
-// to do anything else?
-async fn update_user() -> ApiResult<()> {
- Err(ApiError::NotImplemented)
-}
-
-// FIXME: remove this once I resolve the below 'TODO'
-async fn get_user_preferences(
-	Path(id): Path<String>,
- Extension(ctx): State,
- // session: ReadableSession,
-) -> ApiResult<Json<UserPreferences>> {
- let db = ctx.get_db();
-
- Ok(Json(
- db.user_preferences()
- .find_unique(user_preferences::id::equals(id.clone()))
- .exec()
- .await?
- .expect("Failed to fetch user preferences")
- .into(), // .map(|p| p.into()),
- // user_preferences,
- ))
-}
-
-// TODO: I load the user preferences from the session in the auth call.
-// If a session didn't exist then I load it from DB. I think for now this is OK since
-// all the preferences are client-side, so if the server is not in sync with
-// preferences updates it is not a big deal. This will have to change somehow in the
-// future potentially though, unless I just load preferences when required.
-//
-// Note: I don't even use the user id to load the preferences, as I pull it from
-// the user I got from the session. I could remove the ID requirement. I think the preferences
-// structure needs to eventually change a little anyways, I don't like how I can't query
-// by user id, it should be a unique where param but it isn't with how I structured it...
-// FIXME: remove this 'allow' once I resolve the above 'TODO'
-#[allow(unused)]
-async fn update_user_preferences(
-	Path(id): Path<String>,
-	Json(input): Json<UserPreferencesUpdate>,
- Extension(ctx): State,
- session: ReadableSession,
-) -> ApiResult<Json<UserPreferences>> {
- let db = ctx.get_db();
-
- let user = get_session_user(&session)?;
- let user_preferences = user.user_preferences.unwrap_or_default();
-
- if user_preferences.id != input.id {
- return Err(ApiError::Forbidden(
- "You cannot update another user's preferences".into(),
- ));
- }
-
- Ok(Json(
- db.user_preferences()
- .update(
- user_preferences::id::equals(user_preferences.id.clone()),
- vec![
- user_preferences::locale::set(input.locale.to_owned()),
- user_preferences::library_layout_mode::set(
- input.library_layout_mode.to_owned(),
- ),
- user_preferences::series_layout_mode::set(
- input.series_layout_mode.to_owned(),
- ),
- ],
- )
- .exec()
- .await?
- .into(),
- ))
-}
diff --git a/apps/server/src/routers/api/v1/auth.rs b/apps/server/src/routers/api/v1/auth.rs
new file mode 100644
index 000000000..b8fe1dfa6
--- /dev/null
+++ b/apps/server/src/routers/api/v1/auth.rs
@@ -0,0 +1,193 @@
+use axum::{
+ extract::State,
+ routing::{get, post},
+ Json, Router,
+};
+use axum_sessions::extractors::{ReadableSession, WritableSession};
+use stump_core::{
+ db::models::User,
+ prelude::{LoginOrRegisterArgs, UserRole},
+ prisma::{user, user_preferences},
+};
+
+use crate::{
+ config::state::AppState,
+ errors::{ApiError, ApiResult},
+ utils::{self, verify_password},
+};
+
+pub(crate) fn mount() -> Router {
+ Router::new().nest(
+ "/auth",
+ Router::new()
+ .route("/me", get(viewer))
+ .route("/login", post(login))
+ .route("/logout", post(logout))
+ .route("/register", post(register)),
+ )
+}
+
+#[utoipa::path(
+ get,
+ path = "/api/v1/auth/me",
+ tag = "auth",
+ responses(
+ (status = 200, description = "Returns the currently logged in user from the session.", body = User),
+ (status = 401, description = "No user is logged in (unauthorized).")
+ )
+)]
+/// Returns the currently logged in user from the session. If no user is logged in, returns an
+/// unauthorized error.
+async fn viewer(session: ReadableSession) -> ApiResult<Json<User>> {
+	if let Some(user) = session.get::<User>("user") {
+ Ok(Json(user))
+ } else {
+ Err(ApiError::Unauthorized)
+ }
+}
+
+#[utoipa::path(
+ post,
+ path = "/api/v1/auth/login",
+ tag = "auth",
+ request_body = LoginOrRegisterArgs,
+ responses(
+ (status = 200, description = "Authenticates the user and returns the user object.", body = User),
+ (status = 401, description = "Invalid username or password."),
+ (status = 500, description = "An internal server error occurred.")
+ )
+)]
+/// Authenticates the user and returns the user object. If the user is already logged in, returns the
+/// user object from the session.
+async fn login(
+ mut session: WritableSession,
+	State(state): State<AppState>,
+	Json(input): Json<LoginOrRegisterArgs>,
+) -> ApiResult<Json<User>> {
+	if let Some(user) = session.get::<User>("user") {
+ if input.username == user.username {
+ return Ok(Json(user));
+ }
+ }
+
+ let fetched_user = state
+ .db
+ .user()
+ .find_unique(user::username::equals(input.username.to_owned()))
+ .with(user::user_preferences::fetch())
+ .exec()
+ .await?;
+
+ if let Some(db_user) = fetched_user {
+ let matches = verify_password(&db_user.hashed_password, &input.password)?;
+ if !matches {
+ return Err(ApiError::Unauthorized);
+ }
+
+ let user: User = db_user.into();
+ session
+ .insert("user", user.clone())
+ .expect("Failed to write user to session");
+
+ return Ok(Json(user));
+ }
+
+ Err(ApiError::Unauthorized)
+}
+
+#[utoipa::path(
+ post,
+ path = "/api/v1/auth/logout",
+ tag = "auth",
+ responses(
+ (status = 200, description = "Destroys the session and logs the user out."),
+ (status = 500, description = "Failed to destroy session.")
+ )
+)]
+/// Destroys the session and logs the user out.
+async fn logout(mut session: WritableSession) -> ApiResult<()> {
+ session.destroy();
+ if !session.is_destroyed() {
+ return Err(ApiError::InternalServerError(
+ "Failed to destroy session".to_string(),
+ ));
+ }
+ Ok(())
+}
+
+#[utoipa::path(
+ post,
+ path = "/api/v1/auth/register",
+ tag = "auth",
+ request_body = LoginOrRegisterArgs,
+ responses(
+ (status = 200, description = "Successfully registered new user.", body = User),
+ (status = 403, description = "Must be server owner to register member accounts."),
+ (status = 500, description = "An internal server error occurred.")
+ )
+)]
+/// Attempts to register a new user. If no users exist in the database, the user is registered as a server owner.
+/// Otherwise, the registration is rejected by all users except the server owner.
+pub async fn register(
+ session: ReadableSession,
+	State(ctx): State<AppState>,
+	Json(input): Json<LoginOrRegisterArgs>,
+) -> ApiResult<Json<User>> {
+ let db = ctx.get_db();
+
+ let has_users = db.user().find_first(vec![]).exec().await?.is_some();
+
+ let mut user_role = UserRole::default();
+
+	let session_user = session.get::<User>("user");
+
+ // TODO: move nested if to if let once stable
+ if let Some(user) = session_user {
+ if !user.is_admin() {
+ return Err(ApiError::Forbidden(String::from(
+ "You do not have permission to access this resource.",
+ )));
+ }
+ } else if session_user.is_none() && has_users {
+ // if users exist, a valid session is required to register a new user
+ return Err(ApiError::Unauthorized);
+ } else if !has_users {
+ // if no users present, the user is automatically a server owner
+ user_role = UserRole::ServerOwner;
+ }
+
+ let hashed_password = bcrypt::hash(&input.password, utils::get_hash_cost())?;
+
+ let created_user = db
+ .user()
+ .create(
+ input.username.to_owned(),
+ hashed_password,
+ vec![user::role::set(user_role.into())],
+ )
+ .exec()
+ .await?;
+
+ // FIXME: these next two queries will be removed once nested create statements are
+ // supported on the prisma client. Until then, this ugly mess is necessary.
+ let _user_preferences = db
+ .user_preferences()
+ .create(vec![user_preferences::user::connect(user::id::equals(
+ created_user.id.clone(),
+ ))])
+ .exec()
+ .await?;
+
+ // This *really* shouldn't fail, so I am using expect here. It also doesn't
+ // matter too much in the long run since this query will go away once above fixme
+ // is resolved.
+ let user = db
+ .user()
+ .find_unique(user::id::equals(created_user.id))
+ .with(user::user_preferences::fetch())
+ .exec()
+ .await?
+ .expect("Failed to fetch user after registration.");
+
+ Ok(Json(user.into()))
+}
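// The `utils::verify_password` helper used by `login` above is not shown in this
// diff. The sketch below is an assumption about what such a helper would plausibly
// wrap, based on the `bcrypt` crate already used for hashing in `register`; it is
// not Stump's actual implementation (the real helper likely maps the error into
// ApiError instead of returning BcryptError directly).
use bcrypt::{hash, verify, BcryptError, DEFAULT_COST};

fn verify_password(hashed_password: &str, password: &str) -> Result<bool, BcryptError> {
	// bcrypt::verify re-hashes `password` using the salt and cost embedded in
	// `hashed_password` and compares the results.
	verify(password, hashed_password)
}

fn main() -> Result<(), BcryptError> {
	let hashed = hash("correct horse battery staple", DEFAULT_COST)?;
	assert!(verify_password(&hashed, "correct horse battery staple")?);
	assert!(!verify_password(&hashed, "wrong password")?);
	Ok(())
}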
diff --git a/apps/server/src/routers/api/epub.rs b/apps/server/src/routers/api/v1/epub.rs
similarity index 89%
rename from apps/server/src/routers/api/epub.rs
rename to apps/server/src/routers/api/v1/epub.rs
index bb8f9fe47..ae1d141da 100644
--- a/apps/server/src/routers/api/epub.rs
+++ b/apps/server/src/routers/api/v1/epub.rs
@@ -1,23 +1,26 @@
use std::path::PathBuf;
use axum::{
- extract::Path, middleware::from_extractor, routing::get, Extension, Json, Router,
+ extract::{Path, State},
+ middleware::from_extractor_with_state,
+ routing::get,
+ Json, Router,
};
use axum_sessions::extractors::ReadableSession;
use stump_core::{
+ db::models::Epub,
fs::epub,
prisma::{media, read_progress},
- types::Epub,
};
use crate::{
- config::state::State,
+ config::state::AppState,
errors::{ApiError, ApiResult},
middleware::auth::Auth,
utils::{get_session_user, http::BufferResponse},
};
-pub(crate) fn mount() -> Router {
+pub(crate) fn mount(app_state: AppState) -> Router {
Router::new()
.nest(
"/epub/:id",
@@ -26,13 +29,13 @@ pub(crate) fn mount() -> Router {
.route("/chapter/:chapter", get(get_epub_chapter))
.route("/:root/:resource", get(get_epub_meta)),
)
-		.layer(from_extractor::<Auth>())
+		.layer(from_extractor_with_state::<Auth, AppState>(app_state))
}
/// Get an Epub by ID. The `read_progress` relation is loaded.
async fn get_epub(
 	Path(id): Path<String>,
- Extension(ctx): State,
+	State(ctx): State<AppState>,
session: ReadableSession,
) -> ApiResult> {
let user_id = get_session_user(&session)?.id;
@@ -66,7 +69,7 @@ async fn get_epub(
/// the resource path)
async fn get_epub_chapter(
Path((id, chapter)): Path<(String, usize)>,
- Extension(ctx): State,
+	State(ctx): State<AppState>,
 ) -> ApiResult<BufferResponse> {
let book = ctx
.db
@@ -95,7 +98,7 @@ async fn get_epub_chapter(
async fn get_epub_meta(
// TODO: does this work?
Path((id, root, resource)): Path<(String, String, PathBuf)>,
- Extension(ctx): State,
+	State(ctx): State<AppState>,
 ) -> ApiResult<BufferResponse> {
let book = ctx
.db
diff --git a/apps/server/src/routers/api/filesystem.rs b/apps/server/src/routers/api/v1/filesystem.rs
similarity index 68%
rename from apps/server/src/routers/api/filesystem.rs
rename to apps/server/src/routers/api/v1/filesystem.rs
index 057eb9251..84dfcc8e3 100644
--- a/apps/server/src/routers/api/filesystem.rs
+++ b/apps/server/src/routers/api/v1/filesystem.rs
@@ -1,44 +1,54 @@
-use axum::{extract::Query, middleware::from_extractor, routing::post, Json, Router};
+use axum::{
+ extract::Query,
+ middleware::{from_extractor, from_extractor_with_state},
+ routing::post,
+ Json, Router,
+};
use axum_sessions::extractors::ReadableSession;
use std::path::Path;
-use stump_core::types::{
- DirectoryListing, DirectoryListingFile, DirectoryListingInput, Pageable,
- PagedRequestParams,
+use stump_core::prelude::{
+ DirectoryListing, DirectoryListingFile, DirectoryListingInput, PageQuery, Pageable,
};
use tracing::trace;
use crate::{
+ config::state::AppState,
errors::{ApiError, ApiResult},
middleware::auth::{AdminGuard, Auth},
- utils::get_session_user,
+ utils::get_session_admin_user,
};
-pub(crate) fn mount() -> Router {
+pub(crate) fn mount(app_state: AppState) -> Router {
Router::new()
.route("/filesystem", post(list_directory))
 		.layer(from_extractor::<AdminGuard>())
-		.layer(from_extractor::<Auth>())
+		.layer(from_extractor_with_state::<Auth, AppState>(app_state))
}
+#[utoipa::path(
+ post,
+ path = "/api/v1/filesystem",
+ tag = "filesystem",
+	request_body = Option<DirectoryListingInput>,
+ params(
+		("pagination" = Option<PageQuery>, Query, description = "Pagination parameters for the directory listing.")
+ ),
+ responses(
+ (status = 200, description = "Successfully retrieved contents of directory", body = PageableDirectoryListing),
+ (status = 400, description = "Invalid request."),
+ (status = 401, description = "No user is logged in (unauthorized)."),
+ (status = 403, description = "User does not have permission to access this resource."),
+ (status = 404, description = "Directory does not exist."),
+ )
+)]
/// List the contents of a directory on the file system at a given (optional) path. If no path
/// is provided, the file system root directory contents is returned.
pub async fn list_directory(
- input: Json