From 86f6ad08f426c04bb4ef752e8ea7389b68d44db1 Mon Sep 17 00:00:00 2001
From: Spoked <5782630+dreulavelle@users.noreply.github.com>
Date: Mon, 26 Feb 2024 22:44:25 -0500
Subject: [PATCH] Dev (#237)

Squashed changelog, with the duplicate bullet blocks from the intermediate
dev merges (#197, #199) collapsed:

* rework in progress; time for sleep, rework still wip
* fix: correct limits for orionoid
* fix: switch to comprehensions
* fix: disable plex logging for id mismatches
* feat: parser rework; the language handling still needs a rewrite and is
  disabled for now
* fix: overseerr bug when using external ids
* fix: remove plex debug line for users
* disable tvdb checks from listrr, overseerr, and plex; needs another rework
* set torrentio to disabled by default; removed parse logs; raised the
  torrentio limit slightly
* set all default settings to disabled by default for onboarding
* add an extra logging attribute for debugging large groups of data
* add dev branch builds with a :dev tag (#165)
* fix: listrr validation
* feat: frontend improvements (#158, #159, #193): global settings debug;
  formDebug is always true in development and false in production; added
  DEBUG & LOG to general settings; switched svelte-sonner to a customized
  shadcn toast component; renamed PlexDebridItem to IcebergItem and added
  the changes made to /items; fixed the wrong relative date on the status
  page; componentized forms and fields; moved schemes into forms/helpers.ts
  and improved the command menu; switched to a new theme and font (status
  page included); fixed the mobile select issue; fixed a git merge
  conflicts issue
* docs: readme improvements and minor improvements (#160, #161, #162)
* chore(deps): frontend dependency bumps (#124, #130, #132, #133, #134,
  #138, #153, #154, #155, #156)
* Parse rewrite (#128): moved the parser to its own module; added ORIGIN to
  the env vars; fixed overseerr, watchlist, and jackett validation; refined
  the parser logic; added methods for individual checks; updated the sort
  logic and default settings; began adding title support for jackett
* feat: onboarding flow and major refactoring of form-related code; more
  onboarding steps (some bugs also introduced); onboarding MVP done
* simplified the downloading logic; fixed a typo in the state machine and
  handle movie pathing correctly; removed a useless method; temporary fix
  to test
* remove uncached stream hashes from items to avoid a loop; some
  blacklisting logic could also be good
* dev startup disables pickling
* feat: Listrr support added (#136)
* Jackett rewrite (#139): reworked jackett to keyword queries with
  categories; added an is_anime attribute to items; added and then removed
  the TorBox scraper; stopped parsing audio tags, since they removed a lot
  of good hits; fixed movie scraping and made the response parsing logic
  more readable; tidied audio and networks
* avoid [None] when a content service is empty
* fix: handle bad quality manually in the parser (#145)
* deps: updated dependencies due to security updates
* fix: correct parsing of external ids (#163)
* feat: status page rewrite and improvements (#169, #170, #182)
* added verbose logging in plex to debug looping, then reverted it once the
  looping issue was found
* added boilerplate for a trakt content service (wip)
* feat: new settings (#176); more validation and logging; renamed the
  test_items module
* refactor: edited minor things in settings (#177); removed tzdata
* fix a typo in the parser
* Fix/parser/add attribute (#179): increased the rate limits on
  second_limiters; Iceberg works, all scrapers working together, symlinking
  works
* add extra attributes to the extended api endpoint
* fix: shows not being downloaded
* fix: check that the data attribute exists for orionoid
* fix: validate empty symlink paths correctly; added more validation to
  symlink paths; tweaked symlink validation
* feat: added some checks when saving settings (#196)
* feat: improved symlink validation; fixed torrentio rate limits; raised
  the jackett and orionoid rate limits
* feat: add back a second limiter at 1/5s on Torrentio to spread requests
  out instead of bursting
* fix: remove an extra debug line from orionoid; correct the settings path
  in symlink
* fix: symlink works correctly now; added a missing arg in episode parsing
* fix: remove the container_path dir check
* chore: updated zod validations (#200)
* Rd_rewrite (#198): updated realdebrid.py; fix: the library path pointed
  to the wrong path
* fix: better form validations and improvements (#202)
* fix: settings not saving in onboarding (#205)
* Use Pydantic for the settings validation data model (#204): Pydantic data
  models for settings; a Scraping-to-Scraper rename was tried and reverted;
  fixed the data model and references; added a url field to the torrentio
  model; corrected a docstring
* Refactor/validation (#211): refactored the run logic; tidied validation
  and symlink handling; users now set their own library dir; updated to the
  latest settings
* fix: trakt_data not being generated; corrected empty new_items
* fix: simplify the counter in logs
* fix: symlink pointed to the wrong source path
* fix: made this a little cleaner for future reading; a couple of tweaks
* refactor: content classes (wip)
* fix: media_items is now properly a container of your plex items
* fix: removed unnecessary and redundant logging; removed the redundant
  method for item removal (handled by item removals now)
* removed vscode settings; added library size logging to the program class
* revert: moved plex back to init after Content
* fix: minor tweaks; added logging to scrapers; refactored the is_cached
  method in rd
* chore: apply black formatting
* push latest to dev; downloading only missing library items is fixed
* small plex refactor to improve readability
* add an error log when no matching library paths are found in plex
* fix an incorrect reference; rename and reformat; final cleanup
* Improve scraping and downloading concurrency (#224): validate the refresh
  interval in the model; use the correct import and field name; scrapers now
  work end to end; settings reload; add an update interval field for plex on
  the frontend; cleanup and type-based item fetching; make requested_by
  based on the class; render the state name correctly; add the scheduler to
  requirements; fill out missing info in the plex library using the metadata
  state; check for the scheduler and executor when shutting down; fix plex
  validation; don't instantiate the container with a list; pickling was
  causing problems; improve container updating accuracy and add tests;
  remove commented code; state transition improvements; don't check for
  metadata so often; add an indexed state and rename content; use states
  instead of services to route items; add type hints; use hasattr instead;
  improvements from GB; fix a service reference; feat: add a symlink watcher
  using watchdog; improved deepcopy performance; infer parent_id when adding
  a season/episode like a direct parent link
* only check for streams on items in the to-be-scraped state
* fix the conditional that caused looping of the symlinked state
* State machine testing (#236): improve state handling and warm boot; finish
  a method rename; improve warm boot; reorganize code to make it easier to
  run coverage; also update items based on the merged index state; separate
  the dev flag into cache and profile; add a failure condition to symlink;
  add collection to get the root item_id; save state transition debug data;
  add line numbers to executed_lines

Co-authored-by: Spoked
Co-authored-by: Ayush Sehrawat
Co-authored-by: KingPin
Co-authored-by: Dreu LaVelle
Co-authored-by: Ayush Sehrawat <69469790+AyushSehrawat@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pukabyte <120460627+Pukabyte@users.noreply.github.com>
Co-authored-by: Hank Bond <3474285+omnunum@users.noreply.github.com>
---
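The diff below replaces the threaded, ServiceManager-based content classes
with plain service objects whose run() methods are generators that yield
MediaItem objects for the program to route by state. A minimal sketch of
that contract, for orientation only; ExampleContent and the hard-coded IMDb
ids are illustrative stand-ins, not part of this patch:

    # Sketch of the generator-based content-service contract.
    # ExampleContent and the two ids below are placeholders.
    from typing import Generator

    from program.media.item import MediaItem


    class ExampleContent:
        """validate() gates initialization; run() yields newly requested items."""

        def __init__(self):
            self.key = "example"
            self.initialized = self.validate()

        def validate(self) -> bool:
            # Real services check their settings and ping their API here
            # (see Listrr.validate() in the diff).
            return True

        def run(self) -> Generator[MediaItem, None, None]:
            # Real services fetch ids from an external API.
            for imdb_id in ("tt0111161", "tt0068646"):
                yield MediaItem({"imdb_id": imdb_id, "requested_by": self.__class__})

A service whose validate() fails is left uninitialized and skipped by the
program loop, which is why every __init__ in the diff returns early when
validation fails.
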
backend/program/scrapers/jackett.py | 86 ++-- backend/program/scrapers/orionoid.py | 123 +++-- backend/program/scrapers/torrentio.py | 98 ++-- backend/program/settings/__init__.py | 0 backend/program/settings/manager.py | 59 +++ backend/program/settings/models.py | 149 ++++++ backend/program/state_transision.py | 99 ++++ backend/program/symlink.py | 233 +++++---- backend/program/types.py | 21 + backend/program/updaters/__init__.py | 0 backend/program/updaters/plex.py | 50 ++ backend/program/updaters/trakt.py | 194 -------- backend/tests/test_container.py | 62 +++ backend/tests/test_items.py | 8 +- backend/tests/test_parser.py | 38 +- backend/utils/__init__.py | 7 + backend/utils/default_settings.json | 66 --- backend/utils/logger.py | 60 +-- backend/utils/observable.py | 10 - backend/utils/parser.py | 455 +++++++++--------- backend/utils/request.py | 36 +- backend/utils/service_manager.py | 48 -- backend/utils/settings.py | 87 ---- backend/utils/utils.py | 4 - frontend/src/lib/forms/general-form.svelte | 12 +- frontend/src/lib/forms/helpers.ts | 83 +++- .../src/lib/forms/media-server-form.svelte | 1 + frontend/src/lib/helpers.ts | 24 - .../src/routes/settings/about/+page.svelte | 12 +- .../routes/settings/content/+page.server.ts | 34 +- .../routes/settings/general/+page.server.ts | 33 +- .../settings/mediaserver/+page.server.ts | 34 +- .../routes/settings/scrapers/+page.server.ts | 34 +- makefile | 12 +- requirements.txt | 4 +- 61 files changed, 2715 insertions(+), 2128 deletions(-) create mode 100644 VERSION create mode 100644 backend/program/indexers/__init__.py create mode 100644 backend/program/indexers/trakt.py create mode 100644 backend/program/libaries/__init__.py create mode 100644 backend/program/libaries/plex.py create mode 100644 backend/program/libaries/symlink.py create mode 100644 backend/program/media/__init__.py delete mode 100644 backend/program/plex.py create mode 100644 backend/program/program.py create mode 100644 backend/program/settings/__init__.py create mode 100644 backend/program/settings/manager.py create mode 100644 backend/program/settings/models.py create mode 100644 backend/program/state_transision.py create mode 100644 backend/program/types.py create mode 100644 backend/program/updaters/__init__.py create mode 100644 backend/program/updaters/plex.py delete mode 100644 backend/program/updaters/trakt.py create mode 100644 backend/tests/test_container.py delete mode 100644 backend/utils/default_settings.json delete mode 100644 backend/utils/observable.py delete mode 100644 backend/utils/service_manager.py delete mode 100644 backend/utils/settings.py diff --git a/Dockerfile b/Dockerfile index 48acfa88..acf66baf 100644 --- a/Dockerfile +++ b/Dockerfile @@ -30,6 +30,7 @@ COPY --from=frontend --chown=node:node /app/node_modules /iceberg/frontend/node_ COPY --from=frontend --chown=node:node /app/package.json /iceberg/frontend/package.json # Backend +COPY VERSION /iceberg/VERSION COPY backend/ /iceberg/backend RUN python3 -m venv /venv COPY requirements.txt /iceberg/requirements.txt diff --git a/VERSION b/VERSION new file mode 100644 index 00000000..c0a1ac19 --- /dev/null +++ b/VERSION @@ -0,0 +1 @@ +0.4.6 \ No newline at end of file diff --git a/backend/controllers/default.py b/backend/controllers/default.py index 34e86682..f9d1604a 100644 --- a/backend/controllers/default.py +++ b/backend/controllers/default.py @@ -1,6 +1,6 @@ from fastapi import APIRouter, Request import requests -from utils.settings import settings_manager +from program.settings.manager import 
settings_manager router = APIRouter( @@ -13,6 +13,7 @@ async def root(): return { "success": True, "message": "Iceburg is running!", + "version": settings_manager.settings.version, } @@ -26,7 +27,7 @@ async def health(request: Request): @router.get("/user") async def get_rd_user(): - api_key = settings_manager.get("real_debrid.api_key") + api_key = settings_manager.settings.real_debrid.api_key headers = {"Authorization": f"Bearer {api_key}"} response = requests.get( "https://api.real-debrid.com/rest/1.0/user", headers=headers @@ -37,19 +38,11 @@ async def get_rd_user(): @router.get("/services") async def get_services(request: Request): data = {} - if hasattr(request.app.program, "core_manager"): - for service in request.app.program.core_manager.services: + if hasattr(request.app.program, "services"): + for service in request.app.program.services.values(): data[service.key] = service.initialized - if getattr(service, "sm", False): - for sub_service in service.sm.services: - data[sub_service.key] = sub_service.initialized - if hasattr(request.app.program, "extras_manager"): - for service in request.app.program.extras_manager.services: - data[service.key] = service.initialized - if getattr(service, "sm", False): - for sub_service in service.sm.services: - data[sub_service.key] = sub_service.initialized - return { - "success": True, - "data": data - } + if not hasattr(service, "services"): + continue + for sub_service in service.services.values(): + data[sub_service.key] = sub_service.initialized + return {"success": True, "data": data} diff --git a/backend/controllers/items.py b/backend/controllers/items.py index 44bc2cd8..e3096a19 100644 --- a/backend/controllers/items.py +++ b/backend/controllers/items.py @@ -1,5 +1,5 @@ from fastapi import APIRouter, HTTPException, Request -from program.media.state import MediaItemStates +from program.media.state import States router = APIRouter( prefix="/items", @@ -12,7 +12,7 @@ async def get_states(request: Request): return { "success": True, - "states": [state for state in MediaItemStates], + "states": [state for state in States], } @@ -20,7 +20,7 @@ async def get_states(request: Request): async def get_items(request: Request): return { "success": True, - "items": [item.to_dict() for item in request.app.program.media_items.items], + "items": [item.to_dict() for item in request.app.program.media_items], } @@ -38,6 +38,7 @@ async def get_extended_item_info(request: Request, item_id: str): @router.delete("/remove/{item}") async def remove_item(request: Request, item: str): request.app.program.media_items.remove(item) + request.app.program.content.overseerr.delete_request(item) return { "success": True, "message": f"Removed {item}", @@ -49,7 +50,4 @@ async def get_imdb_info(request: Request, imdb_id: str): item = request.app.program.media_items.get_item_by_imdb_id(imdb_id) if item is None: raise HTTPException(status_code=404, detail="Item not found") - return { - "success": True, - "item": item.to_extended_dict() - } \ No newline at end of file + return {"success": True, "item": item.to_extended_dict()} diff --git a/backend/controllers/settings.py b/backend/controllers/settings.py index 7c1a7327..4bac74b6 100644 --- a/backend/controllers/settings.py +++ b/backend/controllers/settings.py @@ -1,6 +1,6 @@ from copy import copy -from fastapi import APIRouter -from utils.settings import settings_manager +from fastapi import APIRouter, HTTPException +from program.settings.manager import settings_manager from pydantic import BaseModel from typing import Any, List 
@@ -39,14 +39,25 @@ async def save_settings(): async def get_all_settings(): return { "success": True, - "data": copy(settings_manager.get_all()), + "data": copy(settings_manager.settings), } -@router.get("/get/{keys}") -async def get_settings(keys: str): - keys = keys.split(",") - data = {key: settings_manager.get(key) for key in keys} +@router.get("/get/{paths}") +async def get_settings(paths: str): + current_settings = settings_manager.settings.dict() + data = {} + for path in paths.split(","): + keys = path.split(".") + current_obj = current_settings + + for k in keys: + if k not in current_obj: + return None + current_obj = current_obj[k] + + data[path] = current_obj + return { "success": True, "data": data, @@ -55,8 +66,32 @@ async def get_settings(keys: str): @router.post("/set") async def set_settings(settings: List[SetSettings]): - settings_manager.set(settings) - return { - "success": True, - "message": "Settings saved!", - } + current_settings = settings_manager.settings.dict() + + for setting in settings: + keys = setting.key.split(".") + current_obj = current_settings + + # Navigate to the last key's parent object, similar to the getter. + for k in keys[:-1]: + if k not in current_obj: + # If a key in the path does not exist, raise an exception or optionally create a new dict. + raise HTTPException( + status_code=400, + detail=f"Path '{'.'.join(keys[:-1])}' does not exist.", + ) + current_obj = current_obj[k] + + # Set the value at the final key. + if keys[-1] in current_obj: + current_obj[keys[-1]] = setting.value + else: + # If the final key does not exist, raise an exception. + raise HTTPException( + status_code=400, + detail=f"Key '{keys[-1]}' does not exist in path '{'.'.join(keys[:-1])}'.", + ) + + settings_manager.load(settings_dict=current_settings) + + return {"success": True, "message": "Settings updated successfully."} diff --git a/backend/main.py b/backend/main.py index 3103d10c..1c7f04dd 100644 --- a/backend/main.py +++ b/backend/main.py @@ -1,20 +1,32 @@ +import contextlib +import sys +import threading +import time +import argparse +import traceback + +import uvicorn from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from program import Program from controllers.settings import router as settings_router from controllers.items import router as items_router from controllers.default import router as default_router -import contextlib -import sys -import threading -import time -import uvicorn -import argparse + +from utils.logger import logger parser = argparse.ArgumentParser() -parser.add_argument('--dev', action='store_true', help='Enable development mode') +parser.add_argument( + "--ignore_cache", action="store_true", + help="Ignore the cached metadata, create new data from scratch." 
+) +parser.add_argument( + "--profile_state_transitions", action="store_true", + help="Use a profiling process to determine what paths the state machine took" +) args = parser.parse_args() + class Server(uvicorn.Server): def install_signal_handlers(self): pass @@ -27,11 +39,15 @@ def run_in_thread(self): while not self.started: time.sleep(1e-3) yield + except Exception as e: + logger.error(traceback.format_exc()) + raise e finally: app.program.stop() self.should_exit = True sys.exit(0) + app = FastAPI() app.program = Program(args) @@ -55,4 +71,4 @@ def run_in_thread(self): app.program.start() app.program.run() except KeyboardInterrupt: - pass + pass \ No newline at end of file diff --git a/backend/program/__init__.py b/backend/program/__init__.py index 0c906bc9..6629f229 100644 --- a/backend/program/__init__.py +++ b/backend/program/__init__.py @@ -1,66 +1,4 @@ """Program main module""" -import os -import threading -import time -import concurrent.futures -from program.scrapers import Scraping -from program.realdebrid import Debrid -from program.symlink import Symlinker -from program.media.container import MediaItemContainer -from utils.logger import logger, get_data_path -from program.plex import Plex -from program.content import Content -from utils.utils import Pickly -from utils.settings import settings_manager as settings -from utils.service_manager import ServiceManager +from program.program import Program, Event -class Program(threading.Thread): - """Program class""" - - def __init__(self, args): - super().__init__(name="Iceberg") - self.running = False - self.startup_args = args - - def start(self): - logger.info("Iceberg v%s starting!", settings.get("version")) - self.initialized = False - self.media_items = MediaItemContainer(items=[]) - self.data_path = get_data_path() - if not os.path.exists(self.data_path): - os.mkdir(self.data_path) - if not self.startup_args.dev: - self.pickly = Pickly(self.media_items, self.data_path) - self.pickly.start() - self.core_manager = ServiceManager(self.media_items, True, Content, Plex, Scraping, Debrid, Symlinker) - if self.validate(): - logger.info("Iceberg started!") - else: - logger.info("----------------------------------------------") - logger.info("Iceberg is waiting for configuration to start!") - logger.info("----------------------------------------------") - super().start() - self.running = True - self.initialized = True - - def run(self): - while self.running: - if self.validate(): - with concurrent.futures.ThreadPoolExecutor( - max_workers=10, thread_name_prefix="Worker" - ) as executor: - for item in self.media_items: - executor.submit(item.perform_action, self.core_manager.services) - time.sleep(1) - - def validate(self): - return all(service.initialized for service in self.core_manager.services) - - def stop(self): - for service in self.core_manager.services: - if getattr(service, "running", False): - service.stop() - self.pickly.stop() - settings.save() - self.running = False \ No newline at end of file diff --git a/backend/program/content/__init__.py b/backend/program/content/__init__.py index da9c49e5..e1a53fd3 100644 --- a/backend/program/content/__init__.py +++ b/backend/program/content/__init__.py @@ -1,42 +1,4 @@ -import threading -import time -from utils.logger import logger -from utils.service_manager import ServiceManager from .mdblist import Mdblist from .overseerr import Overseerr from .plex_watchlist import PlexWatchlist -from .listrr import Listrr - - -class Content(threading.Thread): - def __init__(self, media_items): 
- super().__init__(name="Content") - self.initialized = False - self.key = "content" - self.running = False - self.sm = ServiceManager(media_items, False, Overseerr, PlexWatchlist, Listrr, Mdblist) - if not self.validate(): - logger.error("You have no content services enabled, please enable at least one!") - return - self._get_content() - self.initialized = True - - def validate(self): - return any(service.initialized for service in self.sm.services) - - def run(self) -> None: - while self.running: - self._get_content() - time.sleep(1) - - def _get_content(self) -> None: - for service in self.sm.services: - if service.initialized: - service.run() - - def start(self) -> None: - self.running = True - super().start() - - def stop(self) -> None: - self.running = False +from .listrr import Listrr \ No newline at end of file diff --git a/backend/program/content/listrr.py b/backend/program/content/listrr.py index 6104c128..be7bd494 100644 --- a/backend/program/content/listrr.py +++ b/backend/program/content/listrr.py @@ -1,40 +1,28 @@ -"""Mdblist content module""" -from time import time -from typing import Optional -from pydantic import BaseModel -from utils.settings import settings_manager +"""Listrr content module""" +from typing import Generator + from utils.logger import logger from utils.request import get, ping from requests.exceptions import HTTPError -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt, get_imdbid_from_tmdb, get_imdbid_from_tvdb - +from program.settings.manager import settings_manager +from program.media.item import MediaItem +from program.indexers.trakt import get_imdbid_from_tmdb -class ListrrConfig(BaseModel): - enabled: bool - movie_lists: Optional[list] - show_lists: Optional[list] - api_key: Optional[str] - update_interval: int # in seconds - -class Listrr: +class Listrr(): """Content class for Listrr""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "listrr" self.url = "https://listrr.pro/api" - self.settings = ListrrConfig(**settings_manager.get(f"content.{self.key}")) + self.settings = settings_manager.settings.content.listrr self.headers = {"X-Api-Key": self.settings.api_key} - self.initialized = self.validate_settings() + self.initialized = self.validate() if not self.initialized: return - self.media_items = media_items - self.updater = Trakt() - self.next_run_time = 0 logger.info("Listrr initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: """Validate Listrr settings.""" if not self.settings.enabled: logger.debug("Listrr is set to disabled.") @@ -43,8 +31,10 @@ def validate_settings(self) -> bool: logger.error("Listrr api key is not set or invalid.") return False valid_list_found = False - for list_name, content_list in [('movie_lists', self.settings.movie_lists), - ('show_lists', self.settings.show_lists)]: + for list_name, content_list in [ + ("movie_lists", self.settings.movie_lists), + ("show_lists", self.settings.show_lists), + ]: if content_list is None or not any(content_list): continue for item in content_list: @@ -57,67 +47,55 @@ def validate_settings(self) -> bool: try: response = ping("https://listrr.pro/", additional_headers=self.headers) if not response.ok: - logger.error(f"Listrr ping failed - Status Code: {response.status_code}, Reason: {response.reason}") + logger.error( + "Listrr ping failed - Status Code: %s, Reason: %s", response.status_code, response.reason + ) return response.ok except Exception as e: - 
logger.error(f"Listrr ping exception: {e}") + logger.error("Listrr ping exception: %s", e) return False - def run(self): - """Fetch media from Listrr and add them to media_items attribute.""" - if time() < self.next_run_time: - return - self.next_run_time = time() + self.settings.update_interval + def run(self) -> Generator[MediaItem, None, None]: + """Fetch new media from `Listrr`""" + self.not_found_ids.clear() movie_items = self._get_items_from_Listrr("Movies", self.settings.movie_lists) show_items = self._get_items_from_Listrr("Shows", self.settings.show_lists) - items = list(set(movie_items + show_items)) - new_items = [item for item in items if item not in self.media_items] - container = self.updater.create_items(new_items) - for item in container: - item.set("requested_by", "Listrr") - added_items = self.media_items.extend(container) - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.info("Added %s", item.log_string) - elif length > 5: - logger.info("Added %s items", length) + for imdb_id in movie_items + show_items: + yield MediaItem({'imdb_id': imdb_id, 'requested_by': self.__class__}) + return - def _get_items_from_Listrr(self, content_type, content_lists): + def _get_items_from_Listrr(self, content_type, content_lists) -> list[MediaItem]: """Fetch unique IMDb IDs from Listrr for a given type and list of content.""" - unique_ids = set() + unique_ids: set[str] = set() + if not content_lists: + return list(unique_ids) + for list_id in content_lists: - page = 1 - total_pages = 1 + if not list_id or len(list_id) != 24: + continue + + page, total_pages = 1, 1 while page <= total_pages: - if list_id == "": - break try: - response = get( - self.url + f"/List/{content_type}/{list_id}/ReleaseDate/Descending/{page}", - additional_headers=self.headers, - ) - if response.is_ok: - total_pages = response.data.pages - for item in response.data.items: - imdb_id = item.imDbId + url = f"{self.url}/List/{content_type}/{list_id}/ReleaseDate/Descending/{page}" + response = get(url, additional_headers=self.headers).response + data = response.json() + total_pages = data.get("pages", 1) + for item in data.get("items", []): + imdb_id = item.get("imDbId") + if imdb_id: + unique_ids.add(imdb_id) + elif content_type == "Movies" and item.get("tmDbId"): + imdb_id = get_imdbid_from_tmdb(item["tmDbId"]) if imdb_id: unique_ids.add(imdb_id) - # elif content_type == "Shows" and item.tvDbId: - # imdb_id = get_imdbid_from_tvdb(item.tvDbId) - # if imdb_id: - # unique_ids.add(imdb_id) - if not imdb_id and content_type == "Movies" and item.tmDbId: - imdb_id = get_imdbid_from_tmdb(item.tmDbId) - if imdb_id: - unique_ids.add(imdb_id) - else: - break + else: + self.not_found_ids.append(item["id"]) except HTTPError as e: if e.response.status_code in [400, 404, 429, 500]: break except Exception as e: - logger.error(f"An error occurred: {e}") + logger.error("An error occurred: %s", e) break page += 1 return list(unique_ids) diff --git a/backend/program/content/mdblist.py b/backend/program/content/mdblist.py index d6896429..87ae3986 100644 --- a/backend/program/content/mdblist.py +++ b/backend/program/content/mdblist.py @@ -1,34 +1,27 @@ """Mdblist content module""" -from typing import Optional -from pydantic import BaseModel -from utils.settings import settings_manager +from typing import Generator + from utils.logger import logger +from program.settings.manager import settings_manager +from program.media.item import MediaItem from utils.request import RateLimitExceeded, 
RateLimiter, get, ping -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt -class MdblistConfig(BaseModel): - enabled: bool - api_key: Optional[str] - lists: Optional[list] -class Mdblist: +class Mdblist(): """Content class for mdblist""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "mdblist" - self.settings = MdblistConfig(**settings_manager.get(f"content.{self.key}")) - self.initialized = self.validate_settings() + self.settings = settings_manager.settings.content.mdblist + self.initialized = self.validate() if not self.initialized: return - self.media_items = media_items - self.updater = Trakt() self.requests_per_2_minutes = self._calculate_request_time() self.rate_limiter = RateLimiter(self.requests_per_2_minutes, 120, True) logger.info("mdblist initialized") - def validate_settings(self): + def validate(self): if not self.settings.enabled: logger.debug("Mdblist is set to disabled.") return False @@ -44,33 +37,20 @@ def validate_settings(self): return False return True - def run(self): + def run(self) -> Generator[MediaItem, None, None]: """Fetch media from mdblist and add them to media_items attribute if they are not already there""" + try: with self.rate_limiter: - items = [] for list_id in self.settings.lists: - if list_id: - items += self._get_items_from_list( - list_id, self.settings.api_key - ) - new_items = [item for item in items if item not in self.media_items] or [] - container = self.updater.create_items(new_items) - for item in container: - item.set("requested_by", "Mdblist") - added_items = self.media_items.extend(container) - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.info("Added %s", item.log_string) - elif length > 5: - logger.info("Added %s items", length) + if not list_id: + continue + for item in list_items(list_id, self.settings.api_key): + yield MediaItem({'imdb_id': item.imdb_id, 'requested_by': self.__class__}) except RateLimitExceeded: pass - - def _get_items_from_list(self, list_id: str, api_key: str) -> MediaItemContainer: - return [item.imdb_id for item in list_items(list_id, api_key)] + return def _calculate_request_time(self): limits = my_limits(self.settings.api_key).limits diff --git a/backend/program/content/overseerr.py b/backend/program/content/overseerr.py index 4ba3924e..ff84c907 100644 --- a/backend/program/content/overseerr.py +++ b/backend/program/content/overseerr.py @@ -1,35 +1,25 @@ """Mdblist content module""" -from typing import Optional -from pydantic import BaseModel -from utils.settings import settings_manager from utils.logger import logger -from utils.request import get, ping -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt, get_imdbid_from_tmdb, get_imdbid_from_tvdb +from utils.request import delete, get, ping +from program.settings.manager import settings_manager +from program.media.item import MediaItem +from program.indexers.trakt import get_imdbid_from_tmdb -class OverseerrConfig(BaseModel): - enabled: bool - url: Optional[str] - api_key: Optional[str] - - -class Overseerr: +class Overseerr(): """Content class for overseerr""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "overseerr" - self.settings = OverseerrConfig(**settings_manager.get(f"content.{self.key}")) + self.settings = settings_manager.settings.content.overseerr self.headers = {"X-Api-Key": self.settings.api_key} - 
self.initialized = self.validate_settings() + self.initialized = self.validate() if not self.initialized: return - self.media_items = media_items - self.updater = Trakt() self.not_found_ids = [] logger.info("Overseerr initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: if not self.settings.enabled: logger.debug("Overseerr is set to disabled.") return False @@ -53,74 +43,76 @@ def validate_settings(self) -> bool: return False def run(self): - """Fetch media from overseerr and add them to media_items attribute - if they are not already there""" - items = self._get_items_from_overseerr(10000) - new_items = [item for item in items if item not in self.media_items] or [] - container = self.updater.create_items(new_items) - for item in container: - item.set("requested_by", "Overseerr") - added_items = self.media_items.extend(container) - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.info("Added %s", item.log_string) - elif length > 5: - logger.info("Added %s items", length) - - def _get_items_from_overseerr(self, amount: int): - """Fetch media from overseerr""" + """Fetch new media from `Overseerr`""" + self.not_found_ids.clear() response = get( - self.settings.url + f"/api/v1/request?take={amount}", + self.settings.url + f"/api/v1/request?take={10000}", additional_headers=self.headers, ) - ids = [] - if response.is_ok: - for item in response.data.results: - if not item.media.imdbId: - imdb_id = self.get_imdb_id(item.media) - if imdb_id: - ids.append(imdb_id) - else: - ids.append(item.media.imdbId) - return ids + if not response.is_ok: + return + for item in response.data.results: + if not item.media.imdbId: + imdb_id = self.get_imdb_id(item.media) + else: + imdb_id = item.media.imdbId + yield MediaItem({'imdb_id': imdb_id, 'requested_by': self.__class__}) + + - def get_imdb_id(self, overseerr_item): + def get_imdb_id(self, data) -> str: """Get imdbId for item from overseerr""" - if overseerr_item.mediaType == "show": - external_id = overseerr_item.tvdbId - overseerr_item.mediaType = "tv" + if data.mediaType == "show": + external_id = data.tvdbId + data.mediaType = "tv" id_extension = "tvdb-" else: - external_id = overseerr_item.tmdbId + external_id = data.tmdbId id_extension = "tmdb-" if f"{id_extension}{external_id}" in self.not_found_ids: return None response = get( - self.settings.url + f"/api/v1/{overseerr_item.mediaType}/{external_id}?language=en", + self.settings.url + + f"/api/v1/{data.mediaType}/{external_id}?language=en", additional_headers=self.headers, ) if not response.is_ok or not hasattr(response.data, "externalIds"): - logger.debug(f"Failed to fetch or no externalIds for {id_extension}{external_id}") + logger.debug( + f"Failed to fetch or no externalIds for {id_extension}{external_id}" + ) return None - title = getattr(response.data, "title", None) or getattr(response.data, "originalName", None) - imdb_id = getattr(response.data.externalIds, 'imdbId', None) + title = getattr(response.data, "title", None) or getattr( + response.data, "originalName", None + ) + imdb_id = getattr(response.data.externalIds, "imdbId", None) if imdb_id: return imdb_id # Try alternate IDs if IMDb ID is not available # alternate_ids = [('tvdbId', get_imdbid_from_tvdb), ('tmdbId', get_imdbid_from_tmdb)] - alternate_ids = [('tmdbId', get_imdbid_from_tmdb)] + alternate_ids = [("tmdbId", get_imdbid_from_tmdb)] for id_attr, fetcher in alternate_ids: external_id_value = getattr(response.data.externalIds, id_attr, None) if 
external_id_value: new_imdb_id = fetcher(external_id_value) if new_imdb_id: - logger.debug(f"Found imdbId for {title} from {id_attr}: {external_id_value}") + logger.debug( + f"Found imdbId for {title} from {id_attr}: {external_id_value}" + ) return new_imdb_id self.not_found_ids.append(f"{id_extension}{external_id}") - logger.debug(f"Could not get imdbId for {title}, or match with external id") + logger.debug("Could not get imdbId for %s, or match with external id", title) return None + + def delete_request(self, request_id: int) -> bool: + """Delete request from `Overseerr`""" + response = delete( + self.settings.url + f"/api/v1/request/{request_id}", + additional_headers=self.headers, + ) + if response.is_ok: + logger.info("Deleted request %c from overseerr", request_id) + return {"success": True, "message": f"Deleted request {request_id}"} \ No newline at end of file diff --git a/backend/program/content/plex_watchlist.py b/backend/program/content/plex_watchlist.py index 81134f4b..f588e2cd 100644 --- a/backend/program/content/plex_watchlist.py +++ b/backend/program/content/plex_watchlist.py @@ -1,153 +1,113 @@ """Plex Watchlist Module""" -from typing import Optional -from pydantic import BaseModel -from requests import ConnectTimeout, HTTPError +from requests import HTTPError +from typing import Generator + from utils.request import get, ping from utils.logger import logger -from utils.settings import settings_manager -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt -import json - - -class PlexWatchlistConfig(BaseModel): - enabled: bool - rss: Optional[str] +from program.settings.manager import settings_manager +from program.media.item import MediaItem -class PlexWatchlist: +class PlexWatchlist(): """Class for managing Plex Watchlists""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "plex_watchlist" self.rss_enabled = False - self.settings = PlexWatchlistConfig(**settings_manager.get(f"content.{self.key}")) - self.initialized = self.validate_settings() + self.settings = settings_manager.settings.content.plex_watchlist + self.initialized = self.validate() if not self.initialized: return - self.token = settings_manager.get("plex.token") - self.media_items = media_items - self.prev_count = 0 - self.updater = Trakt() + self.token = settings_manager.settings.plex.token self.not_found_ids = [] + logger.info("Plex Watchlist initialized!") - def validate_settings(self): + def validate(self): if not self.settings.enabled: logger.debug("Plex Watchlists is set to disabled.") return False if self.settings.rss: - logger.info("Found Plex RSS URL. Validating...") try: response = ping(self.settings.rss) - if response.ok: - self.rss_enabled = True - logger.info("Plex RSS URL is valid.") - return True - else: - logger.info(f"Plex RSS URL is not valid. Falling back to watching user Watchlist.") - return True + response.raise_for_status() + self.rss_enabled = True + return True except HTTPError as e: - if e.response.status_code in [404]: - logger.warn("Plex RSS URL is Not Found. Falling back to watching user Watchlist.") - return True - if e.response.status_code >= 400 and e.response.status_code <= 499: - logger.warn(f"Plex RSS URL is not reachable. Falling back to watching user Watchlist.") - return True - if e.response.status_code >= 500: - logger.error(f"Plex is having issues validating RSS feed. 
diff --git a/backend/program/content/plex_watchlist.py b/backend/program/content/plex_watchlist.py index 81134f4b..f588e2cd 100644 --- a/backend/program/content/plex_watchlist.py +++ b/backend/program/content/plex_watchlist.py @@ -1,153 +1,113 @@ """Plex Watchlist Module""" -from typing import Optional -from pydantic import BaseModel -from requests import ConnectTimeout, HTTPError +from requests import HTTPError +from typing import Generator + from utils.request import get, ping from utils.logger import logger -from utils.settings import settings_manager -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt -import json - - -class PlexWatchlistConfig(BaseModel): - enabled: bool - rss: Optional[str] +from program.settings.manager import settings_manager +from program.media.item import MediaItem -class PlexWatchlist: +class PlexWatchlist(): """Class for managing Plex Watchlists""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "plex_watchlist" self.rss_enabled = False - self.settings = PlexWatchlistConfig(**settings_manager.get(f"content.{self.key}")) - self.initialized = self.validate_settings() + self.settings = settings_manager.settings.content.plex_watchlist + self.initialized = self.validate() if not self.initialized: return - self.token = settings_manager.get("plex.token") - self.media_items = media_items - self.prev_count = 0 - self.updater = Trakt() + self.token = settings_manager.settings.plex.token self.not_found_ids = [] + logger.info("Plex Watchlist initialized!") - def validate_settings(self): + def validate(self): if not self.settings.enabled: logger.debug("Plex Watchlist is set to disabled.") return False if self.settings.rss: - logger.info("Found Plex RSS URL. Validating...") try: response = ping(self.settings.rss) - if response.ok: - self.rss_enabled = True - logger.info("Plex RSS URL is valid.") - return True - else: - logger.info(f"Plex RSS URL is not valid. Falling back to watching user Watchlist.") - return True + response.raise_for_status() + self.rss_enabled = True + return True except HTTPError as e: - if e.response.status_code in [404]: - logger.warn("Plex RSS URL is Not Found. Falling back to watching user Watchlist.") - return True - if e.response.status_code >= 400 and e.response.status_code <= 499: - logger.warn(f"Plex RSS URL is not reachable. Falling back to watching user Watchlist.") - return True - if e.response.status_code >= 500: - logger.error(f"Plex is having issues validating RSS feed. Falling back to watching user Watchlist.") - return True + if e.response.status_code == 404: + logger.warn( + "Plex RSS URL is Not Found. Please check your RSS URL in settings." + ) + else: + logger.warn( + "Plex RSS URL is not reachable (HTTP status code: %s). Falling back to using user Watchlist.", e.response.status_code + ) + return True except Exception as e: logger.exception("Failed to validate Plex RSS URL: %s", e) return True return True def run(self): - """Fetch media from Plex Watchlist and add them to media_items attribute - if they are not already there""" - items = self._create_unique_list() - new_items = [item for item in items if item not in self.media_items] or [] - if len(new_items) == 0: - logger.debug("No new items found in Plex Watchlist") - return - for check in new_items: - if check is None: - new_items.remove(check) - self.not_found_ids.append(check) - container = self.updater.create_items(new_items) - for item in container: - item.set("requested_by", "Plex Watchlist") - previous_count = len(self.media_items) - added_items = self.media_items.extend(container) - added_items_count = len(self.media_items) - previous_count - if ( - added_items_count != self.prev_count - ): - self.prev_count = added_items_count - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.info("Added %s", item.log_string) - elif length > 5: - logger.info("Added %s items", length) - if len(self.not_found_ids) >= 1 and len(self.not_found_ids) <= 5: - for item in self.not_found_ids: - logger.info("Failed to add %s", item) - - def _create_unique_list(self): - """Create a unique list of items from Plex RSS and Watchlist""" - watchlist_items = self._get_items_from_watchlist() + """Fetch new media from `Plex Watchlist`""" + self.not_found_ids.clear() if not self.rss_enabled: - return watchlist_items - rss_items = self._get_items_from_rss() - return list(set(watchlist_items).union(rss_items)) + yield from ( + MediaItem({'imdb_id': imdb_id, 'requested_by': self.__class__}) + for imdb_id in self._get_items_from_watchlist() + ) + else: + watchlist_items = set(self._get_items_from_watchlist()) + rss_items = set(self._get_items_from_rss()) + yield from ( + MediaItem({'imdb_id': imdb_id, 'requested_by': self.__class__}) + for imdb_id in watchlist_items.union(rss_items) + ) + - def _get_items_from_rss(self) -> list: - """Fetch media from Plex RSS Feed""" + def _get_items_from_rss(self) -> Generator[str, None, None]: + """Fetch media from Plex RSS Feed.""" try: - response_obj = get(self.settings.rss, timeout=60) - data = json.loads(response_obj.response.content) - items = data.get("items", []) - ids = [ + response = get(self.settings.rss, timeout=60) + if not response.is_ok: + logger.error( + "Failed to fetch Plex RSS feed: HTTP %s", response.status_code + ) + return + yield from ( guid.split("//")[-1] - for item in items - for guid in item.get("guids", []) - if "imdb://" in guid + for item in response.data.items + for guid in item.guids + if guid.startswith("imdb://") ) - ] - return ids - except ConnectTimeout: - logger.error("Connection Timeout: Failed to fetch Plex RSS feed") - return [] - except Exception: - logger.exception("Failed to fetch Plex RSS feed") - return [] + except Exception as e: + logger.error( + "An unexpected error occurred while fetching Plex RSS feed: %s", e + ) + + return
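For reference, the guid entries in the Plex RSS payload are scheme-prefixed strings such as imdb://tt0137523; the comprehension above keeps only the IMDb ones and strips the scheme. A standalone check of that extraction with made-up sample data (plain dicts here, where the real response exposes attributes):

    sample_items = [
        {"guids": ["imdb://tt0137523", "tmdb://550"]},
        {"guids": ["tvdb://81189"]},
    ]

    imdb_ids = [
        guid.split("//")[-1]
        for item in sample_items
        for guid in item["guids"]
        if guid.startswith("imdb://")
    ]
    print(imdb_ids)  # ['tt0137523']
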
f"https://metadata.provider.plex.tv/library/sections/watchlist/all?X-Plex-Token={self.token}&{filter_params}" response = get(url) - if not response.is_ok: - return [] - ids = [] + if not response.is_ok or not hasattr(response.data, "MediaContainer"): + yield + return for item in response.data.MediaContainer.Metadata: - if not item.ratingKey: - continue - imdb_id = self._ratingkey_to_imdbid(item.ratingKey) - if imdb_id: - ids.append(imdb_id) - return ids + if hasattr(item, "ratingKey") and item.ratingKey: + imdb_id = self._ratingkey_to_imdbid(item.ratingKey) + if imdb_id: + yield imdb_id def _ratingkey_to_imdbid(self, ratingKey: str) -> str: """Convert Plex rating key to IMDb ID""" - filter_params = "includeGuids=1&includeFields=guid,title,year&includeElements=Guid" + filter_params = ( + "includeGuids=1&includeFields=guid,title,year&includeElements=Guid" + ) url = f"https://metadata.provider.plex.tv/library/metadata/{ratingKey}?X-Plex-Token={self.token}&{filter_params}" response = get(url) - if not response.is_ok: - return None - metadata = response.data.MediaContainer.Metadata - if not metadata or not hasattr(metadata[0], "Guid"): - return None - for guid in metadata[0].Guid: - if "imdb://" in guid.id: - return guid.id.split("//")[-1] + if response.is_ok and hasattr(response.data, "MediaContainer"): + if hasattr(response.data.MediaContainer.Metadata[0], "Guid"): + for guid in response.data.MediaContainer.Metadata[0].Guid: + if "imdb://" in guid.id: + return guid.id.split("//")[-1] + self.not_found_ids.append(ratingKey) return None diff --git a/backend/program/content/trakt.py b/backend/program/content/trakt.py index 86d057cd..ca41064e 100644 --- a/backend/program/content/trakt.py +++ b/backend/program/content/trakt.py @@ -1,63 +1,42 @@ """Mdblist content module""" from time import time -from typing import Optional -from pydantic import BaseModel -from utils.settings import settings_manager +from program.settings.manager import settings_manager from utils.logger import logger -from utils.request import get, ping -from program.media.container import MediaItemContainer -from program.updaters.trakt import Updater as Trakt, CLIENT_ID - - -class TraktConfig(BaseModel): - enabled: bool - watchlist: Optional[list] - collection: Optional[list] - user_lists: Optional[list] - api_key: Optional[str] - update_interval: int # in seconds +# from program.indexers.trakt import TraktIndexer class Trakt: """Content class for Trakt""" - def __init__(self, media_items: MediaItemContainer): + def __init__(self): self.key = "trakt" - self.url = None - self.settings = TraktConfig(**settings_manager.get(f"content.{self.key}")) + self.api_url = "https://api.trakt.tv" + self.settings = settings_manager.settings.content.trakt self.headers = {"X-Api-Key": self.settings.api_key} - self.initialized = self.validate_settings() + self.initialized = self.validate() if not self.initialized: return - self.media_items = media_items self.updater = Trakt() self.next_run_time = 0 logger.info("Trakt initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: """Validate Trakt settings.""" return NotImplementedError def run(self): """Fetch media from Trakt and add them to media_items attribute.""" - if time() < self.next_run_time: - return self.next_run_time = time() + self.settings.update_interval watchlist_items = self._get_items_from_trakt_watchlist(self.settings.watchlist) - collection_items = self._get_items_from_trakt_collections(self.settings.collection) + collection_items = 
self._get_items_from_trakt_collections( + self.settings.collection + ) user_list_items = self._get_items_from_trakt_list(self.settings.user_lists) items = list(set(watchlist_items + collection_items + user_list_items)) - new_items = [item for item in items if item not in self.media_items] - container = self.updater.create_items(new_items) + container = self.updater.create_items(items) for item in container: - item.set("requested_by", "Trakt") - added_items = self.media_items.extend(container) - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.info("Added %s", item.log_string) - elif length > 5: - logger.info("Added %s items", length) + item.set("requested_by", self.__class__) + yield from container def _get_items_from_trakt_watchlist(self, watchlist_items: list) -> list: """Get items from Trakt watchlist""" diff --git a/backend/program/indexers/__init__.py b/backend/program/indexers/__init__.py new file mode 100644 index 00000000..a41e522b --- /dev/null +++ b/backend/program/indexers/__init__.py @@ -0,0 +1 @@ +from .trakt import TraktIndexer \ No newline at end of file diff --git a/backend/program/indexers/trakt.py b/backend/program/indexers/trakt.py new file mode 100644 index 00000000..84c8e88a --- /dev/null +++ b/backend/program/indexers/trakt.py @@ -0,0 +1,177 @@ +"""Trakt updater module""" + +from datetime import datetime, timedelta +from typing import Optional +from utils.logger import logger +from utils.request import get +from program.media.item import Movie, Show, Season, Episode, MediaItem, ItemId +from program.settings.manager import settings_manager + +CLIENT_ID = "0183a05ad97098d87287fe46da4ae286f434f32e8e951caad4cc147c947d79a3" + + +class TraktIndexer: + """Trakt updater class""" + + def __init__(self): + self.key = 'traktindexer' + self.ids = [] + self.initialized = True + self.settings = settings_manager.settings.indexer + + def run(self, item: MediaItem): + if not item: + logger.error("Item is None") + return None + if (imdb_id := item.imdb_id) is None: + logger.error("Item %s does not have an imdb_id, cannot index it", item.log_string) + return None + item = create_item_from_imdb_id(imdb_id) + if not item: + logger.error("Failed to get item from imdb_id: %s", imdb_id) + return None + elif item.type == "show": + seasons = get_show(imdb_id) + for season in seasons: + if season.number == 0: + continue + season_item = _map_item_from_data(season, "season") + for episode in season.episodes: + episode_item = _map_item_from_data(episode, "episode") + season_item.add_episode(episode_item) + item.add_season(season_item) + item.indexed_at = datetime.now() + yield item + + + @staticmethod + def should_submit(item: MediaItem) -> bool: + if not item.indexed_at: + return True + settings = settings_manager.settings.indexer + interval = timedelta(seconds=settings.update_interval) + return item.indexed_at < datetime.now() - interval + +def _map_item_from_data(data, item_type) -> MediaItem: + """Map trakt.tv API data to MediaItemContainer""" + if item_type not in ["movie", "show", "season", "episode"]: + logger.debug( + "Unknown item type %s for %s not found in list of acceptable objects", + item_type, + data.title, + ) + return None + formatted_aired_at = None + if getattr(data, "first_aired", None) and ( + item_type == "show" + or (item_type == "season" and data.aired_episodes == data.episode_count) + or item_type == "episode" + ): + aired_at = data.first_aired + formatted_aired_at = datetime.strptime(aired_at, "%Y-%m-%dT%H:%M:%S.%fZ") + if 
getattr(data, "released", None): + released_at = data.released + formatted_aired_at = datetime.strptime(released_at, "%Y-%m-%d") + item = { + "title": getattr(data, "title", None), # 'Game of Thrones' + "year": getattr(data, "year", None), # 2011 + "status": getattr( + data, "status", None + ), # 'ended', 'released', 'returning series' + "aired_at": formatted_aired_at, # datetime.datetime(2011, 4, 17, 0, 0) + "imdb_id": getattr(data.ids, "imdb", None), # 'tt0496424' + "tvdb_id": getattr(data.ids, "tvdb", None), # 79488 + "tmdb_id": getattr(data.ids, "tmdb", None), # 1399 + "genres": getattr( + data, "genres", None + ), # ['Action', 'Adventure', 'Drama', 'Fantasy'] + "network": getattr(data, "network", None), # 'HBO' + "country": getattr(data, "country", None), # 'US' + "language": getattr(data, "language", None), # 'en' + "requested_at": datetime.now(), # datetime.datetime(2021, 4, 17, 0, 0) + "is_anime": "anime" in getattr(data, "genres", []), + } + + match item_type: + case "movie": + return_item = Movie(item) + case "show": + return_item = Show(item) + case "season": + item["number"] = getattr(data, "number") + return_item = Season(item) + case "episode": + item["number"] = getattr(data, "number") + return_item = Episode(item) + case _: + logger.debug("Unknown item type %s for %s", item_type, data.title) + return_item = None + return return_item + + +# API METHODS + + +def get_show(imdb_id: str): + """Wrapper for trakt.tv API show method""" + url = f"https://api.trakt.tv/shows/{imdb_id}/seasons?extended=episodes,full" + response = get( + url, + additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, + ) + if response.is_ok: + if response.data: + return response.data + return [] + + +def create_item_from_imdb_id(imdb_id: str) -> MediaItem: + """Wrapper for trakt.tv API search method""" + url = f"https://api.trakt.tv/search/imdb/{imdb_id}?extended=full" + response = get( + url, + additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, + ) + if response.is_ok and len(response.data) > 0: + try: + media_type = response.data[0].type + if media_type == "movie": + data = response.data[0].movie + elif media_type == "show": + data = response.data[0].show + elif media_type == "season": + data = response.data[0].season + elif media_type == "episode": + data = response.data[0].episode + if data: + return _map_item_from_data(data, media_type) + except UnboundLocalError: + logger.error("Unknown item %s with response %s", imdb_id, response.content) + return None + return None + + +def get_imdbid_from_tvdb(tvdb_id: str) -> str | None: + """Get IMDb ID from TVDB ID in Trakt""" + url = f"https://api.trakt.tv/search/tvdb/{tvdb_id}?extended=full" + response = get( + url, + additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, + ) + if response.is_ok and len(response.data) > 0: + # noticing there are multiple results for some TVDB IDs + # TODO: Need to check item.type and compare to the resulting types.. 
+ return response.data[0].show.ids.imdb + return None + + +def get_imdbid_from_tmdb(tmdb_id: str) -> str | None: + """Get IMDb ID from TMDB ID in Trakt""" + url = f"https://api.trakt.tv/search/tmdb/{tmdb_id}?extended=full" + response = get( + url, + additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, + ) + if response.is_ok and len(response.data) > 0: + return response.data[0].movie.ids.imdb + return None diff --git a/backend/program/libaries/__init__.py b/backend/program/libaries/__init__.py new file mode 100644 index 00000000..229bed76 --- /dev/null +++ b/backend/program/libaries/__init__.py @@ -0,0 +1,2 @@ +from .plex import PlexLibrary +from .symlink import SymlinkLibrary \ No newline at end of file diff --git a/backend/program/libaries/plex.py b/backend/program/libaries/plex.py new file mode 100644 index 00000000..68b3a715 --- /dev/null +++ b/backend/program/libaries/plex.py @@ -0,0 +1,170 @@ +"""Plex library module""" + +import concurrent.futures +from threading import Lock +import os +from datetime import datetime +from typing import Optional +from plexapi.server import PlexServer +from plexapi.exceptions import BadRequest, Unauthorized +from utils.logger import logger +from program.settings.manager import settings_manager +from program.media.item import ( + Movie, + Show, + Season, + Episode, + ItemId +) + +class PlexLibrary(): + """Plex library class""" + + def __init__(self): + self.key = "plexlibrary" + self.initialized = False + self.library_path = os.path.abspath( + os.path.dirname(settings_manager.settings.symlink.library_path) + ) + self.last_fetch_times = {} + self.settings = settings_manager.settings.plex + try: + self.plex = PlexServer(self.settings.url, self.settings.token, timeout=60) + except Unauthorized: + logger.error("Plex is not authorized!") + return + except BadRequest as e: + logger.error("Plex is not configured correctly: %s", e) + return + except Exception as e: + logger.error("Plex exception thrown: %s", e) + return + self.log_worker_count = False + self.initialized = isinstance(self.plex, PlexServer) + if not self.initialized: + logger.error("Plex is not initialized!") + return + logger.info("Plex initialized!") + self.lock = Lock() + + def _get_last_fetch_time(self, section): + return self.last_fetch_times.get(section.key, datetime(1800, 1, 1)) + + def run(self): + """Run Plex library""" + items = [] + sections = self.plex.library.sections() + processed_sections = set() + # ThreadPoolExecutor requires an integer worker count + max_workers = max(os.cpu_count() // 2, 1) + with concurrent.futures.ThreadPoolExecutor( + max_workers=max_workers, thread_name_prefix="Plex" + ) as executor: + for section in sections: + is_wanted = self._is_wanted_section(section) + if section.key in processed_sections or not is_wanted: + continue + if section.refreshing: + processed_sections.add(section.key) + continue + # Fetch only items that have been added or updated since the last fetch + last_fetch_time = self._get_last_fetch_time(section) + filters = {} if not self.last_fetch_times else {"addedAt>>": last_fetch_time} + future_items = { + executor.submit(self._create_item, item) + for item in section.search(libtype=section.type, filters=filters) + } + for future in concurrent.futures.as_completed(future_items): + media_item = future.result() + items.append(media_item) + with self.lock: + self.last_fetch_times[section.key] = datetime.now() + processed_sections.add(section.key) + + if not processed_sections: + logger.error( + "Failed to process any sections. 
Ensure that your library_path" + f" of {self.library_path} folders are included in the relevant sections" + " (found in Plex Web UI Settings > Manage > Libraries > Edit Library)." + ) + return + yield from items + + def _create_item(self, raw_item): + """Create a MediaItem from Plex API data.""" + item = _map_item_from_data(raw_item) + if not item or raw_item.type != "show": + return item + for season in raw_item.seasons(): + if season.seasonNumber == 0: + continue + if not (season_item := _map_item_from_data(season)): + continue + episode_items = [] + for episode in season.episodes(): + episode_item = _map_item_from_data(episode) + if episode_item: + episode_items.append(episode_item) + season_item.episodes = episode_items + item.seasons.append(season_item) + return item + + def _is_wanted_section(self, section): + section_located = any( + self.library_path in location for location in section.locations + ) + return section_located and section.type in ["movie", "show"] + + +def _map_item_from_data(item): + """Map Plex API data to MediaItemContainer.""" + file = None + guid = getattr(item, "guid", None) + if item.type in ["movie", "episode"]: + locations = getattr(item, "locations", []) + file = locations[0].split("/")[-1] if locations else None + genres = [genre.tag for genre in getattr(item, "genres", [])] + is_anime = "anime" in genres + title = getattr(item, "title", None) + key = getattr(item, "key", None) + season_number = getattr(item, "seasonNumber", None) + episode_number = getattr(item, "episodeNumber", None) + art_url = getattr(item, "artUrl", None) + imdb_id = None + tvdb_id = None + aired_at = None + + if item.type in ["movie", "show"]: + guids = getattr(item, "guids", []) + imdb_id = next( + (guid.id.split("://")[-1] for guid in guids if "imdb" in guid.id), None + ) + aired_at = getattr(item, "originallyAvailableAt", None) + + media_item_data = { + "title": title, + "imdb_id": imdb_id, + "tvdb_id": tvdb_id, + "aired_at": aired_at, + "genres": genres, + "key": key, + "guid": guid, + "art_url": art_url, + "file": file, + "is_anime": is_anime, + } + + # Instantiate the appropriate subclass based on 'item_type' + if item.type == "movie": + return Movie(media_item_data) + elif item.type == "show": + return Show(media_item_data) + elif item.type == "season": + media_item_data["number"] = season_number + return Season(media_item_data) + elif item.type == "episode": + media_item_data["number"] = episode_number + media_item_data["season_number"] = season_number + return Episode(media_item_data) + else: + # Specials may end up here.. + logger.error("Unknown Item: %s with type %s", item.title, item.type) + return None
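The addedAt>> filter passed to section.search() is what makes repeat runs of PlexLibrary.run() incremental: after the first full fetch, only items added since the recorded timestamp come back. The same pattern in isolation, with a fake search callable standing in for plexapi's LibrarySection.search:

    from datetime import datetime

    last_fetch_times = {}

    def fetch_section(section_key, search):
        # First run: empty filter fetches everything. Later runs: only
        # items added after the timestamp recorded for the previous fetch.
        last = last_fetch_times.get(section_key, datetime(1800, 1, 1))
        filters = {} if not last_fetch_times else {"addedAt>>": last}
        items = search(filters)
        last_fetch_times[section_key] = datetime.now()
        return items

    added = [(datetime(2024, 1, 1), "movie-a"), (datetime.now(), "movie-b")]
    fake_search = lambda filters: [
        title for ts, title in added
        if "addedAt>>" not in filters or ts > filters["addedAt>>"]
    ]
    print(fetch_section(1, fake_search))  # both items on the first pass
    print(fetch_section(1, fake_search))  # [] until something newer shows up
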
diff --git a/backend/program/libaries/symlink.py b/backend/program/libaries/symlink.py new file mode 100644 index 00000000..d7828108 --- /dev/null +++ b/backend/program/libaries/symlink.py @@ -0,0 +1,72 @@ +import os +import re +from pathlib import Path +from typing import Generator + +from utils.logger import logger +from program.settings.manager import settings_manager +from program.media.item import ( + MediaItem, + Movie, + Show, + Season, + Episode, + ItemId +) + +class SymlinkLibrary: + def __init__(self): + self.key = "symlinklibrary" + self.last_fetch_times = {} + self.settings = settings_manager.settings.symlink + self.initialized = True + + def run(self) -> Generator[MediaItem, None, None]: + """Create a library from the symlink paths. Return stub items that should + be fed into an Indexer to have the rest of the metadata filled in.""" + movies = [ + (root, files[0]) + for root, _, files + in os.walk(self.settings.library_path / "movies") + if files + ] + for path, filename in movies: + imdb_id = re.search(r'(tt\d+)', filename) + if not imdb_id: + logger.error("Can't extract movie imdb_id at path %s", os.path.join(path, filename)) + continue + movie_item = Movie({'imdb_id': imdb_id.group()}) + movie_item.update_folder = "updated" + yield movie_item + + shows_dir = self.settings.library_path / "shows" + for show in os.listdir(shows_dir): + imdb_id = re.search(r'(tt\d+)', show) + title = re.search(r'(.+)?( \()', show) + if not imdb_id or not title: + logger.error( + "Can't extract show imdb_id or title at path %s", + shows_dir / show + ) + continue + show_item = Show({'imdb_id': imdb_id.group(), 'title': title.group(1)}) + for season in os.listdir(shows_dir / show): + if not (season_number := re.search(r'(\d+)', season)): + logger.error( + "Can't extract season number at path %s", + shows_dir / show / season + ) + continue + season_item = Season({'number': int(season_number.group())}) + for episode in os.listdir(shows_dir / show / season): + if not (episode_number := re.search(r's\d+e(\d+)', episode)): + logger.error( + "Can't extract episode number at path %s", + shows_dir / show / season / episode + ) + continue + episode_item = Episode({'number': int(episode_number.group(1))}) + episode_item.symlinked = True + episode_item.update_folder = "updated" + season_item.add_episode(episode_item) + show_item.add_season(season_item) + yield show_item \ No newline at end of file
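The two regexes above do the heavy lifting for stub creation: r'(tt\d+)' pulls an IMDb id out of a folder or file name, and r's\d+e(\d+)' pulls the episode number. A quick standalone check of both patterns against a representative, made-up filename:

    import re

    filename = "The Simpsons (1989) tt0096697 - s05e12.mkv"

    imdb = re.search(r"(tt\d+)", filename)
    episode = re.search(r"s\d+e(\d+)", filename)
    print(imdb.group())           # tt0096697
    print(int(episode.group(1)))  # 12
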
diff --git a/backend/program/media/__init__.py b/backend/program/media/__init__.py new file mode 100644 index 00000000..8cdeb86f --- /dev/null +++ b/backend/program/media/__init__.py @@ -0,0 +1,3 @@ +from .item import MediaItem, Episode, Show, Season, Movie +from .container import MediaItemContainer +from .state import States \ No newline at end of file diff --git a/backend/program/media/container.py b/backend/program/media/container.py index 8015052a..055d1dec 100644 --- a/backend/program/media/container.py +++ b/backend/program/media/container.py @@ -1,107 +1,152 @@ import os -import threading +from copy import deepcopy, copy import dill -from typing import List, Optional -from utils.logger import logger -from program.media.item import MediaItem +from pickle import UnpicklingError +from typing import Generator +from program.media.item import MediaItem, Episode, Season, Show, ItemId, Movie +from program.media.state import States +from utils.logger import logger class MediaItemContainer: """MediaItemContainer class""" - def __init__(self, items: Optional[List[MediaItem]] = None): - self.items = items if items is not None else [] - self.lock = threading.Lock() + def __init__(self): + self._items = {} + self._shows = {} + self._seasons = {} + self._episodes = {} + self._movies = {} - def __iter__(self): - for item in self.items: + def __iter__(self) -> Generator[MediaItem, None, None]: + for item in self._items.values(): yield item + + def __contains__(self, item) -> bool: + return item in self._items - def __iadd__(self, other): - if not isinstance(other, MediaItem) and other is not None: - raise TypeError("Cannot append non-MediaItem to MediaItemContainer") - if other not in self.items: - self.items.append(other) - return self - - def sort(self, by, reverse): - """Sort container by given attribute""" - try: - self.items.sort(key=lambda item: item.get(by), reverse=reverse) - except AttributeError: - pass # Fixes: 'NoneType' object has no attribute 'get' - caused by Trakt not able to create an item - - def __len__(self): + def __len__(self) -> int: """Get length of container""" - return len(self.items) - - def append(self, item) -> bool: - """Append item to container""" - with self.lock: - self.items.append(item) - self.sort("requested_at", True) - - def get(self, item) -> MediaItem: - """Get item matching given item from container""" - for my_item in self.items: - if my_item == item: - return my_item - return None - - def get_item_by_id(self, itemid) -> MediaItem: - """Get item matching given item from container""" - for my_item in self.items: - if my_item.itemid == int(itemid): - return my_item - return None - - def get_item_by_imdb_id(self, imdb_id) -> MediaItem: - """Get item matching given item from container""" - for my_item in self.items: - if my_item.imdb_id == imdb_id: - return my_item - return None - - def get_item(self, attr, value) -> "MediaItemContainer": - """Get items that match given items""" - return next((item for item in self.items if getattr(item, attr) == value), None) - - def extend(self, items) -> "MediaItemContainer": - """Extend container with items""" - with self.lock: - added_items = MediaItemContainer() - for media_item in items: - if media_item not in self.items: - self.items.append(media_item) - added_items.append(media_item) - self.sort("requested_at", True) - return added_items - - def remove(self, item): + return len(self._items) + + def __getitem__(self, item_id: ItemId) -> MediaItem: + return deepcopy(self._items[item_id]) + + def get(self, key, default=None) -> MediaItem: + return deepcopy(self._items.get(key, default)) + + @property + def seasons(self) -> dict[ItemId, Season]: + return deepcopy(self._seasons) + + @property + def episodes(self) -> dict[ItemId, Episode]: + return deepcopy(self._episodes) + + @property + def shows(self) -> dict[ItemId, Show]: + return deepcopy(self._shows) + + @property + def movies(self) -> dict[ItemId, Movie]: + return deepcopy(self._movies) + + def upsert(self, item: MediaItem) -> None: + """Iterate through the input item and upsert all parents and children.""" + # Use deepcopy so that further modifications made to the input item + # will not affect the container state + item = deepcopy(item) + self._items[item.item_id] = item + detached = item.item_id.parent_id is None or item.parent is None + if isinstance(item, (Season, Episode)) and detached: + logger.error( + "%s item %s is detached and not associated with a parent, and thus" + + " it cannot be upserted into the database", + item.__class__.__name__, item.log_string + ) + raise ValueError("Item detached from parent") + if isinstance(item, Show): + self._shows[item.item_id] = item + for season in item.seasons: + season.parent = item + self._items[season.item_id] = season + self._seasons[season.item_id] = season + for episode in season.episodes: + episode.parent = season + self._items[episode.item_id] = episode + self._episodes[episode.item_id] = episode + if isinstance(item, Season): + self._seasons[item.item_id] = item + # update children + for episode in item.episodes: + episode.parent = item + self._items[episode.item_id] = episode + self._episodes[episode.item_id] = episode + # Ensure the parent Show is updated in the container + container_show: Show = self._items[item.item_id.parent_id] + parent_index = container_show.get_season_index_by_id(item.item_id) + if parent_index is not None: 
container_show.seasons[parent_index] = item + elif isinstance(item, Episode): + self._episodes[item.item_id] = item + # Ensure the parent Season is updated in the container + container_season: Season = self._items[item.item_id.parent_id] + parent_index = container_season.get_episode_index_by_id(item.item_id) + if parent_index is not None: + container_season.episodes[parent_index] = item + elif isinstance(item, Movie): + self._movies[item.item_id] = item + + def remove(self, item) -> None: """Remove item from container""" - if item in self.items: - self.items.remove(item) + if item.item_id in self._items: + del self._items[item.item_id] def count(self, state) -> int: """Count items with given state in container""" return len(self.get_items_with_state(state)) - def get_items_with_state(self, state): - """Get items that need to be updated""" - return MediaItemContainer([item for item in self.items if item.state == state]) - - def save(self, filename): + def get_items_with_state(self, state) -> dict[ItemId, MediaItem]: + """Get items with the specified state""" + return { + item_id: self[item_id] + for item_id, item in self._items.items() + if item.state == state + } + + def get_incomplete_items(self) -> dict[ItemId, MediaItem]: + """Get items with the specified state.""" + return { + # direct self access deep copies the item before passing it + item_id: self[item_id] + # We need to copy first in case there are additions or deletions while we are iterating + for item_id, item in copy(self._items).items() + if item.state not in (States.Completed, States.PartiallyCompleted) + } + + def save(self, filename) -> None: """Save container to file""" with open(filename, "wb") as file: - dill.dump(self.items, file) + dill.dump(self, file) - def load(self, filename): + def load(self, filename) -> None: """Load container from file""" + logger.info("Loading cached media data from %s", filename) try: with open(filename, "rb") as file: - self.items = dill.load(file) + from_disk = dill.load(file) + self._items = from_disk._items + self._movies = from_disk._movies + self._shows = from_disk._shows + self._seasons = from_disk._seasons + self._episodes = from_disk._episodes except FileNotFoundError: - self.items = [] - except EOFError: + logger.error("Cannot find cached media data at %s", filename) + except (EOFError, UnpicklingError): + logger.error("Failed to unpickle media data at %s, wiping cached data", filename) os.remove(filename) - self.items = [] + self._items = {} + self._movies = {} + self._shows = {} + self._seasons = {} + self._episodes = {} diff --git a/backend/program/media/item.py b/backend/program/media/item.py index 1ce55577..b40c737e 100644 --- a/backend/program/media/item.py +++ b/backend/program/media/item.py @@ -1,41 +1,56 @@ -import threading from datetime import datetime -from program.media.state import ( - Unknown, - Content, - Scrape, - Download, - Symlink, - Library, - LibraryPartial, -) +from dataclasses import dataclass +from program.media.state import States +from typing import Self, Optional from utils.parser import parser +@dataclass +class ItemId: + value: str + parent_id: Optional[Self] = None + + + def __repr__(self): + if not self.parent_id: + return str(self.value) + return f"{self.parent_id}/{self.value}" + + def __hash__(self): + return hash(self.__repr__()) + + class MediaItem: """MediaItem class""" def __init__(self, item): - self._lock = threading.Lock() - self.itemid = item_id.get_next_value() - self.scraped_at = datetime(1970, 1, 1) + self.requested_at = 
item.get("requested_at", None) or datetime.now() + self.requested_by = item.get("requested_by", None) + + self.indexed_at = None + + self.scraped_at = None self.scraped_times = 0 self.active_stream = item.get("active_stream", None) self.streams = {} + self.symlinked = False - self.requested_at = item.get("requested_at", None) or datetime.now() - self.requested_by = item.get("requested_by", None) + self.symlinked_at = None + self.symlinked_times = 0 + self.file = None self.folder = None - self.is_anime = False - self.parsed = False + self.is_anime = item.get("is_anime", False) self.parsed_data = item.get("parsed_data", []) + self.parent = None # Media related self.title = item.get("title", None) self.imdb_id = item.get("imdb_id", None) if self.imdb_id: self.imdb_link = f"https://www.imdb.com/title/{self.imdb_id}/" + if not hasattr(self, 'item_id'): + self.item_id = ItemId(self.imdb_id) self.tvdb_id = item.get("tvdb_id", None) self.tmdb_id = item.get("tmdb_id", None) self.network = item.get("network", None) @@ -48,35 +63,42 @@ def __init__(self, item): self.key = item.get("key", None) self.guid = item.get("guid", None) self.update_folder = item.get("update_folder", None) - self.state.set_context(self) - - def perform_action(self, modules): - with self._lock: - self.state.perform_action(modules) @property def state(self): - _state = self._determine_state() - _state.set_context(self) - return _state + return self._determine_state() def _determine_state(self): if self.key or self.update_folder == "updated": - return Library() - if self.symlinked: - return Symlink() - if self.file and self.folder: - return Download() - if len(self.streams) > 0: - return Scrape() - if self.title: - return Content() - return Unknown() + return States.Completed + elif self.symlinked: + return States.Symlinked + elif self.file and self.folder: + return States.Downloaded + elif self.is_scraped(): + return States.Scraped + elif self.title: + return States.Indexed + elif self.imdb_id and self.requested_by: + return States.Requested + else: + return States.Unknown + + def copy_other_media_attr(self, other): + self.title = getattr(other, "title", None) + self.tvdb_id = getattr(other, "tvdb_id", None) + self.tmdb_id = getattr(other, "tmdb_id", None) + self.network = getattr(other, "network", None) + self.country = getattr(other, "country", None) + self.language = getattr(other, "language", None) + self.aired_at = getattr(other, "aired_at", None) + self.genres = getattr(other, "genres", []) def is_scraped(self): return len(self.streams) > 0 def is_checked_for_availability(self): + """Check if item has been checked for availability.""" if self.streams: return all( stream.get("cached", None) is not None @@ -85,45 +107,61 @@ def is_checked_for_availability(self): return False def to_dict(self): + """Convert item to dictionary (API response)""" return { - "item_id": self.itemid, + "item_id": str(self.item_id), "title": self.title, - "type": self.type, + "type": self.__class__.__name__, "imdb_id": self.imdb_id if hasattr(self, "imdb_id") else None, "tvdb_id": self.tvdb_id if hasattr(self, "tvdb_id") else None, "tmdb_id": self.tmdb_id if hasattr(self, "tmdb_id") else None, - "state": self.state.__class__.__name__, + "state": self.state.value, "imdb_link": self.imdb_link if hasattr(self, "imdb_link") else None, "aired_at": self.aired_at, "genres": self.genres if hasattr(self, "genres") else None, "guid": self.guid, - "requested_at": self.requested_at, - "requested_by": self.requested_by, + "requested_at": str(self.requested_at), 
+ "requested_by": self.requested_by.__name__ if self.requested_by else None, "scraped_at": self.scraped_at, "scraped_times": self.scraped_times, } - def to_extended_dict(self): + def to_extended_dict(self, abbreviated_children=False): + """Convert item to extended dictionary (API response)""" dict = self.to_dict() - if self.type == "show": - dict["seasons"] = [season.to_extended_dict() for season in self.seasons] - if self.type == "season": - dict["episodes"] = [episode.to_extended_dict() for episode in self.episodes] - dict["language"] = (self.language if hasattr(self, "language") else None,) - dict["country"] = (self.country if hasattr(self, "country") else None,) - dict["network"] = (self.network if hasattr(self, "network") else None,) + match self: + case Show(): + dict["seasons"] = ( + [season.to_extended_dict() for season in self.seasons] + if not abbreviated_children + else self.represent_children + ) + case Season(): + dict["episodes"] = ( + [episode.to_extended_dict() for episode in self.episodes] + if not abbreviated_children + else self.represent_children + ) + dict["language"] = (self.language if hasattr(self, "language") else None) + dict["country"] = (self.country if hasattr(self, "country") else None) + dict["network"] = (self.network if hasattr(self, "network") else None) dict["active_stream"] = ( self.active_stream if hasattr(self, "active_stream") else None - ,) - dict["symlinked"] = (self.symlinked if hasattr(self, "symlinked") else None,) - dict["parsed"] = (self.parsed if hasattr(self, "parsed") else None,) - dict["parsed_data"] = (self.parsed_data if hasattr(self, "parsed_data") else None,) - dict["is_anime"] = (self.is_anime if hasattr(self, "is_anime") else None,) + ) + dict["symlinked"] = (self.symlinked if hasattr(self, "symlinked") else None) + dict["symlinked_at"] = (self.symlinked_at if hasattr(self, "symlinked_at") else None) + dict["symlinked_times"] = (self.symlinked_times if hasattr(self, "symlinked_times") else None) + + dict["parsed"] = (self.parsed if hasattr(self, "parsed") else None) + dict["parsed_data"] = ( + self.parsed_data if hasattr(self, "parsed_data") else None + ) + dict["is_anime"] = (self.is_anime if hasattr(self, "is_anime") else None) dict["update_folder"] = ( self.update_folder if hasattr(self, "update_folder") else None - ,) - dict["file"] = (self.file if hasattr(self, "file") else None,) - dict["folder"] = (self.folder if hasattr(self, "folder") else None,) + ) + dict["file"] = (self.file if hasattr(self, "file") else None) + dict["folder"] = (self.folder if hasattr(self, "folder") else None) return dict def __iter__(self): @@ -143,6 +181,15 @@ def set(self, key, value): """Set item attribute""" _set_nested_attr(self, key, value) + @property + def log_string(self): + return self.title or self.imdb_id + + @property + def collection(self): + return self.parent.collection if self.parent else self.item_id + + class Movie(MediaItem): """Movie class""" @@ -150,94 +197,136 @@ def __init__(self, item): self.type = "movie" self.file = item.get("file", None) super().__init__(item) + self.item_id = ItemId(self.imdb_id) def __repr__(self): - return f"Movie:{self.title}:{self.state.__class__.__name__}" - - @property - def log_string(self): - return self.title + return f"Movie:{self.log_string}:{self.state.name}" + class Show(MediaItem): """Show class""" def __init__(self, item): self.locations = item.get("locations", []) - self.seasons = item.get("seasons", []) + self.seasons: list[Season] = item.get("seasons", []) self.type = "show" 
super().__init__(item) + self.item_id = ItemId(self.imdb_id) + + def get_season_index_by_id(self, item_id): + """Find the index of a season by its item_id.""" + for i, season in enumerate(self.seasons): + if season.item_id == item_id: + return i + return None def _determine_state(self): - if all(season.state == Library for season in self.seasons): - return Library() + if all(season.state == States.Completed for season in self.seasons): + return States.Completed if any( - season.state == Library or season.state == LibraryPartial + season.state == States.Completed or season.state == States.PartiallyCompleted for season in self.seasons ): - return LibraryPartial() - if any(season.state == Symlink for season in self.seasons): - return Symlink() - if any(season.state == Download for season in self.seasons): - return Download() - if any(season.state == Scrape for season in self.seasons): - return Scrape() - if any(season.state == Content for season in self.seasons): - return Content() - return Unknown() + return States.PartiallyCompleted + if any(season.state == States.Symlinked for season in self.seasons): + return States.Symlinked + if any(season.state == States.Downloaded for season in self.seasons): + return States.Downloaded + if any(season.state == States.Scraped for season in self.seasons): + return States.Scraped + if any(season.state == States.Indexed for season in self.seasons): + return States.Indexed + if any(season.state == States.Requested for season in self.seasons): + return States.Requested + return States.Unknown def __repr__(self): - return f"Show:{self.title}:{self.state.__class__.__name__}" - + return f"Show:{self.log_string}:{self.state.name}" + + def fill_in_missing_children(self, other: Self): + existing_seasons = [s.number for s in self.seasons] + for s in other.seasons: + if s.number not in existing_seasons: + self.add_season(s) + else: + existing_season = next(es for es in self.seasons if s.number == es.number) + existing_season.fill_in_missing_children(s) + def add_season(self, season): """Add season to show""" self.seasons.append(season) season.parent = self + season.item_id.parent_id = self.item_id + self.seasons = sorted(self.seasons, key=lambda s: s.number) - @property - def log_string(self): - return self.title - + def represent_children(self): + return [ + s.represent_children() + for s in self.seasons + ]
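Because add_season and add_episode assign item_id.parent_id when children are attached, the ItemId dataclass introduced earlier in this patch composes a path-like identifier, which is also what the container hashes and keys on. A small sketch of that chaining (ItemId reproduced from the patch, with a string self-reference instead of typing.Self for brevity):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ItemId:
        value: str
        parent_id: Optional["ItemId"] = None

        def __repr__(self):
            if not self.parent_id:
                return str(self.value)
            return f"{self.parent_id}/{self.value}"

        def __hash__(self):
            return hash(repr(self))

    show = ItemId("tt0944947")
    season = ItemId(1, parent_id=show)
    episode = ItemId(1, parent_id=season)
    print(repr(episode))  # tt0944947/1/1
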
class Season(MediaItem): """Season class""" def __init__(self, item): self.type = "season" - self.parent = None self.number = item.get("number", None) - self.episodes = item.get("episodes", []) + self.episodes: list[Episode] = item.get("episodes", []) + self.item_id = ItemId(self.number) super().__init__(item) + def get_episode_index_by_id(self, item_id): + """Find the index of an episode by its item_id.""" + for i, episode in enumerate(self.episodes): + if episode.item_id == item_id: + return i + return None + def _determine_state(self): if len(self.episodes) > 0: - if all(episode.state == Library for episode in self.episodes): - return Library() - if any(episode.state == Library for episode in self.episodes): - return LibraryPartial() - if all(episode.state == Symlink for episode in self.episodes): - return Symlink() + if all(episode.state == States.Completed for episode in self.episodes): + return States.Completed + if any(episode.state == States.Completed for episode in self.episodes): + return States.PartiallyCompleted + if all(episode.state == States.Symlinked for episode in self.episodes): + return States.Symlinked if all(episode.file and episode.folder for episode in self.episodes): - return Download() + return States.Downloaded if self.is_scraped(): - return Scrape() - if any(episode.state == Content for episode in self.episodes): - return Content() - return Unknown() + return States.Scraped + if all(episode.state == States.Indexed for episode in self.episodes): + return States.Indexed + if any(episode.state == States.Requested for episode in self.episodes): + return States.Requested + return States.Unknown def __eq__(self, other): - return self.number == other.number + if type(self) == type(other) and self.item_id.parent_id == other.item_id.parent_id: + return self.number == other.number + return False def __repr__(self): - return f"Season:{self.number}:{self.state.__class__.__name__}" + return f"Season:{self.number}:{self.state.name}" + + def fill_in_missing_children(self, other: Self): + existing_episodes = [s.number for s in self.episodes] + for e in other.episodes: + if e.number not in existing_episodes: + self.add_episode(e) + + def represent_children(self): + return [e.log_string for e in self.episodes] def add_episode(self, episode): """Add episode to season""" self.episodes.append(episode) episode.parent = self + episode.item_id.parent_id = self.item_id + self.episodes = sorted(self.episodes, key=lambda e: e.number) + @property def log_string(self): - return self.parent.title + " S" + str(self.number).zfill(2) + return self.parent.log_string + " S" + str(self.number).zfill(2) class Episode(MediaItem): @@ -245,24 +334,24 @@ class Episode(MediaItem): def __init__(self, item): self.type = "episode" - self.parent = None self.number = item.get("number", None) self.file = item.get("file", None) + self.item_id = ItemId(self.number) super().__init__(item) def __eq__(self, other): - if type(self) == type(other) and self.parent == other.parent: - return self.number == other.number + if type(self) == type(other) and self.item_id.parent_id == other.item_id.parent_id: + return self.number == other.number + return False def __repr__(self): - return f"Episode:{self.number}:{self.state.__class__.__name__}" + return f"Episode:{self.number}:{self.state.name}" def get_file_episodes(self): return parser.episodes(self.file) - + @property def log_string(self): - return self.parent.parent.title + " S" + str(self.parent.number).zfill(2) + "E" + str(self.number).zfill(2) + return f"{self.parent.log_string}E{self.number:02}" def _set_nested_attr(obj, key, value): @@ -280,15 +369,3 @@ def _set_nested_attr(obj, key, value): obj[key] = value else: setattr(obj, key, value) - - -class ItemId: - value = 0 - - @classmethod - def get_next_value(cls): - cls.value += 1 - return cls.value - - -item_id = ItemId() diff --git a/backend/program/media/state.py b/backend/program/media/state.py index 5826c014..e7ed54fb 100644 --- a/backend/program/media/state.py +++ b/backend/program/media/state.py @@ -1,104 +1,14 @@ from enum import Enum +class States(Enum): + Unknown = "Unknown" + Requested = "Requested" + Indexed = "Indexed" + Scraped = "Scraped" + Downloaded = "Downloaded" + Symlinked = "Symlinked" + Completed = "Completed" + PartiallyCompleted = "PartiallyCompleted" + Failed = "Failed" -class MediaItemState: - def __eq__(self, other) -> bool: - if type(other) == type: - return type(self) == other - return type(self) == type(other) - def set_context(self, context): - self.context = context - - def perform_action(self, _): - pass - - -class Unknown(MediaItemState): - def perform_action(self, _): - pass - - -class Content(MediaItemState): - def 
perform_action(self, modules): - scraper = next(module for module in modules if module.key == "scraping") - if self.context.type in ["movie", "season", "episode"]: - scraper.run(self.context) - if self.context.state == Content and self.context.type == "season": - for episode in self.context.episodes: - episode.state.perform_action(modules) - if self.context.type == "show": - for season in self.context.seasons: - if season.aired_at: - season.state.perform_action(modules) - else: - for episode in season.episodes: - episode.state.perform_action(modules) - - -class Scrape(MediaItemState): - def perform_action(self, modules): - debrid = next(module for module in modules if module.key == "real_debrid") - if self.context.type in ["movie", "season", "episode"]: - debrid.run(self.context) - if self.context.type == "show": - for season in self.context.seasons: - if season.aired_at and season.state == Scrape: - season.state.perform_action(modules) - else: - for episode in season.episodes: - episode.state.perform_action(modules) - if self.context.type == "season": - self.context.state.perform_action(modules) - - -class Download(MediaItemState): - def perform_action(self, modules): - symlink = next(module for module in modules if module.key == "symlink") - if self.context.type in ["movie", "episode"]: - symlink.run(self.context) - if self.context.type == "show": - for season in self.context.seasons: - for episode in season.episodes: - episode.state.perform_action(modules) - if self.context.type == "season": - for episode in self.context.episodes: - episode.state.perform_action(modules) - - -class Symlink(MediaItemState): - def perform_action(self, modules): - library = next(module for module in modules if module.key == "plex") - if self.context.type == "show": - for season in self.context.seasons: - season.state.perform_action(modules) - elif self.context.type == "season": - for episode in self.context.episodes: - episode.state.perform_action(modules) - else: - library.update_item_section(self.context) - -class Library(MediaItemState): - def perform_action(self, _): - pass - - -class LibraryPartial(MediaItemState): - def perform_action(self, modules): - if self.context.type == "show": - for season in self.context.seasons: - season.state.perform_action(modules) - if self.context.type == "season": - for episode in self.context.episodes: - episode.state.perform_action(modules) - - -# This for api to get states, not for program -class MediaItemStates(Enum): - Unknown = Unknown.__name__ - Content = Content.__name__ - Scrape = Scrape.__name__ - Download = Download.__name__ - Symlink = Symlink.__name__ - Library = Library.__name__ - LibraryPartial = LibraryPartial.__name__ diff --git a/backend/program/plex.py b/backend/program/plex.py deleted file mode 100644 index 496beb9f..00000000 --- a/backend/program/plex.py +++ /dev/null @@ -1,268 +0,0 @@ -"""Plex library module""" -import concurrent.futures -import os -import threading -import time -import uuid -from datetime import datetime -from typing import Optional -from plexapi.server import PlexServer -from plexapi.exceptions import BadRequest, Unauthorized -from pydantic import BaseModel -# from program.updaters.trakt import get_imdbid_from_tvdb -from utils.logger import logger -from utils.settings import settings_manager as settings -from program.media.container import MediaItemContainer -from program.media.state import Symlink, Library -from utils.request import get, post -from program.media.item import ( - Movie, - Show, - Season, - Episode, -) - - -class 
PlexConfig(BaseModel): - user: Optional[str] = None - token: Optional[str] = None - url: Optional[str] = None - - -class Plex(threading.Thread): - """Plex library class""" - - def __init__(self, media_items: MediaItemContainer): - super().__init__(name="Plex") - self.key = "plex" - self.initialized = False - self.library_path = os.path.abspath( - os.path.dirname(settings.get("symlink.container_path")) - ) - self.last_fetch_times = {} - - try: - self.settings = PlexConfig(**settings.get(self.key)) - self.plex = PlexServer( - self.settings.url, self.settings.token, timeout=60 - ) - except Unauthorized: - logger.warn("Plex is not authorized!") - return - except BadRequest as e: - logger.error("Plex is not configured correctly: %s", e) - return - except Exception as e: - logger.error("Plex exception thrown: %s", e) - return - self.running = False - self.log_worker_count = False - self.media_items = media_items - self._update_items(init=True) - self.initialized = True - logger.info("Plex initialized!") - - def run(self): - while self.running: - self._update_items() - for i in range(10): - time.sleep(i) - - def start(self): - self.running = True - super().start() - - def stop(self): - self.running = False - - def _get_last_fetch_time(self, section): - return self.last_fetch_times.get(section.key, datetime(1800, 1, 1)) - - def _update_items(self, init=False): - items = [] - sections = self.plex.library.sections() - processed_sections = set() - max_workers = os.cpu_count() / 2 - with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix="Plex") as executor: - for section in sections: - if section.key in processed_sections or not self._is_wanted_section(section): - continue - if not section.refreshing: - # Fetch only items that have been added or updated since the last fetch - last_fetch_time = self._get_last_fetch_time(section) - filters = {"addedAt>>": last_fetch_time} - if init: - filters = {} - future_items = {executor.submit(self._create_and_match_item, item) for item in section.search(libtype = section.type, filters=filters)} - for future in concurrent.futures.as_completed(future_items): - media_item = future.result() - items.append(media_item) - self.last_fetch_times[section.key] = datetime.now() - processed_sections.add(section.key) - - length = len(items) - if length >= 1 and length <= 5: - for item in items: - logger.info("Found %s from plex", item.log_string) - elif length > 5: - logger.info("Found %s items from plex", length) - - def update_item_section(self, item): - """Update plex library section for a single item""" - item_type = item.type - if item.type == "episode": - item_type = "show" - for section in self.plex.library.sections(): - if section.type != item_type: - continue - - if self._update_section(section, item): - logger.debug("Updated section %s for %s", section.title, item.log_string) - - def _update_section(self, section, item): - if item.state == Symlink and item.get("update_folder") != "updated": - update_folder = item.update_folder - section.update(update_folder) - item.set("update_folder", "updated") - return True - return False - - def _create_and_match_item(self, item): - new_item = self._create_item(item) - if new_item: - self.match_item(new_item) - return new_item - - def _create_item(self, item): - new_item = _map_item_from_data(item) - if new_item and item.type == "show": - for season in item.seasons(): - if season.seasonNumber != 0: - new_season = _map_item_from_data(season) - if new_season: - new_season_episodes = [] - for episode in 
season.episodes(): - new_episode = _map_item_from_data(episode) - if new_episode: - new_season_episodes.append(new_episode) - new_season.episodes = new_season_episodes - new_item.seasons.append(new_season) - return new_item - - def match_item(self, new_item): - for existing_item in self.media_items: - if existing_item.imdb_id == new_item.imdb_id: - self._update_item(existing_item, new_item) - break - # Leaving this here as a reminder to not forget about deleting items that are removed from plex, needs to be revisited - # if item.state is MediaItemState.LIBRARY and item not in found_items: - # self.media_items.remove(item) - - def _update_item(self, item, library_item): - items_updated = 0 - item.set("guid", library_item.guid) - item.set("key", library_item.key) - if item.type == "show": - for season in item.seasons: - for episode in season.episodes: - if episode.state != Library: - found_season = next((s for s in library_item.seasons if s.number == season.number), None) - if found_season: - found_episode = next((e for e in found_season.episodes if e.number == episode.number), None) - if found_episode: - episode.set("guid", found_episode.guid) - episode.set("key", found_episode.key) - items_updated += 1 - return items_updated - - def _is_wanted_section(self, section): - return any(self.library_path in location for location in section.locations) and section.type in ["movie", "show"] - - def _oauth(self): - random_uuid = uuid.uuid4() - response = get( - url="https://plex.tv/api/v2/user", - additional_headers={ - "X-Plex-Product": "Iceberg", - "X-Plex-Client-Identifier": random_uuid, - "X-Plex-Token": settings.get("plex.token"), - }, - ) - if not response.ok: - data = post( - url="https://plex.tv/api/v2/pins", - additional_headers={ - "strong": "true", - "X-Plex-Product": "Iceberg", - "X-Plex-Client-Identifier": random_uuid, - }, - ) - if data.ok: - pin = data.id - - -def _map_item_from_data(item): - """Map Plex API data to MediaItemContainer.""" - file = None - guid = getattr(item, "guid", None) - if item.type in ["movie", "episode"]: - file = getattr(item, "locations", [None])[0].split("/")[-1] - genres = [genre.tag for genre in getattr(item, "genres", [])] - is_anime = "anime" in genres - title = getattr(item, "title", None) - key = getattr(item, "key", None) - season_number = getattr(item, "seasonNumber", None) - episode_number = getattr(item, "episodeNumber", None) - art_url = getattr(item, "artUrl", None) - imdb_id = None - tvdb_id = None - aired_at = None - - if item.type in ["movie", "show"]: - guids = getattr(item, "guids", []) - imdb_id = next( - (guid.id.split("://")[-1] for guid in guids if "imdb" in guid.id), None - ) - aired_at = getattr(item, "originallyAvailableAt", None) - - # Attempt to get the imdb id from the tvdb id if we don't have it. - # Uses Trakt to get the imdb id from the tvdb id. - # if not imdb_id: - # tvdb_id = next( - # (guid.id.split("://")[-1] for guid in guids if "tvdb" in guid.id), None - # ) - # if tvdb_id: - # imdb_id = get_imdbid_from_tvdb(tvdb_id) - # if imdb_id: - # logger.debug("%s was missing IMDb ID, found IMDb ID from TVdb ID: %s", title, imdb_id) - # If we still don't have an imdb id, we could check TMdb or use external services like cinemeta. 
- - media_item_data = { - "title": title, - "imdb_id": imdb_id, - "tvdb_id": tvdb_id, - "aired_at": aired_at, - "genres": genres, - "key": key, - "guid": guid, - "art_url": art_url, - "file": file, - "is_anime": is_anime, - } - - # Instantiate the appropriate subclass based on 'item_type' - if item.type == "movie": - return Movie(media_item_data) - elif item.type == "show": - return Show(media_item_data) - elif item.type == "season": - media_item_data["number"] = season_number - return Season(media_item_data) - elif item.type == "episode": - media_item_data["number"] = episode_number - media_item_data["season_number"] = season_number - return Episode(media_item_data) - else: - # Specials may end up here.. - logger.error("Unknown Item: %s with type %s", item.title, item.type) - return None diff --git a/backend/program/program.py b/backend/program/program.py new file mode 100644 index 00000000..d85add33 --- /dev/null +++ b/backend/program/program.py @@ -0,0 +1,320 @@ +import os +import threading +import time +import traceback +import inspect +import json + +from concurrent.futures import ThreadPoolExecutor, Future +from datetime import datetime +from queue import Queue, Empty + +from apscheduler.schedulers.background import BackgroundScheduler +from coverage import Coverage +from deepdiff.diff import DeepDiff, PrettyOrderedSet + +from program.content import Overseerr, PlexWatchlist, Listrr, Mdblist +from program.state_transision import process_event +from program.indexers.trakt import TraktIndexer +from program.media.container import MediaItemContainer +from program.media.item import MediaItem +from program.media.state import States +from program.libaries import SymlinkLibrary +from program.realdebrid import Debrid +from program.scrapers import Scraping +from program.settings.manager import settings_manager +from program.symlink import Symlinker +from program.updaters.plex import PlexUpdater +from program.types import Event, Service, ProcessedEvent +from utils import data_dir_path +from utils.logger import logger +from utils.utils import Pickly + + +class Program(threading.Thread): + """Program class""" + + def __init__(self, args): + super().__init__(name="Iceberg") + self.running = False + self.startup_args = args + logger.configure_logger( + debug=settings_manager.settings.debug, + log=settings_manager.settings.log + ) + + def initialize_services(self): + self.library_services = { + SymlinkLibrary: SymlinkLibrary() + } + self.requesting_services = { + Overseerr: Overseerr(), + PlexWatchlist: PlexWatchlist(), + Listrr: Listrr(), + Mdblist: Mdblist(), + } + self.indexing_services = { + TraktIndexer: TraktIndexer() + } + self.processing_services = { + Scraping: Scraping(), + Debrid: Debrid(), + Symlinker: Symlinker(), + PlexUpdater: PlexUpdater() + } + self.services = { + **self.library_services, + **self.indexing_services, + **self.requesting_services, + **self.processing_services + } + + def start(self): + logger.info("Iceberg v%s starting!", settings_manager.settings.version) + settings_manager.register_observer(self.initialize_services) + self.initialized = False + self.event_queue = Queue() + os.makedirs(data_dir_path, exist_ok=True) + + try: + self.initialize_services() + except Exception: + logger.error(traceback.format_exc()) + + self.media_items = MediaItemContainer() + if not self.startup_args.ignore_cache: + self.pickly = Pickly(self.media_items, data_dir_path) + self.pickly.start() + if not len(self.media_items): + # seed initial MIC with Library State + for item in 
self.services[SymlinkLibrary].run(): + self.media_items.upsert(item) + + if self.validate(): + logger.info("Iceberg started!") + else: + logger.info("----------------------------------------------") + logger.info("Iceberg is waiting for configuration to start!") + logger.info("----------------------------------------------") + self.scheduler = BackgroundScheduler() + self.executor = ThreadPoolExecutor(thread_name_prefix="Worker") + self._schedule_services() + self._schedule_functions() + super().start() + self.scheduler.start() + self.running = True + self.initialized = True + + def _retry_library(self) -> None: + for item_id, item in self.media_items.get_incomplete_items().items(): + self.event_queue.put(Event(emitted_by=self.__class__, item=item)) + + def _schedule_functions(self) -> None: + """Schedule each registered function at its configured interval.""" + scheduled_functions = { + self._retry_library: { + 'interval': 60 * 10 + } + } + for func, config in scheduled_functions.items(): + self.scheduler.add_job( + func, + 'interval', + seconds=config['interval'], + args=config.get('args'), + id=f'{func.__name__}', + max_instances=1, + replace_existing=True, # Replace existing jobs with the same ID + next_run_time=datetime.now() + ) + logger.info("Scheduled %s to run every %s seconds.", func.__name__, config['interval']) + return + + def _schedule_services(self) -> None: + """Schedule each service based on its update interval.""" + scheduled_services = { **self.requesting_services, **self.library_services } + for service_cls, service_instance in scheduled_services.items(): + if not service_instance.initialized: + logger.info("Not scheduling %s due to not being initialized", service_cls.__name__) + continue + if not (update_interval := getattr(service_instance.settings, 'update_interval', False)): + logger.info( + "Service %s update_interval set to False or missing, " + + "not scheduling regular updates", + service_cls.__name__ + ) + continue + + self.scheduler.add_job( + self._submit_job, + 'interval', + seconds=update_interval, + args=[service_cls, None], + id=f'{service_cls.__name__}_update', + max_instances=1, + replace_existing=True, # Replace existing jobs with the same ID + next_run_time=datetime.now() if service_cls != SymlinkLibrary else None + ) + logger.info("Scheduled %s to run every %s seconds.", service_cls.__name__, update_interval) + return + + def _process_future_item(self, future: Future, service: Service, input_item: MediaItem) -> None: + """Callback to add the results from a future emitted by a service to the event queue.""" + try: + for item in future.result(): + if not isinstance(item, MediaItem): + logger.error("Service %s emitted item %s of type %s, skipping", service.__name__, item, item.__class__.__name__) + continue + self.event_queue.put(Event(emitted_by=service, item=item)) + except Exception: + logger.error("Service %s failed with exception %s", service.__name__, traceback.format_exc()) + + def _submit_job(self, service: Service, item: MediaItem | None) -> None: + logger.debug( + f"Submitting service {service.__name__} to the pool" + + (f" with {getattr(item, 'log_string', None) or item.item_id}" if item else "") + ) + func = self.services[service].run + future = self.executor.submit(func) if item is None else self.executor.submit(func, item) + future.add_done_callback(lambda f: self._process_future_item(f, service, item)) + + def run(self): + while self.running: + if not self.validate(): + time.sleep(1) + continue + try: + event: Event = 
self.event_queue.get(timeout=1) + except Empty: + # Unblock after waiting in case we are no longer supposed to be running + continue + existing_item = self.media_items.get(event.item.item_id, None) + func = ( + process_event_and_collect_coverage + if self.startup_args.profile_state_transitions + else process_event + ) + updated_item, next_service, items_to_submit = func( + existing_item, event.emitted_by, event.item + ) + + # before submitting the item to be processed, commit it to the container + if updated_item: + self.media_items.upsert(updated_item) + if updated_item.state == States.Completed: + logger.debug("%s %s has been completed", + updated_item.__class__.__name__, updated_item.log_string + ) + + for item_to_submit in items_to_submit: + self._submit_job(next_service, item_to_submit) + + def validate(self): + return any( + service.initialized + for service in self.requesting_services.values() + ) and all( + service.initialized + for service in self.processing_services.values() + ) + + def stop(self): + if hasattr(self, 'executor'): + self.executor.shutdown(wait=True) + if hasattr(self, 'pickly'): + self.pickly.stop() + settings_manager.save() + symlinker_service = self.processing_services.get(Symlinker) + if symlinker_service: + symlinker_service.stop_monitor() + if hasattr(self, 'scheduler'): + self.scheduler.shutdown(wait=False) # Don't block, doesn't contain data to consume + self.running = False + + +def custom_serializer(obj): + """ + If input object is a type (class), return its name as a string. + Otherwise, raise TypeError. + """ + if isinstance(obj, type): + return obj.__name__ + elif isinstance(obj, PrettyOrderedSet): + return list(obj) + +# Function to execute process_event and collect coverage data +def process_event_and_collect_coverage( + existing_item: MediaItem | None, + emitted_by: Service, + item: MediaItem +) -> ProcessedEvent: + file_path = inspect.getfile(process_event) + + # Load the source code and extract executed lines + with open(file_path, 'r') as file: + source_lines = file.readlines() + + lines, start_line_no = inspect.getsourcelines(process_event) + logic_start_line_no = next( + i + start_line_no + 1 + for i, l in enumerate(source_lines[start_line_no:]) + if l.strip().startswith("if ") + ) + end_line_no = logic_start_line_no + len(lines) - 1 + + cov = Coverage(branch=True) + cov.erase() + cov.start() + + # Call the process_event method + updated_item, next_service, items_to_submit = process_event( + existing_item, emitted_by, item + ) + + cov.stop() + cov.save() + + # Analyze the coverage data for this execution + _, executable_line_nos, excluded, not_executed, _ = cov.analysis2(file_path) + + + not_executed_set = set(not_executed) + executed_lines = [ + (i, source_lines[i-1]) # Adjust line numbers to 0-based indexing + for i in executable_line_nos + if logic_start_line_no <= i <= end_line_no + and i not in not_executed_set + ] + + existing = existing_item.to_extended_dict(abbreviated_children=True) if existing_item else None + current = item.to_extended_dict(abbreviated_children=True) if item else None + updated = updated_item.to_extended_dict(abbreviated_children=True) if updated_item else None + frame_data = { + "current_state": current, + "diffs": { + "existing_to_current": ( + DeepDiff(existing, current, ignore_order=True).to_dict() + if existing + else {} + ), + "current_to_updated": ( + DeepDiff(current, updated, ignore_order=True).to_dict() + if updated + else {} + ) + }, + "executed_lines": executed_lines, + "next_service": 
next_service.__name__ if next_service else None, + "items_to_submit": [i.log_string for i in items_to_submit], + } + # from pprint import pprint + # pprint(frame_data, indent=2) + frames_dir = data_dir_path / "frames" + os.makedirs(frames_dir, exist_ok=True) + # Write frame data to a JSONL file within the function + collection_filename = frames_dir / f"{item.collection}.jsonl" + with open(collection_filename, 'a') as f: + json.dump(frame_data, f, default=custom_serializer) + f.write('\n') # Newline to separate frames in the file + + return updated_item, next_service, items_to_submit diff --git a/backend/program/realdebrid.py b/backend/program/realdebrid.py index 61fd13ff..613e7ca5 100644 --- a/backend/program/realdebrid.py +++ b/backend/program/realdebrid.py @@ -1,44 +1,39 @@ """Realdebrid module""" -import os -from pathlib import Path import time -from typing import Optional -from pydantic import BaseModel +from pathlib import Path from requests import ConnectTimeout from utils.logger import logger from utils.request import get, post, ping -from utils.settings import settings_manager +from program.settings.manager import settings_manager from utils.parser import parser +from program.media.item import Season, Movie, Episode WANTED_FORMATS = [".mkv", ".mp4", ".avi"] RD_BASE_URL = "https://api.real-debrid.com/rest/1.0" -class DebridConfig(BaseModel): - api_key: Optional[str] - - class Debrid: """Real-Debrid API Wrapper""" - def __init__(self, _): + def __init__(self): # Realdebrid class library is a necessity self.initialized = False self.key = "real_debrid" - self.settings = DebridConfig(**settings_manager.get(self.key)) + self.settings = settings_manager.settings.real_debrid self.auth_headers = {"Authorization": f"Bearer {self.settings.api_key}"} self.running = False - if not self._validate_settings(): + if not self._validate(): logger.error("Realdebrid settings incorrect or not premium!") return logger.info("Real Debrid initialized!") + self.processed_torrents = set() self.initialized = True - def _validate_settings(self): + def _validate(self): try: response = ping( - "https://api.real-debrid.com/rest/1.0/user", + f"{RD_BASE_URL}/user", additional_headers=self.auth_headers, ) if response.ok: @@ -48,26 +43,31 @@ def _validate_settings(self): return False def run(self, item): - self.download(item) - - def download(self, item): """Download movie from real-debrid.com""" - downloaded = 0 - if self.is_cached(item): - if not self._is_downloaded(item): - downloaded = self._download_item(item) - else: - downloaded = True - self._set_file_paths(item) - return downloaded + if not self.is_cached(item): + return + if not self._is_downloaded(item): + self._download_item(item) + self._set_file_paths(item) + yield item + def _is_downloaded(self, item): + """Check if item is already downloaded""" torrents = self.get_torrents() for torrent in torrents: if torrent.hash == item.active_stream.get("hash"): info = self.get_torrent_info(torrent.id) if item.type == "episode": - if not any(file for file in info.files if file.selected == 1 and item.number in parser.episodes_in_season(item.parent.number, Path(file.path).name)): + if not any( + file + for file in info.files + if file.selected == 1 + and item.number + in parser.episodes_in_season( + item.parent.number, Path(file.path).name + ) + ): return False item.set("active_stream.id", torrent.id) @@ -77,6 +77,7 @@ def _is_downloaded(self, item): return False def _download_item(self, item): + """Download item from real-debrid.com""" request_id = 
self.add_magnet(item) item.set("active_stream.id", request_id) self.set_active_files(item) @@ -84,19 +85,15 @@ def _download_item(self, item): self.select_files(request_id, item) item.set("active_stream.id", request_id) logger.debug("Downloaded %s", item.log_string) - return 1 - - def _get_torrent_info(self, request_id): - data = self.get_torrent_info(request_id) - if not data["id"] in self._torrents.keys(): - self._torrents[data["id"]] = data def set_active_files(self, item): + """Set active files for item from real-debrid.com""" info = self.get_torrent_info(item.get("active_stream")["id"]) item.active_stream["alternative_name"] = info.original_filename item.active_stream["name"] = info.filename def is_cached(self, item): + """Check if item is cached on real-debrid.com""" if len(item.streams) == 0: return @@ -104,12 +101,13 @@ def chunks(lst, n): for i in range(0, len(lst), n): yield lst[i : i + n] - stream_chunks = list(chunks(list(item.streams), 5)) + filtered_streams = [hash for hash in item.streams if hash is not None] + stream_chunks = list(chunks(filtered_streams, 5)) for stream_chunk in stream_chunks: streams = "/".join(stream_chunk) response = get( - f"https://api.real-debrid.com/rest/1.0/torrents/instantAvailability/{streams}/", + f"{RD_BASE_URL}/torrents/instantAvailability/{streams}/", additional_headers=self.auth_headers, response_type=dict, ) @@ -119,38 +117,39 @@ def chunks(lst, n): for containers in provider_list.values(): for container in containers: wanted_files = {} - if item.type == "movie" and all(file["filesize"] > 200000 for file in container.values()): + if isinstance(item, Movie) and all(file["filesize"] > 200000 for file in container.values()): wanted_files = container - if item.type == "season" and all(any(episode.number in parser.episodes_in_season(item.number, file["filename"]) for file in container.values()) for episode in item.episodes): + if isinstance(item, Season) and all(any(episode.number in parser.episodes_in_season(item.number, file["filename"]) for file in container.values()) for episode in item.episodes): wanted_files = container - if item.type == "episode" and any(item.number in parser.episodes_in_season(item.parent.number, episode["filename"]) for episode in container.values()): + if isinstance(item, Episode) and any(item.number in parser.episodes_in_season(item.parent.number, episode["filename"]) for episode in container.values()): wanted_files = container if len(wanted_files) > 0 and all(item for item in wanted_files.values() if Path(item["filename"]).suffix in WANTED_FORMATS): item.set( "active_stream", {"hash": stream_hash, "files": wanted_files, "id": None}, ) - # all_filenames = [file_info["filename"] for file_info in wanted_files.values()] - # for file in all_filenames: - # logger.debug(f"Found cached file {file} for {item.log_string}") return True - item.streams[stream_hash] = None + item.streams[stream_hash] = None + logger.debug("[%s] No cached streams found for item: %s", stream_hash[-6:], item.log_string) return False def _set_file_paths(self, item): - if item.type == "movie": + """Set file paths for item from real-debrid.com""" + if isinstance(item, Movie): self._handle_movie_paths(item) - if item.type == "season": + elif isinstance(item, Season): self._handle_season_paths(item) - if item.type == "episode": + elif isinstance(item, Episode): self._handle_episode_paths(item) def _handle_movie_paths(self, item): + """Set file paths for movie from real-debrid.com""" item.set("folder", item.active_stream.get("name")) 
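# active_stream "name" is the torrent filename Real-Debrid reports, while
# "alternative_name" holds RD's original_filename (captured in set_active_files);
# the symlinker tries item.folder first and falls back to alternative_folder on disk.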
item.set("alternative_folder", item.active_stream.get("alternative_name")) item.set("file", next(file for file in item.active_stream.get("files").values())["filename"]) def _handle_season_paths(self, season): + """Set file paths for season from real-debrid.com""" for file in season.active_stream["files"].values(): for episode in parser.episodes_in_season(season.number, file["filename"]): if episode - 1 in range(len(season.episodes)): @@ -163,6 +162,7 @@ def _handle_season_paths(self, season): season.episodes[episode - 1].set("file", file["filename"]) def _handle_episode_paths(self, episode): + """Set file paths for episode from real-debrid.com""" file = next(file for file in episode.active_stream.get("files").values() if episode.number in parser.episodes_in_season(episode.parent.number, file["filename"])) episode.set("folder", episode.active_stream.get("name")) episode.set("alternative_folder", episode.active_stream.get("alternative_name")) @@ -173,7 +173,7 @@ def add_magnet(self, item) -> str: if not item.active_stream.get("hash"): return None response = post( - "https://api.real-debrid.com/rest/1.0/torrents/addMagnet", + f"{RD_BASE_URL}/torrents/addMagnet", { "magnet": "magnet:?xt=urn:btih:" + item.active_stream["hash"] @@ -188,7 +188,7 @@ def add_magnet(self, item) -> str: def get_torrents(self) -> str: """Add magnet link to real-debrid.com""" response = get( - "https://api.real-debrid.com/rest/1.0/torrents/", + f"{RD_BASE_URL}/torrents/", data={"offset": 0, "limit": 2500}, additional_headers=self.auth_headers, ) @@ -200,7 +200,7 @@ def select_files(self, request_id, item) -> bool: """Select files from real-debrid.com""" files = item.active_stream.get("files") response = post( - f"https://api.real-debrid.com/rest/1.0/torrents/selectFiles/{request_id}", + f"{RD_BASE_URL}/torrents/selectFiles/{request_id}", {"files": ",".join(files.keys())}, additional_headers=self.auth_headers, ) @@ -209,8 +209,8 @@ def select_files(self, request_id, item) -> bool: def get_torrent_info(self, request_id): """Get torrent info from real-debrid.com""" response = get( - f"https://api.real-debrid.com/rest/1.0/torrents/info/{request_id}", + f"{RD_BASE_URL}/torrents/info/{request_id}", additional_headers=self.auth_headers, ) if response.is_ok: - return response.data + return response.data \ No newline at end of file diff --git a/backend/program/scrapers/__init__.py b/backend/program/scrapers/__init__.py index b9fb3f42..703f8cc6 100644 --- a/backend/program/scrapers/__init__.py +++ b/backend/program/scrapers/__init__.py @@ -1,60 +1,61 @@ from datetime import datetime -from pydantic import BaseModel -from utils.service_manager import ServiceManager -from utils.settings import settings_manager as settings -# from utils.parser import parser, sort_streams from utils.logger import logger -from .torrentio import Torrentio -from .orionoid import Orionoid -from .jackett import Jackett +from program.settings.manager import settings_manager +from program.scrapers.torrentio import Torrentio +from program.scrapers.orionoid import Orionoid +from program.scrapers.jackett import Jackett +from program.media.item import MediaItem -class ScrapingConfig(BaseModel): - after_2: float - after_5: float - after_10: float - class Scraping: - def __init__(self, _): + def __init__(self): self.key = "scraping" self.initialized = False - self.settings = ScrapingConfig(**settings.get(self.key)) - self.sm = ServiceManager(None, False, Orionoid, Torrentio, Jackett) - if not any(service.initialized for service in self.sm.services): - 
logger.error( - "You have no scraping services enabled, please enable at least one!" + self.settings = settings_manager.settings.scraping + self.services = { + Orionoid: Orionoid(), + Torrentio: Torrentio(), + Jackett: Jackett() + } + self.initialized = self.validate() + + def run(self, item: MediaItem) -> MediaItem | None: + if not self._can_we_scrape(item): + return None + for service in self.services.values(): + if service.initialized: + item = next(service.run(item)) + item.set("scraped_at", datetime.now()) + item.set("scraped_times", item.scraped_times + 1) + yield item + + + def validate(self): + if not (validated := any(service.initialized for service in self.services.values())): + logger.error("You have no scraping services enabled," + " please enable at least one!" ) - return - self.initialized = True - - def run(self, item) -> None: - if self._can_we_scrape(item): - for service in self.sm.services: - if service.initialized: - service.run(item) - item.set("scraped_at", datetime.now()) - item.set("scraped_times", item.scraped_times + 1) - # sorted_streams = sort_streams(item.streams, parser) - # item.set("streams", sorted_streams) + return validated + + def _can_we_scrape(self, item: MediaItem) -> bool: + return self._is_released(item) and self.should_submit(item) - def _can_we_scrape(self, item) -> bool: - return self._is_released(item) and self._needs_new_scrape(item) - - def _is_released(self, item) -> bool: + def _is_released(self, item: MediaItem) -> bool: return item.aired_at is not None and item.aired_at < datetime.now() - def _needs_new_scrape(self, item) -> bool: - scrape_time = 5 # 5 seconds by default + @staticmethod + def should_submit(item: MediaItem) -> bool: + settings = settings_manager.settings.scraping + scrape_time = 5 # 5 seconds by default if item.scraped_times >= 2 and item.scraped_times <= 5: - scrape_time = self.settings.after_2 * 60 * 60 + scrape_time = settings.after_2 * 60 * 60 elif item.scraped_times > 5 and item.scraped_times <= 10: - scrape_time = self.settings.after_5 * 60 * 60 + scrape_time = settings.after_5 * 60 * 60 elif item.scraped_times > 10: - scrape_time = self.settings.after_10 * 60 * 60 - + scrape_time = settings.after_10 * 60 * 60 + return ( - (datetime.now() - item.scraped_at).total_seconds() - > scrape_time - or item.scraped_times == 0 + not item.scraped_at + or (datetime.now() - item.scraped_at).total_seconds() > scrape_time ) diff --git a/backend/program/scrapers/jackett.py b/backend/program/scrapers/jackett.py index 2ce2f6ad..b6650469 100644 --- a/backend/program/scrapers/jackett.py +++ b/backend/program/scrapers/jackett.py @@ -1,36 +1,29 @@ """ Jackett scraper module """ -import traceback -from typing import Optional -from pydantic import BaseModel from requests import ReadTimeout, RequestException from utils.logger import logger -from utils.settings import settings_manager +from program.settings.manager import settings_manager from utils.parser import parser from utils.request import RateLimitExceeded, get, RateLimiter, ping -class JackettConfig(BaseModel): - enabled: bool - url: Optional[str] - api_key: Optional[str] - - class Jackett: """Scraper for `Jackett`""" - def __init__(self, _): + def __init__(self): self.key = "jackett" self.api_key = None - self.settings = JackettConfig(**settings_manager.get(f"scraping.{self.key}")) - self.initialized = self.validate_settings() + self.settings = settings_manager.settings.scraping.jackett + self.initialized = self.validate() if not self.initialized and not self.api_key: return - 
self.minute_limiter = RateLimiter(max_calls=1000, period=3600, raise_on_limit=True) - self.second_limiter = RateLimiter(max_calls=1, period=5) self.parse_logging = False + self.minute_limiter = RateLimiter( + max_calls=1000, period=3600, raise_on_limit=True + ) + self.second_limiter = RateLimiter(max_calls=1, period=1) logger.info("Jackett initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: """Validate Jackett settings.""" if not self.settings.enabled: logger.debug("Jackett is set to disabled.") @@ -71,16 +64,15 @@ def run(self, item): return try: self._scrape_item(item) - except RateLimitExceeded as e: + except RateLimitExceeded: self.minute_limiter.limit_hit() logger.warn("Jackett rate limit hit for item: %s", item.log_string) return except RequestException as e: - logger.debug("Jackett request exception: %s", e, exc_info=True) + logger.debug("Jackett request exception: %s", e) return except Exception as e: - logger.debug("Jackett exception for item: %s - Exception: %s", item.log_string, e.args[0], exc_info=True) - logger.debug("Exception details: %s", traceback.format_exc()) + logger.error("Jackett failed to scrape item: %s", e) return def _scrape_item(self, item): @@ -88,9 +80,21 @@ def _scrape_item(self, item): data, stream_count = self.api_scrape(item) if len(data) > 0: item.streams.update(data) - logger.info("Found %s streams out of %s for %s", len(data), stream_count, item.log_string) + logger.debug( + "Found %s streams out of %s for %s", + len(data), + stream_count, + item.log_string, + ) else: - logger.debug("Could not find streams for %s", item.log_string) + if stream_count > 0: + logger.debug( + "Could not find good streams for %s out of %s", + item.log_string, + stream_count, + ) + else: + logger.debug("No streams found for %s", item.log_string) def api_scrape(self, item): """Wrapper for `Jackett` scrape method""" @@ -98,31 +102,43 @@ def api_scrape(self, item): with self.minute_limiter: query = "" if item.type == "movie": - query = f"&cat=2000,2010,2020,2030,2040,2045,2050,2080&t=movie&q={item.title}&year{item.aired_at.year}" + query = f"cat=2000&t=movie&q={item.title}&year{item.aired_at.year}" if item.type == "season": - query = f"&cat=5000,5010,5020,5030,5040,5045,5050,5060,5070,5080&t=tvsearch&q={item.parent.title}&season={item.number}" + query = f"cat=5000&t=tvsearch&q={item.parent.title}&season={item.number}" if item.type == "episode": - query = f"&cat=5000,5010,5020,5030,5040,5045,5050,5060,5070,5080&t=tvsearch&q={item.parent.parent.title}&season={item.parent.number}&ep={item.number}" - url = f"{self.settings.url}/api/v2.0/indexers/!status:failing,test:passed/results/torznab?apikey={self.api_key}{query}" + query = f"cat=5000&t=tvsearch&q={item.parent.parent.title}&season={item.parent.number}&ep={item.number}" + url = f"{self.settings.url}/api/v2.0/indexers/all/results/torznab?apikey={self.api_key}&{query}" with self.second_limiter: response = get(url=url, retry_if_failed=False, timeout=60) if response.is_ok: data = {} streams = response.data["rss"]["channel"].get("item", []) - parsed_data_list = [parser.parse(item, stream.get("title")) for stream in streams if type(stream) != str] + parsed_data_list = [ + parser.parse(item, stream.get("title")) + for stream in streams + if not isinstance(stream, str) + ] for stream, parsed_data in zip(streams, parsed_data_list): - if type(stream) == str: - logger.debug("Found another string: %s", stream) - continue - if parsed_data.get("fetch", True) and parsed_data.get("title_match", False): + if 
parsed_data.get("fetch", True) and parsed_data.get( + "title_match", False + ): attr = stream.get("torznab:attr", []) - infohash_attr = next((a for a in attr if a.get("@name") == "infohash"), None) + infohash_attr = next( + (a for a in attr if a.get("@name") == "infohash"), None + ) if infohash_attr: infohash = infohash_attr.get("@value") - data[infohash] = {"name": stream.get("title")} - if self.parse_logging: + data[infohash] = { + "name": stream.get("title"), + "cached": None + } + if self.parse_logging: # For debugging parser large data sets for parsed_data in parsed_data_list: - logger.debug("Jackett Fetch: %s - Parsed item: %s", parsed_data["fetch"], parsed_data["string"]) + logger.debug( + "Jackett Fetch: %s - Parsed item: %s", + parsed_data["fetch"], + parsed_data["string"], + ) if data: item.parsed_data.extend(parsed_data_list) return data, len(streams) diff --git a/backend/program/scrapers/orionoid.py b/backend/program/scrapers/orionoid.py index 41665246..1f667db9 100644 --- a/backend/program/scrapers/orionoid.py +++ b/backend/program/scrapers/orionoid.py @@ -1,71 +1,73 @@ """ Orionoid scraper module """ -from typing import Optional -from pydantic import BaseModel +from datetime import datetime from requests import ConnectTimeout from requests.exceptions import RequestException from utils.logger import logger from utils.request import RateLimitExceeded, RateLimiter, get -from utils.settings import settings_manager +from program.settings.manager import settings_manager from utils.parser import parser +from program.media.item import Show, Season, Episode KEY_APP = "D3CH6HMX9KD9EMD68RXRCDUNBDJV5HRR" -class OrionoidConfig(BaseModel): - enabled: bool - api_key: Optional[str] - - class Orionoid: """Scraper for `Orionoid`""" - def __init__(self, _): + def __init__(self): self.key = "orionoid" - self.settings = OrionoidConfig(**settings_manager.get(f"scraping.{self.key}")) + self.settings = settings_manager.settings.scraping.orionoid self.is_premium = False + self.is_unlimited = False self.initialized = False - if self.validate_settings(): + if self.validate(): self.is_premium = self.check_premium() self.initialized = True else: return self.orionoid_limit = 0 - self.orionoid_remaining = 0 + self.orionoid_expiration = datetime.now() self.parse_logging = False self.max_calls = 100 if not self.is_premium else 1000 self.period = 86400 if not self.is_premium else 3600 - self.minute_limiter = RateLimiter(max_calls=self.max_calls, period=self.period, raise_on_limit=True) - self.second_limiter = RateLimiter(max_calls=1, period=5) + self.minute_limiter = RateLimiter( + max_calls=self.max_calls, period=self.period, raise_on_limit=True + ) + self.second_limiter = RateLimiter(max_calls=1, period=1) logger.info("Orionoid initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: """Validate the Orionoid class_settings.""" if not self.settings.enabled: logger.debug("Orionoid is set to disabled.") return False if len(self.settings.api_key) != 32 or self.settings.api_key == "": - logger.error("Orionoid API Key is not valid or not set. Please check your settings.") + logger.error( + "Orionoid API Key is not valid or not set. Please check your settings." + ) return False try: url = f"https://api.orionoid.com?keyapp={KEY_APP}&keyuser={self.settings.api_key}&mode=user&action=retrieve" response = get(url, retry_if_failed=False) if response.is_ok and hasattr(response.data, "result"): if not response.data.result.status == "success": - logger.error(f"Orionoid API Key is invalid. 
Status: {response.data.result.status}") + logger.error( + "Orionoid API Key is invalid. Status: %s", response.data.result.status + ) return False if not response.is_ok: - logger.error(f"Orionoid Status Code: {response.status_code}, Reason: {response.reason}") + logger.error( + "Orionoid Status Code: %s, Reason: %s", response.status_code, response.data.reason + ) return False + self.is_unlimited = True if response.data.data.subscription.package.type == "unlimited" else False return True except Exception as e: logger.exception("Orionoid failed to initialize: %s", e) return False def check_premium(self) -> bool: - """ - Check the user's status with the Orionoid API. - Returns True if the user is active, has a premium account, and has RealDebrid service enabled. - """ + """Check if the user is active, has a premium account, and has RealDebrid service enabled.""" url = f"https://api.orionoid.com?keyapp={KEY_APP}&keyuser={self.settings.api_key}&mode=user&action=retrieve" response = get(url, retry_if_failed=False) if response.is_ok and hasattr(response.data, "data"): @@ -75,41 +77,55 @@ def check_premium(self) -> bool: if active and premium and debrid: logger.info("Orionoid Premium Account Detected.") return True - else: - logger.error(f"Orionoid Free Account Detected.") + else: + logger.error("Orionoid Free Account Detected.") return False - + def run(self, item): """Scrape the Orionoid site for the given media items - and update the object with scraped streams""" - if item is None or not self.initialized: - return + and update the object with scraped streams""" + item.scraped_at = datetime.now() + item.scraped_times += 1 + if item is None or isinstance(item, Show): + yield item try: - self._scrape_item(item) + item = self._scrape_item(item) except ConnectTimeout: self.minute_limiter.limit_hit() logger.warn("Orionoid connection timeout for item: %s", item.log_string) - return except RequestException as e: self.minute_limiter.limit_hit() logger.exception("Orionoid request exception: %s", e) - return except RateLimitExceeded: self.minute_limiter.limit_hit() logger.warn("Orionoid rate limit hit for item: %s", item.log_string) - return except Exception as e: self.minute_limiter.limit_hit() - logger.exception("Orionoid exception for item: %s - Exception: %s", item.log_string, e) - return + logger.exception( + "Orionoid exception for item: %s - Exception: %s", item.log_string, e + ) + yield item def _scrape_item(self, item): data, stream_count = self.api_scrape(item) if len(data) > 0: item.streams.update(data) - logger.info("Found %s streams out of %s for %s", len(data), stream_count, item.log_string) + logger.debug( + "Found %s streams out of %s for %s", + len(data), + stream_count, + item.log_string, + ) else: - logger.debug("Could not find streams for %s", item.log_string) + if stream_count > 0: + logger.debug( + "Could not find good streams for %s out of %s", + item.log_string, + stream_count, + ) + else: + logger.debug("No streams found for %s", item.log_string) + return item def construct_url(self, media_type, imdb_id, season=None, episode=None) -> str: """Construct the URL for the Orionoid API.""" @@ -123,12 +139,17 @@ def construct_url(self, media_type, imdb_id, season=None, episode=None) -> str: "idimdb": imdb_id[2:], "streamtype": "torrent", "filename": "true", - "limitcount": "200" if self.is_premium else "10", + "limitcount": self.settings.limitcount if self.settings.limitcount else 5, "video3d": "false", "sortorder": "descending", - "sortvalue": "best" if self.is_premium else 
"popularity", + "sortvalue": "best" if self.is_premium else "popularity" } + if self.is_unlimited: + # This can use 2x towards your Orionoid limits. Only use if user is unlimited. + params["debridlookup"] = "realdebrid" + params["limitcount"] = 100 + if media_type == "show": params["numberseason"] = season params["numberepisode"] = episode if episode else 1 @@ -138,10 +159,10 @@ def construct_url(self, media_type, imdb_id, season=None, episode=None) -> str: def api_scrape(self, item): """Wrapper for Orionoid scrape method""" with self.minute_limiter: - if item.type == "season": + if isinstance(item, Season): imdb_id = item.parent.imdb_id url = self.construct_url("show", imdb_id, season=item.number) - elif item.type == "episode": + elif isinstance(item, Episode): imdb_id = item.parent.parent.imdb_id url = self.construct_url( "show", imdb_id, season=item.parent.number, episode=item.number @@ -153,27 +174,27 @@ def api_scrape(self, item): with self.second_limiter: response = get(url, retry_if_failed=False, timeout=60) if response.is_ok and hasattr(response.data, "data"): - - # Check and log Orionoid API limits - # self.orionoid_limit = response.data.data.requests.daily.limit - # self.orionoid_remaining = response.data.data.requests.daily.remaining - # if self.orionoid_remaining < 10: - # logger.warning(f"Orionoid API limit is low. Limit: {self.orionoid_limit}, Remaining: {self.orionoid_remaining}") - parsed_data_list = [ parser.parse(item, stream.file.name) for stream in response.data.data.streams if stream.file.hash ] data = { - stream.file.hash: {"name": stream.file.name} + stream.file.hash: { + "name": stream.file.name, + "cached": None + } for stream, parsed_data in zip(response.data.data.streams, parsed_data_list) if parsed_data["fetch"] } - if self.parse_logging: + if self.parse_logging: # For debugging parser large data sets for parsed_data in parsed_data_list: - logger.debug("Orionoid Fetch: %s - Parsed item: %s", parsed_data["fetch"], parsed_data["string"]) + logger.debug( + "Orionoid Fetch: %s - Parsed item: %s", + parsed_data["fetch"], + parsed_data["string"], + ) if data: item.parsed_data.extend(parsed_data_list) return data, len(response.data.data.streams) - return {}, 0 \ No newline at end of file + return {}, 0 diff --git a/backend/program/scrapers/torrentio.py b/backend/program/scrapers/torrentio.py index a47308f2..4431fb15 100644 --- a/backend/program/scrapers/torrentio.py +++ b/backend/program/scrapers/torrentio.py @@ -1,35 +1,31 @@ """ Torrentio scraper module """ -from typing import Optional -from pydantic import BaseModel +from datetime import datetime from requests import ConnectTimeout, ReadTimeout from requests.exceptions import RequestException from utils.logger import logger from utils.request import RateLimitExceeded, get, RateLimiter, ping -from utils.settings import settings_manager +from program.settings.manager import settings_manager from utils.parser import parser - - -class TorrentioConfig(BaseModel): - enabled: bool - url: Optional[str] - filter: Optional[str] - +from program.media.item import Show, Episode, Season +import traceback class Torrentio: """Scraper for `Torrentio`""" - def __init__(self, _): + def __init__(self): self.key = "torrentio" - self.settings = TorrentioConfig(**settings_manager.get(f"scraping.{self.key}")) - self.minute_limiter = RateLimiter(max_calls=300, period=3600, raise_on_limit=True) - self.second_limiter = RateLimiter(max_calls=1, period=5) - self.initialized = self.validate_settings() + self.settings = 
settings_manager.settings.scraping.torrentio + self.minute_limiter = RateLimiter( + max_calls=300, period=3600, raise_on_limit=True + ) + self.second_limiter = RateLimiter(max_calls=1, period=1) + self.initialized = self.validate() if not self.initialized: return self.parse_logging = False logger.info("Torrentio initialized!") - def validate_settings(self) -> bool: + def validate(self) -> bool: """Validate the Torrentio settings.""" if not self.settings.enabled: logger.debug("Torrentio is set to disabled.") @@ -38,7 +34,7 @@ def validate_settings(self) -> bool: logger.error("Torrentio URL is not configured and will not be used.") return False try: - url = f"{self.settings.url}/{self.settings.filter}/stream/movie/tt0068646.json" + url = f"{self.settings.url}/{self.settings.filter}/manifest.json" response = ping(url=url, timeout=10) if response.ok: return True @@ -50,44 +46,61 @@ def validate_settings(self) -> bool: def run(self, item): """Scrape the torrentio site for the given media items and update the object with scraped streams""" - if item is None or not self.initialized: - return + item.scraped_at = datetime.now() + item.scraped_times += 1 + if item is None or isinstance(item, Show): + yield item try: - self._scrape_item(item) + item = self._scrape_item(item) except RateLimitExceeded: self.minute_limiter.limit_hit() - return except ConnectTimeout: + self.minute_limiter.limit_hit() logger.warn("Torrentio connection timeout for item: %s", item.log_string) - return except ReadTimeout: + self.minute_limiter.limit_hit() logger.warn("Torrentio read timeout for item: %s", item.log_string) - return except RequestException as e: + self.minute_limiter.limit_hit() logger.warn("Torrentio request exception: %s", e) - return except Exception as e: - logger.warn("Torrentio exception thrown: %s", e) - return + self.minute_limiter.limit_hit() + logger.warn("Torrentio exception thrown: %s", traceback.format_exc()) + yield item def _scrape_item(self, item): """Scrape torrentio for the given media item""" data, stream_count = self.api_scrape(item) if len(data) > 0: item.streams.update(data) - logger.info("Found %s streams out of %s for %s", len(data), stream_count, item.log_string) + logger.debug( + "Found %s streams out of %s for %s", + len(data), + stream_count, + item.log_string, + ) else: if stream_count > 0: - logger.debug("Could not find good streams for %s out of %s", item.log_string, stream_count) + logger.debug( + "Could not find good streams for %s out of %s", + item.log_string, + stream_count, + ) + else: + logger.debug("No streams found for %s", item.log_string) + return item def api_scrape(self, item): """Wrapper for torrentio scrape method""" with self.minute_limiter: - if item.type == "season": + # Torrentio can't scrape shows + if isinstance(item, Show): + return item + elif isinstance(item, Season): identifier = f":{item.number}:1" scrape_type = "series" imdb_id = item.parent.imdb_id - elif item.type == "episode": + elif isinstance(item, Episode): identifier = f":{item.parent.number}:{item.number}" scrape_type = "series" imdb_id = item.parent.parent.imdb_id @@ -97,7 +110,7 @@ def api_scrape(self, item): imdb_id = item.imdb_id url = ( - f"{self.settings.url}/{self.settings.filter}" + f"{self.settings.url}{self.settings.filter}" + f"/stream/{scrape_type}/{imdb_id}" ) if identifier: @@ -106,16 +119,27 @@ def api_scrape(self, item): response = get(f"{url}.json", retry_if_failed=False, timeout=60) if response.is_ok and len(response.data.streams) > 0: parsed_data_list = [ - parser.parse(item, 
stream.title.split("\n👤")[0].split("\n")[0]) for stream in response.data.streams + parser.parse(item, stream.title.split("\n👤")[0].split("\n")[0]) + for stream in response.data.streams ] data = { - stream.infoHash: {"name": stream.title.split("\n👤")[0].split("\n")[0]} - for stream, parsed_data in zip(response.data.streams, parsed_data_list) - if parsed_data.get("fetch", False) and parsed_data.get("string", False) + stream.infoHash: { + "name": stream.title.split("\n👤")[0].split("\n")[0], + "cached": None + } + for stream, parsed_data in zip( + response.data.streams, parsed_data_list + ) + if parsed_data.get("fetch", False) + and parsed_data.get("string", False) } - if self.parse_logging: + if self.parse_logging: # For debugging parser large data sets for parsed_data in parsed_data_list: - logger.debug("Torrentio Fetch: %s - Parsed item: %s", parsed_data["fetch"], parsed_data["string"]) + logger.debug( + "Torrentio Fetch: %s - Parsed item: %s", + parsed_data["fetch"], + parsed_data["string"], + ) if data: item.parsed_data.extend(parsed_data_list) return data, len(response.data.streams) diff --git a/backend/program/settings/__init__.py b/backend/program/settings/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/backend/program/settings/manager.py b/backend/program/settings/manager.py new file mode 100644 index 00000000..3bd292c7 --- /dev/null +++ b/backend/program/settings/manager.py @@ -0,0 +1,59 @@ +import json +from pydantic import ValidationError +from program.settings.models import AppModel, Observable +from utils import data_dir_path +from utils.logger import logger + + +class SettingsManager(): + """Class that handles settings, ensuring they are validated against a Pydantic schema.""" + + def __init__(self): + self.observers = [] + self.filename = "settings.json" + self.settings_file = data_dir_path / self.filename + + Observable.set_notify_observers(self.notify_observers) + + if not self.settings_file.exists(): + self.settings = AppModel() + self.notify_observers() + else: + self.load() + + def register_observer(self, observer): + self.observers.append(observer) + + def notify_observers(self): + for observer in self.observers: + observer() + + def load(self, settings_dict: dict | None = None): + """Load settings from file, validating against the AppModel schema.""" + try: + if not settings_dict: + with open(self.settings_file, "r", encoding="utf-8") as file: + settings_dict = json.loads(file.read()) + self.settings = AppModel.model_validate(settings_dict) + except ValidationError as e: + logger.error( + f"Error loading settings: {e}, initializing with default settings" + ) + raise + except json.JSONDecodeError as e: + logger.error( + f"Error parsing settings file: {e}, initializing with default settings" + ) + raise + except FileNotFoundError: + logger.error(f"Error loading settings: {self.settings_file} does not exist") + raise + self.notify_observers() + + def save(self): + """Save settings to file, using Pydantic model for JSON serialization.""" + with open(self.settings_file, "w", encoding="utf-8") as file: + file.write(self.settings.model_dump_json(indent=4)) + + +settings_manager = SettingsManager() diff --git a/backend/program/settings/models.py b/backend/program/settings/models.py new file mode 100644 index 00000000..e40d411b --- /dev/null +++ b/backend/program/settings/models.py @@ -0,0 +1,149 @@ +"""Iceberg settings models""" +from pathlib import Path +from pydantic import BaseModel, HttpUrl, validator +from utils import version_file_path + + +class 
Observable(BaseModel): + class Config: + arbitrary_types_allowed = True + + # Assuming _notify_observers is a static method or class-level attribute + _notify_observers = None + + # This method sets the change notifier on the class, not an instance + @classmethod + def set_notify_observers(cls, notify_observers_callable): + cls._notify_observers = notify_observers_callable + + def __setattr__(self, name, value): + super().__setattr__(name, value) + if self.__class__._notify_observers: + self.__class__._notify_observers() + + + + +class DebridModel(Observable): + api_key: str = "" + + +class SymlinkModel(Observable): + rclone_path: Path = Path() + library_path: Path = Path() + + +# Content Services + + +class Updatable(Observable): + update_interval: int = 80 + + @validator('update_interval') + def check_update_interval(cls, v): + if v < (limit := 5): + raise ValueError(f"update_interval must be at least {limit} seconds") + return v + +class PlexLibraryModel(Updatable): + update_interval: int = 120 + token: str = "" + url: str = "http://localhost:32400" + + +class ListrrModel(Updatable): + enabled: bool = False + movie_lists: list[str] = [""] + show_lists: list[str] = [""] + api_key: str = "" + update_interval: int = 300 + + +class MdblistModel(Updatable): + enabled: bool = False + api_key: str = "" + lists: list[str] = [""] + update_interval: int = 300 + + +class OverseerrModel(Updatable): + enabled: bool = False + url: str = "http://localhost:5055" + api_key: str = "" + update_interval: int = 60 + + +class PlexWatchlistModel(Updatable): + enabled: bool = False + rss: str = "" + update_interval: int = 60 + + +class ContentModel(Observable): + listrr: ListrrModel = ListrrModel() + mdblist: MdblistModel = MdblistModel() + overseerr: OverseerrModel = OverseerrModel() + plex_watchlist: PlexWatchlistModel = PlexWatchlistModel() + + +# Scraper Services + + +class JackettConfig(Observable): + enabled: bool = False + url: str = "http://localhost:9117" + api_key: str = "" + + +class OrionoidConfig(Observable): + enabled: bool = False + api_key: str = "" + limitcount: int = 5 + + +class TorrentioConfig(Observable): + enabled: bool = False + filter: str = "sort=qualitysize%7Cqualityfilter=480p,scr,cam" + url: HttpUrl = "https://torrentio.strem.fun" + + +class ScraperModel(Observable): + after_2: float = 2 + after_5: int = 6 + after_10: int = 24 + jackett: JackettConfig = JackettConfig() + orionoid: OrionoidConfig = OrionoidConfig() + torrentio: TorrentioConfig = TorrentioConfig() + + +class ParserModel(Observable): + highest_quality: bool = False + include_4k: bool = False + repack_proper: bool = True + language: list[str] = ["English"] + + +# Application Settings + +class IndexerModel(Observable): + update_interval: int = 60 * 60 + + +def get_version() -> str: + with open(version_file_path.resolve()) as file: + return file.read() + + +class AppModel(Observable): + version: str = get_version() + debug: bool = True + log: bool = True + plex: PlexLibraryModel = PlexLibraryModel() + real_debrid: DebridModel = DebridModel() + symlink: SymlinkModel = SymlinkModel() + content: ContentModel = ContentModel() + scraping: ScraperModel = ScraperModel() + parser: ParserModel = ParserModel() + indexer: IndexerModel = IndexerModel() + + diff --git a/backend/program/state_transision.py b/backend/program/state_transision.py new file mode 100644 index 00000000..1eedca1b --- /dev/null +++ b/backend/program/state_transision.py @@ -0,0 +1,99 @@ +from program.content import Overseerr, PlexWatchlist, Listrr, Mdblist +from 
program.indexers.trakt import TraktIndexer +from program.libaries import SymlinkLibrary +from program.realdebrid import Debrid +from program.scrapers import Scraping +from program.symlink import Symlinker +from program.updaters.plex import PlexUpdater + +from program.types import ProcessedEvent, Service +from program.media import MediaItem, Season, Episode, Show, Movie, States +from utils.logger import logger + + +def process_event(existing_item: MediaItem | None, emitted_by: Service, item: MediaItem) -> ProcessedEvent: + """Take the input event, process it, and output items to submit to a Service, and an item + to update the container with.""" + next_service : Service = None + updated_item = item + no_further_processing: ProcessedEvent = (None, None, []) + # we always want to get metadata for content items before we compare to the container. + # we can't just check if the show exists we have to check if it's complete + source_services = (Overseerr, PlexWatchlist, Listrr, Mdblist, SymlinkLibrary) + if emitted_by in source_services or item.state == States.Unknown: + next_service = TraktIndexer + # seasons can't be indexed so we'll index and process the show instead + if isinstance(item, Season): + item = item.parent + existing_item = existing_item.parent if existing_item else None + # if we already have a copy of this item check if we even need to index it + if existing_item and not TraktIndexer.should_submit(existing_item): + # ignore this item + return no_further_processing + # don't update the container until we've indexed the item + return None, next_service, [item] + elif emitted_by == TraktIndexer or item.state == States.Indexed: + next_service = Scraping + if existing_item: + if not existing_item.indexed_at: + # merge our fresh metadata item to make sure there aren't any + # missing seasons or episodes in our library copy + if isinstance(item, (Show, Season)): + existing_item.fill_in_missing_children(item) + # merge in the metadata in case its missing (like on cold boot) + existing_item.copy_other_media_attr(item) + # update the timestamp now that we have new metadata + existing_item.indexed_at = item.indexed_at + # use the merged data for the rest of the state transition + updated_item = item = existing_item + + # if after filling in missing episodes we are still complete then skip + if existing_item.state == States.Completed: + # make sure to update with the (potentially) newly merged item + return existing_item, None, [] + # we attempted to scrape it already and it failed, so try scraping each component + if item.scraped_times and isinstance(item, (Show, Season)): + if isinstance(item, Show): + items_to_submit = [s for s in item.seasons if s.state != States.Completed] + elif isinstance(item, Season): + items_to_submit = [e for e in item.episodes if e.state != States.Completed] + elif Scraping.should_submit(item): + items_to_submit = [item] + else: + items_to_submit = [] + # Only shows and seasons can be PartiallyCompleted. 
This is also the last part of the state + # processing that can can be at the show level + elif item.state == States.PartiallyCompleted: + next_service = Scraping + if isinstance(item, Show): + items_to_submit = [s for s in item.seasons if s.state != States.Completed] + elif isinstance(item, Season): + items_to_submit = [e for e in item.episodes if e.state != States.Completed] + # if we successfully scraped the item then send it to debrid + elif item.state == States.Scraped: + next_service = Debrid + items_to_submit = [item] + elif item.state == States.Downloaded: + next_service = Symlinker + if isinstance(item, Season): + proposed_submissions = [e for e in item.episodes] + elif isinstance(item, (Movie, Episode)): + proposed_submissions = [item] + items_to_submit = [] + for item in proposed_submissions: + if not Symlinker.should_submit(item): + logger.error("Item %s rejected by Symlinker, skipping", item.log_string) + else: + items_to_submit.append(item) + elif item.state == States.Symlinked: + next_service = PlexUpdater + if isinstance(item, Show): + items_to_submit = [s for s in item.seasons] + elif isinstance(item, Season): + items_to_submit = [e for e in item.episodes] + else: + items_to_submit = [item] + elif item.state == States.Completed: + return no_further_processing + + return updated_item, next_service, items_to_submit \ No newline at end of file diff --git a/backend/program/symlink.py b/backend/program/symlink.py index 53cd153f..29118c85 100644 --- a/backend/program/symlink.py +++ b/backend/program/symlink.py @@ -1,107 +1,176 @@ """Symlinking module""" import os +from datetime import datetime from pathlib import Path -from typing import NamedTuple -from pydantic import BaseModel -from utils.settings import settings_manager as settings +from watchdog.observers import Observer +from watchdog.events import FileSystemEventHandler from utils.logger import logger +from program.settings.manager import settings_manager +from program.media.item import Movie, Episode -class SymlinkConfig(BaseModel): - host_path: Path - container_path: Path -class Setting(NamedTuple): - key: str - value: str +class DeleteHandler(FileSystemEventHandler): + """Handles the deletion of symlinks.""" -class Symlinker(): + def __init__(self, symlinker): + super().__init__() + self.symlinker = symlinker + + def on_deleted(self, event): + """Called when a file or directory is deleted.""" + if event.src_path: + # TODO: Check if its a file or directory and handle accordingly. + # This is getting called for the file + directory as well.. + # It will first get called on the file, then the parent folder. + # This is not what we want.. but atleast it's a start. + self.symlinker.on_symlink_deleted(event.src_path) + + +class Symlinker: """ A class that represents a symlinker thread. - Attributes: - media_items (MediaItemContainer): The container of media items. - running (bool): Flag indicating if the thread is running. - cache (dict): A dictionary to cache file paths. - container_path (str): The absolute path of the container mount. - host_path (str): The absolute path of the host mount. - symlink_path (str): The path where the symlinks will be created. + Settings Attributes: + rclone_path (str): The absolute path of the rclone mount root directory. + library_path (str): The absolute path of the location we will create our symlinks that point to the rclone_path. 
""" - def __init__(self, _): + + def __init__(self): self.key = "symlink" - self.settings = SymlinkConfig(**settings.get(self.key)) + self.settings = settings_manager.settings.symlink + self.rclone_path = self.settings.rclone_path self.initialized = self.validate() if not self.initialized: logger.error("Symlink initialization failed due to invalid configuration.") return - logger.info("Rclone path symlinks are pointed to: %s", self.settings.host_path) - logger.info("Symlinks will be placed in: %s", self.library_path) + if self.initialized: + self.start_monitor() + logger.info("Rclone path symlinks are pointed to: %s", self.rclone_path) + logger.info("Symlinks will be placed in: %s", self.settings.library_path) logger.info("Symlink initialized!") - self.initialized = True def validate(self): """Validate paths and create the initial folders.""" - host_path = Path(self.settings.host_path) if self.settings.host_path else None - container_path = Path(self.settings.container_path) if self.settings.container_path else None - if not host_path or not container_path or host_path == Path('.') or container_path == Path('.'): - logger.error("Host or container path not provided, is empty, or is set to the current directory.") + library_path = self.settings.library_path + if ( + not self.rclone_path + or not library_path + or self.rclone_path == Path(".") + or library_path == Path(".") + ): + logger.error( + "rclone_path or library_path not provided, is empty, or is set to the current directory." + ) return False - if not host_path.is_absolute(): - logger.error(f"Host path is not an absolute path: {host_path}") + if not self.rclone_path.is_absolute(): + logger.error("rclone_path is not an absolute path: %s", self.rclone_path) return False - if not container_path.is_absolute(): - logger.error(f"Container path is not an absolute path: {container_path}") + if not library_path.is_absolute(): + logger.error("library_path is not an absolute path: %s", library_path) return False try: - if not host_path.is_dir(): - logger.error(f"Host path is not a directory or does not exist: {host_path}") - return False - if not container_path.is_dir(): - logger.error(f"Container path is not a directory or does not exist: {container_path}") - return False - if Path(self.settings.host_path / "__all__").exists() and Path(self.settings.host_path / "__all__").is_dir(): - logger.debug("Detected Zurg host path. Using __all__ folder for host path.") - self.settings.host_path = self.settings.host_path / "__all__" - elif Path(self.settings.host_path / "torrents").exists() and Path(self.settings.host_path / "torrents").is_dir(): - logger.debug("Detected standard rclone host path. Using torrents folder for host path.") - self.settings.host_path = self.settings.host_path / "torrents" + if ( + all_path := self.settings.rclone_path / "__all__" + ).exists() and all_path.is_dir(): + logger.debug( + "Detected Zurg rclone_path. Using __all__ folder for rclone_path." + ) + self.rclone_path = all_path + elif ( + torrent_path := self.settings.rclone_path / "torrents" + ).exists() and torrent_path.is_dir(): + logger.debug( + "Detected standard rclone_path. Using torrents folder for rclone_path." + ) + self.rclone_path = torrent_path if not self.create_initial_folders(): - logger.error("Failed to create initial library folders.") + logger.error( + "Failed to create initial library folders in your library_path." 
+ ) + return False + return True except FileNotFoundError as e: - logger.error(f"Path not found: {e}") + logger.error("Path not found: %s", e) except PermissionError as e: - logger.error(f"Permission denied when accessing path: {e}") + logger.error("Permission denied when accessing path: %s", e) except OSError as e: - logger.error(f"OS error when validating paths: {e}") + logger.error("OS error when validating paths: %s", e) return False + def start_monitor(self): + """Starts monitoring the library path for symlink deletions.""" + self.event_handler = DeleteHandler(self) + self.observer = Observer() + self.observer.schedule(self.event_handler, self.settings.library_path, recursive=True) + self.observer.start() + logger.debug("Started monitoring for symlink deletions.") + + def stop_monitor(self): + """Stops the directory monitoring.""" + if hasattr(self, 'observer'): + self.observer.stop() + self.observer.join() + logger.debug("Stopped monitoring for symlink deletions.") + + def on_symlink_deleted(self, path): + """Handle a symlink deletion event.""" + logger.debug(f"Detected deletion of symlink: {path}") + # TODO: Implement logic to handle deletion.. + # We should use `update_path` to determine the item, + # and work with the item (instead of path) to remove from content services.. + # Need to bring in media_items from the main program and remove the item from it.. + def create_initial_folders(self): """Create the initial library folders.""" try: - self.library_path = self.settings.container_path.parent / "library" - self.library_path_movies = self.library_path / "movies" - self.library_path_shows = self.library_path / "shows" - self.library_path_anime_movies = self.library_path / "anime_movies" - self.library_path_anime_shows = self.library_path / "anime_shows" - folders = [self.library_path_movies, - self.library_path_shows, - self.library_path_anime_movies, - self.library_path_anime_shows] + self.library_path_movies = self.settings.library_path / "movies" + self.library_path_shows = self.settings.library_path / "shows" + self.library_path_anime_movies = self.settings.library_path / "anime_movies" + self.library_path_anime_shows = self.settings.library_path / "anime_shows" + folders = [ + self.library_path_movies, + self.library_path_shows, + self.library_path_anime_movies, + self.library_path_anime_shows, + ] for folder in folders: if not folder.exists(): folder.mkdir(parents=True, exist_ok=True) except PermissionError as e: - logger.error(f"Permission denied when creating directory: {e}") + logger.error("Permission denied when creating directory: %s", e) return False except OSError as e: - logger.error(f"OS error when creating directory: {e}") + logger.error("OS error when creating directory: %s", e) return False return True def run(self, item): - self._run(item) + """Check if the media item exists and create a symlink if it does""" + found = False + rclone_path = Path(self.settings.rclone_path) + if os.path.exists(rclone_path / item.folder / item.file): + found = True + elif os.path.exists(rclone_path / item.alternative_folder / item.file): + item.set("folder", item.alternative_folder) + found = True + elif os.path.exists(rclone_path / item.file / item.file): + item.set("folder", item.file) + found = True + if found: + self._symlink(item) + else: + logger.error( + "Could not find %s in subdirectories of %s to create symlink," + " maybe it failed to download?", item.log_string, rclone_path + ) + item.symlinked_at = datetime.now() + item.symlinked_times += 1 + yield item + + @staticmethod + def 
+ @staticmethod + def should_submit(item): + return item.symlinked_times < 3 def _determine_file_name(self, item): """Determine the filename of the symlink.""" @@ -124,34 +193,20 @@ def _determine_file_name(self, item): filename = f"{showname} ({showyear}) - s{str(item.parent.number).zfill(2)}{episode_string} - {item.title}" return filename - def _run(self, item): - """Check if the media item exists and create a symlink if it does""" - found = False - if os.path.exists(os.path.join(self.settings.host_path, item.folder, item.file)): - found = True - elif os.path.exists(os.path.join(self.settings.host_path, item.alternative_folder, item.file)): - item.set("folder", item.alternative_folder) - found = True - elif os.path.exists(os.path.join(self.settings.host_path, item.file, item.file)): - item.set("folder", item.file) - found = True - if found: - self._symlink(item) - def _symlink(self, item): """Create a symlink for the given media item""" + # Symlinks get created on host as: destination -> source extension = item.file.split(".")[-1] symlink_filename = f"{self._determine_file_name(item)}.{extension}" destination = self._create_item_folders(item, symlink_filename) + source = os.path.join(self.rclone_path, item.folder, item.file) if destination: try: os.remove(destination) except FileNotFoundError: pass os.symlink( - os.path.join(self.settings.container_path, item.folder, item.file), + source, destination, ) logger.debug("Created symlink for %s", item.log_string) @@ -159,35 +214,41 @@ def _symlink(self, item): else: logger.debug( "Could not create symlink for item_id (%s) to (%s)", - item.id, + item.item_id, destination, ) def _create_item_folders(self, item, filename) -> str: - if item.type == "movie": + if isinstance(item, Movie): movie_folder = ( - f"{item.title.replace('/', '-')} ({item.aired_at.year}) " + "{imdb-" + item.imdb_id + "}" + f"{item.title.replace('/', '-')} ({item.aired_at.year}) " + + "{imdb-" + + item.imdb_id + + "}" ) destination_folder = os.path.join(self.library_path_movies, movie_folder) if not os.path.exists(destination_folder): os.mkdir(destination_folder) - destination_path = os.path.join(destination_folder, filename.replace('/', '-')) + destination_path = os.path.join( + destination_folder, filename.replace("/", "-") + ) item.set( "update_folder", os.path.join(self.library_path_movies, movie_folder) ) - if item.type == "episode": + elif isinstance(item, Episode): show = item.parent.parent folder_name_show = ( - f"{show.title.replace('/', '-')} ({show.aired_at.year})" + " {" + show.imdb_id + "}" + f"{show.title.replace('/', '-')} ({show.aired_at.year})" + + " {" + + show.imdb_id + + "}" ) show_path = os.path.join(self.library_path_shows, folder_name_show) - if not os.path.exists(show_path): - os.mkdir(show_path) + os.makedirs(show_path, exist_ok=True) season = item.parent folder_season_name = f"Season {str(season.number).zfill(2)}" season_path = os.path.join(show_path, folder_season_name) - if not os.path.exists(season_path): - os.mkdir(season_path) - destination_path = os.path.join(season_path, filename.replace('/', '-')) + os.makedirs(season_path, exist_ok=True) + destination_path = os.path.join(season_path, filename.replace("/", "-")) item.set("update_folder", os.path.join(season_path)) return destination_path diff --git a/backend/program/types.py b/backend/program/types.py new file mode 100644 index 00000000..e8f830f5 --- /dev/null +++ b/backend/program/types.py @@ -0,0 +1,21 @@ +from dataclasses import dataclass +from typing import Union, Generator + +from program.content import
Overseerr, PlexWatchlist, Listrr, Mdblist +from program.media.item import MediaItem +from program.libaries import SymlinkLibrary +from program.realdebrid import Debrid +from program.scrapers import Scraping, Torrentio, Orionoid, Jackett +from program.symlink import Symlinker + +# Typehint classes +Scraper = Union[Scraping, Torrentio, Orionoid, Jackett] +Content = Union[Overseerr, PlexWatchlist, Listrr, Mdblist] +Service = Union[Content, SymlinkLibrary, Scraper, Debrid, Symlinker] +MediaItemGenerator = Generator[MediaItem, None, MediaItem | None] +ProcessedEvent = tuple[MediaItem, Service, list[MediaItem]] + +@dataclass +class Event: + emitted_by: Service + item: MediaItem diff --git a/backend/program/updaters/__init__.py b/backend/program/updaters/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/backend/program/updaters/plex.py b/backend/program/updaters/plex.py new file mode 100644 index 00000000..c709590c --- /dev/null +++ b/backend/program/updaters/plex.py @@ -0,0 +1,50 @@ +"""Plex Updater module""" +import os +from plexapi.server import PlexServer +from plexapi.exceptions import BadRequest, Unauthorized +from utils.logger import logger +from program.settings.manager import settings_manager +from program.media.item import Episode + + +class PlexUpdater: + def __init__(self): + self.key = "plexupdater" + self.initialized = False + self.library_path = os.path.abspath( + os.path.dirname(settings_manager.settings.symlink.library_path) + ) + try: + self.settings = settings_manager.settings.plex + self.plex = PlexServer(self.settings.url, self.settings.token, timeout=60) + except Unauthorized: + logger.error("Plex is not authorized!") + return + except BadRequest as e: + logger.error("Plex is not configured correctly: %s", e) + return + except Exception as e: + logger.error("Plex exception thrown: %s", e) + return + self.initialized = True + + def run(self, item): + """Update Plex library section for a single item""" + item_type = "show" if isinstance(item, Episode) else "movie" + for section in self.plex.library.sections(): + if section.type != item_type: + continue + + if self._update_section(section, item): + logger.debug( + "Updated section %s for %s", section.title, item.log_string + ) + yield item + + def _update_section(self, section, item): + if item.symlinked and item.get("update_folder") != "updated": + update_folder = item.update_folder + section.update(str(update_folder)) + item.set("update_folder", "updated") + return True + return False \ No newline at end of file diff --git a/backend/program/updaters/trakt.py b/backend/program/updaters/trakt.py deleted file mode 100644 index 8009fa60..00000000 --- a/backend/program/updaters/trakt.py +++ /dev/null @@ -1,194 +0,0 @@ -"""Trakt updater module""" -import math -import concurrent.futures -from datetime import datetime -from os import path -from utils.logger import get_data_path, logger -from utils.request import get -from program.media.container import MediaItemContainer -from program.media.item import Movie, Show, Season, Episode - -CLIENT_ID = "0183a05ad97098d87287fe46da4ae286f434f32e8e951caad4cc147c947d79a3" - - -class Updater: - """Trakt updater class""" - - def __init__(self): - self.trakt_data = MediaItemContainer() - self.pkl_file = path.join(get_data_path(), "trakt_data.pkl") - self.ids = [] - - def create_items(self, imdb_ids): - """Update media items to state where they can start downloading""" - if len(imdb_ids) == 0: - return MediaItemContainer() - - self.trakt_data.load(self.pkl_file) - new_items =
MediaItemContainer() - get_items = MediaItemContainer() - - existing_imdb_ids = {item.imdb_id for item in self.trakt_data.items if item} - - # This is to calculate 10% batch sizes to speed up the process - batch_size = math.ceil(len(imdb_ids) * 0.1) or 1 - imdb_id_batches = [imdb_ids[i:i + batch_size] for i in range(0, len(imdb_ids), batch_size)] - - with concurrent.futures.ThreadPoolExecutor() as executor: - for imdb_id_batch in imdb_id_batches: - future_items = {executor.submit(self._create_item, imdb_id): imdb_id for imdb_id in imdb_id_batch if imdb_id not in existing_imdb_ids or imdb_id is not None} - for future in concurrent.futures.as_completed(future_items): - item = future.result() - if item: - new_items += item - get_items.append(item) - - for imdb_id in imdb_ids: - if imdb_id in existing_imdb_ids: - get_items.append(self.trakt_data.get_item("imdb_id", imdb_id)) - - added_items = self.trakt_data.extend(new_items) - length = len(added_items) - if length >= 1 and length <= 5: - for item in added_items: - logger.debug("Updated metadata for %s", item.log_string) - elif length > 5: - logger.debug("Updated metadata for %s items", len(added_items)) - if length > 0: - self.trakt_data.extend(added_items) - self.trakt_data.save(self.pkl_file) - return get_items - - def _create_item(self, imdb_id): - item = create_item_from_imdb_id(imdb_id) - if item is None: - return None - if item and item.type == "show": - seasons = get_show(imdb_id) - for season in seasons: - if season.number != 0: - new_season = _map_item_from_data(season, "season") - for episode in season.episodes: - new_episode = _map_item_from_data(episode, "episode") - new_season.add_episode(new_episode) - item.add_season(new_season) - return item - - -def _map_item_from_data(data, item_type): - """Map trakt.tv API data to MediaItemContainer""" - if item_type not in ["movie", "show", "season", "episode"]: - logger.debug("Unknown item type %s for %s not found in list of acceptable objects", item_type, data.title) - return None - formatted_aired_at = None - if getattr(data, "first_aired", None) and ( - item_type == "show" - or (item_type == "season" and data.aired_episodes == data.episode_count) - or item_type == "episode" - ): - aired_at = data.first_aired - formatted_aired_at = datetime.strptime(aired_at, "%Y-%m-%dT%H:%M:%S.%fZ") - if getattr(data, "released", None): - released_at = data.released - formatted_aired_at = datetime.strptime(released_at, "%Y-%m-%d") - is_anime = "anime" in getattr(data, "genres", []) - item = { - "title": getattr(data, "title", None), # 'Game of Thrones' - "year": getattr(data, "year", None), # 2011 - "status": getattr(data, "status", None), # 'ended', 'released', 'returning series' - "aired_at": formatted_aired_at, # datetime.datetime(2011, 4, 17, 0, 0) # True" - "imdb_id": getattr(data.ids, "imdb", None), # 'tt0496424' - "tvdb_id": getattr(data.ids, "tvdb", None), # 79488 - "tmdb_id": getattr(data.ids, "tmdb", None), # 1399 - "genres": getattr(data, "genres", None), # ['Action', 'Adventure', 'Drama', 'Fantasy'] - "network": getattr(data, "network", None), # 'HBO' - "country": getattr(data, "country", None), # 'US' - "language": getattr(data, "language", None), # 'en' - "requested_at": datetime.now(), # datetime.datetime(2021, 4, 17, 0, 0) - } - - match item_type: - case "movie": - item["is_anime"] = is_anime - return_item = Movie(item) - case "show": - item["is_anime"] = is_anime - return_item = Show(item) - case "season": - item["number"] = getattr(data, "number") - return_item = Season(item) - case 
"episode": - item["number"] = getattr(data, "number") - return_item = Episode(item) - case _: - logger.debug("Unknown item type %s for %s", item_type, data.title) - return_item = None - return return_item - - -# API METHODS - -def get_show(imdb_id: str): - """Wrapper for trakt.tv API show method""" - url = f"https://api.trakt.tv/shows/{imdb_id}/seasons?extended=episodes,full" - response = get( - url, - additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, - ) - if response.is_ok: - if response.data: - return response.data - return [] - -def create_item_from_imdb_id(imdb_id: str): - """Wrapper for trakt.tv API search method""" - if imdb_id is None: - logger.debug("Unable to create item from IMDb ID. No IMDb ID provided.") - return - url = f"https://api.trakt.tv/search/imdb/{imdb_id}?extended=full" - response = get( - url, - additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, - ) - if response.is_ok and len(response.data) > 0: - try: - media_type = response.data[0].type - if media_type == "movie": - data = response.data[0].movie - elif media_type == "show": - data = response.data[0].show - elif media_type == "season": - data = response.data[0].season - elif media_type == "episode": - data = response.data[0].episode - if data: - return _map_item_from_data(data, media_type) - except UnboundLocalError: - logger.error("Unknown item %s with response %s", imdb_id, response) - return - logger.error("Unable to create item from IMDb ID %s", imdb_id) - return - -def get_imdbid_from_tvdb(tvdb_id: str) -> str: - """Get IMDb ID from TVDB ID in Trakt""" - url = f"https://api.trakt.tv/search/tvdb/{tvdb_id}?extended=full" - response = get( - url, - additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, - ) - if response.is_ok and len(response.data) > 0: - # noticing there are multiple results for some TVDB IDs - # TODO: Need to check item.type and compare to the resulting types.. 
- return response.data[0].show.ids.imdb - return None - -def get_imdbid_from_tmdb(tmdb_id: str) -> str: - """Get IMDb ID from TMDB ID in Trakt""" - url = f"https://api.trakt.tv/search/tmdb/{tmdb_id}?extended=full" - response = get( - url, - additional_headers={"trakt-api-version": "2", "trakt-api-key": CLIENT_ID}, - ) - if response.is_ok and len(response.data) > 0: - return response.data[0].movie.ids.imdb - return None \ No newline at end of file diff --git a/backend/tests/test_container.py b/backend/tests/test_container.py new file mode 100644 index 00000000..04ffc718 --- /dev/null +++ b/backend/tests/test_container.py @@ -0,0 +1,62 @@ +import pytest +from program.media.container import MediaItemContainer +from program.media.item import Show, Season, Episode + +# Fixture to setup a MediaItemContainer +@pytest.fixture +def container(): + return MediaItemContainer() + +@pytest.fixture +def test_show(): + # Setup Show with a Season and an Episode + show = Show({'imdb_id': 'tt1405406'}) + season = Season({'number': 1}) + episode = Episode({'number': 1}) + season.add_episode(episode) + show.add_season(season) + return show + +def test_upsert_episode_modification_reflects_in_parent_season(container, test_show): + # Upsert the show with its season and episode + container.upsert(test_show) + + modified_episode = test_show.seasons[0].episodes[0] + + # Modify an attribute of the copied episode + modified_attribute_value = "Modified Value" + modified_episode.some_attribute = modified_attribute_value + + # Upsert the modified episode + container.upsert(modified_episode) + + # Fetch the season from the container to check if it contains the updated episode data + container_season = container._items[modified_episode.item_id.parent_id] + container_episode = container._items[modified_episode.item_id] + + # Verify that the modified episode's attribute is updated in the container + assert container_episode.some_attribute == modified_attribute_value + # Verify that the season in the container now points to the updated episode + assert container_season.episodes[container_episode.number - 1].some_attribute == modified_attribute_value + +def test_upsert_season_modification_reflects_in_parent_show(container, test_show): + container.upsert(test_show) + # Select a season to modify + modified_season = test_show.seasons[0] + + # Modify an attribute of the season + modified_attribute_value = "Modified Season Attribute" + modified_season.some_attribute = modified_attribute_value + + # Upsert the modified season + container.upsert(modified_season) + + # Fetch the show from the container to check if it contains the updated season data + container_show = container._items[test_show.item_id] + # Since the season was replaced with an ID reference, fetch the season directly from the container + container_season = container._items[modified_season.item_id] + + # Verify that the modified season's attribute is updated in the container + assert container_season.some_attribute == modified_attribute_value + # Verify that the show in the container now references the updated season + assert container_show.seasons[container_season.number - 1].some_attribute == modified_attribute_value \ No newline at end of file diff --git a/backend/tests/test_items.py b/backend/tests/test_items.py index ea76c37f..f4d8a58c 100644 --- a/backend/tests/test_items.py +++ b/backend/tests/test_items.py @@ -1,6 +1,6 @@ from starlette.testclient import TestClient from fastapi import FastAPI -from program.media.state import MediaItemStates +from 
program.media.state import States import controllers.items as items from program.media.container import MediaItemContainer from unittest.mock import MagicMock @@ -8,7 +8,7 @@ app = FastAPI() app.include_router(items.router) app.program = MagicMock() -app.program.media_items = MediaItemContainer(items=[]) +app.program.media_items = MediaItemContainer() client = TestClient(app) @@ -18,7 +18,7 @@ def test_get_states(): assert response.status_code == 200 assert response.json() == { "success": True, - "states": [state.value for state in MediaItemStates], + "states": [state.value for state in States], } @@ -26,5 +26,5 @@ def test_get_items(): response = client.get("/items/") assert response.status_code == 200 assert isinstance(response.json(), dict) - assert response.json()["success"] == True + assert response.json()["success"] is True assert isinstance(response.json()["items"], list) diff --git a/backend/tests/test_parser.py b/backend/tests/test_parser.py index c745be0d..04b045a6 100644 --- a/backend/tests/test_parser.py +++ b/backend/tests/test_parser.py @@ -6,49 +6,63 @@ def parser(): return Parser() + # Test parser def test_fetch_with_movie(parser): # Use mocked movie item in parser test parsed_data = parser.parse(item=None, string="Inception 2010 1080p BluRay x264") - assert parsed_data["fetch"] == True + assert parsed_data["fetch"] is True # Add more assertions as needed + def test_fetch_with_episode(parser): # Use mocked episode item in parser test parsed_data = parser.parse(item=None, string="Breaking Bad S01E01 720p BluRay x264") - assert parsed_data["fetch"] == True + assert parsed_data["fetch"] is True # Add more assertions as needed + def test_parse_resolution_4k(parser): - parsed_data = parser.parse(item=None, string="Movie.Name.2018.2160p.UHD.BluRay.x265") - assert parsed_data["is_4k"] == True + parsed_data = parser.parse( + item=None, string="Movie.Name.2018.2160p.UHD.BluRay.x265" + ) + assert parsed_data["is_4k"] is True assert parsed_data["resolution"] == "2160p" + def test_parse_resolution_1080p(parser): parsed_data = parser.parse(item=None, string="Another.Movie.2019.1080p.WEB-DL.x264") - assert parsed_data["is_4k"] == False + assert parsed_data["is_4k"] is False assert parsed_data["resolution"] == "1080p" + def test_parse_dual_audio_present(parser): - parsed_data = parser.parse(item=None, string="Series S01E01 720p BluRay x264 Dual-Audio") - assert parsed_data["is_dual_audio"] == True + parsed_data = parser.parse( + item=None, string="Series S01E01 720p BluRay x264 Dual-Audio" + ) + assert parsed_data["is_dual_audio"] is True + def test_parse_dual_audio_absent(parser): parsed_data = parser.parse(item=None, string="Series S01E02 720p BluRay x264") - assert parsed_data["is_dual_audio"] == False + assert parsed_data["is_dual_audio"] is False + def test_parse_complete_series_detected(parser): parsed_data = parser.parse(item=None, string="The Complete Series Box Set 1080p") - assert parsed_data["is_complete"] == True + assert parsed_data["is_complete"] is True + def test_parse_complete_series_not_detected(parser): parsed_data = parser.parse(item=None, string="Single.Movie.2020.1080p.BluRay") - assert parsed_data["is_complete"] == False + assert parsed_data["is_complete"] is False + def test_parse_unwanted_quality_detected(parser): parsed_data = parser.parse(item=None, string="Low.Quality.Movie.CAM.2020") - assert parsed_data["is_unwanted_quality"] == True + assert parsed_data["is_unwanted_quality"] is True + def test_parse_unwanted_quality_not_detected(parser): parsed_data = 
parser.parse(item=None, string="High.Quality.Movie.1080p.2020") - assert parsed_data["is_unwanted_quality"] == False + assert parsed_data["is_unwanted_quality"] is False diff --git a/backend/utils/__init__.py b/backend/utils/__init__.py index e69de29b..212c535b 100644 --- a/backend/utils/__init__.py +++ b/backend/utils/__init__.py @@ -0,0 +1,7 @@ +from pathlib import Path + + +root_dir = Path(__file__).resolve().parents[2] + +data_dir_path = root_dir / "data" +version_file_path = root_dir / "VERSION" diff --git a/backend/utils/default_settings.json b/backend/utils/default_settings.json deleted file mode 100644 index 49551b00..00000000 --- a/backend/utils/default_settings.json +++ /dev/null @@ -1,66 +0,0 @@ -{ - "version": "0.4.5", - "debug": true, - "log": true, - "symlink": { - "host_path": "", - "container_path": "" - }, - "real_debrid": { - "api_key": "" - }, - "plex": { - "token": "", - "url": "http://localhost:32400" - }, - "content": { - "plex_watchlist": { - "enabled": false, - "rss": "", - "update_interval": 80 - }, - "mdblist": { - "enabled": false, - "lists": [""], - "api_key": "", - "update_interval": 80 - }, - "listrr": { - "enabled": false, - "movie_lists": [""], - "show_lists": [""], - "api_key": "", - "update_interval": 80 - }, - "overseerr": { - "enabled": false, - "url": "http://localhost:5055", - "api_key": "" - } - }, - "scraping": { - "after_2": 0.5, - "after_5": 2, - "after_10": 24, - "torrentio": { - "enabled": false, - "url": "https://torrentio.strem.fun", - "filter": "sort=qualitysize%7Cqualityfilter=480p,scr,cam" - }, - "orionoid": { - "enabled": false, - "api_key": "" - }, - "jackett": { - "enabled": false, - "url": "http://localhost:9117", - "api_key": "" - } - }, - "parser": { - "language": ["English"], - "include_4k": false, - "highest_quality": false, - "repack_proper": true - } -} diff --git a/backend/utils/logger.py b/backend/utils/logger.py index 14cd23a3..eaa96f92 100644 --- a/backend/utils/logger.py +++ b/backend/utils/logger.py @@ -3,13 +3,8 @@ import logging import os import re -import sys -from .settings import settings_manager as settings - -def get_data_path(): - main_dir = os.path.dirname(os.path.abspath(sys.modules["__main__"].__file__)) - return os.path.join(os.path.dirname(main_dir), "data") +from utils import data_dir_path class RedactSensitiveInfo(logging.Filter): @@ -62,40 +57,49 @@ class Logger(logging.Logger): """Logging class""" def __init__(self): - timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") - file_name = f"iceberg-{timestamp}.log" - data_path = get_data_path() - - super().__init__(file_name) - formatter = logging.Formatter( + self.timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") + self.filename = f"iceberg-{self.timestamp}.log" + super().__init__(self.filename) + self.formatter = logging.Formatter( "[%(asctime)s | %(levelname)s] <%(module)s.%(funcName)s> - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", ) + self.logs_dir_path = data_dir_path / "logs" + os.makedirs(self.logs_dir_path, exist_ok=True) - if not os.path.exists(data_path): - os.mkdir(data_path) + self.addFilter(RedactSensitiveInfo()) - if not os.path.exists(os.path.join(data_path, "logs")): - os.mkdir(os.path.join(data_path, "logs")) + console_handler = logging.StreamHandler() + console_handler.setLevel(logging.INFO) + console_handler.setFormatter(self.formatter) + self.addHandler(console_handler) + self.console_handler = console_handler + self.file_handler = None - self.addFilter(RedactSensitiveInfo()) + def configure_logger(self, 
debug=False, log=False): + log_level = logging.DEBUG if debug else logging.INFO + self.setLevel(log_level) - log_level = logging.INFO - if settings.get("debug"): - log_level = logging.DEBUG + # Update console handler level + for handler in self.handlers: + handler.setLevel(log_level) - if settings.get("log"): + # Configure file handler + if log and not self.file_handler: + # Only add a new file handler if it hasn't been added before file_handler = logging.FileHandler( - os.path.join(get_data_path(), "logs", file_name), encoding="utf-8" + self.logs_dir_path / self.filename, encoding="utf-8" ) file_handler.setLevel(log_level) - file_handler.setFormatter(formatter) + file_handler.setFormatter(self.formatter) self.addHandler(file_handler) - - console_handler = logging.StreamHandler() - console_handler.setLevel(log_level) - console_handler.setFormatter(formatter) - self.addHandler(console_handler) + self.file_handler = ( + file_handler # Keep a reference to avoid adding it again + ) + elif not log and self.file_handler: + # If logging to file is disabled but the handler exists, remove it + self.removeHandler(self.file_handler) + self.file_handler = None logger = Logger() diff --git a/backend/utils/observable.py b/backend/utils/observable.py deleted file mode 100644 index c10f8320..00000000 --- a/backend/utils/observable.py +++ /dev/null @@ -1,10 +0,0 @@ -class Observable: - def __init__(self): - self.observers = [] - - def register_observer(self, observer): - self.observers.append(observer) - - def notify_observers(self): - for observer in self.observers: - observer.notify() diff --git a/backend/utils/parser.py b/backend/utils/parser.py index 18cebd17..68fb87da 100644 --- a/backend/utils/parser.py +++ b/backend/utils/parser.py @@ -1,228 +1,227 @@ -import re -import PTN -from typing import List -from pydantic import BaseModel -from utils.settings import settings_manager -from thefuzz import fuzz - - -class ParserConfig(BaseModel): - language: List[str] - include_4k: bool - highest_quality: bool - repack_proper: bool - - -class Parser: - - def __init__(self): - self.settings = ParserConfig(**settings_manager.get("parser")) - self.language = self.settings.language - self.resolution = self.determine_resolution() - - def determine_resolution(self): - """Determine the resolution to use based on user settings.""" - if self.settings.highest_quality: - return ["UHD", "2160p", "4K", "1080p", "720p"] - if self.settings.include_4k: - return ["2160p", "4K", "1080p", "720p"] - return ["1080p", "720p"] - - def parse(self, item, string) -> dict: - """Parse the given string and return True if it matches the user settings.""" - return self._parse(item, string) - - def _parse(self, item, string) -> dict: - """Parse the given string and return the parsed data.""" - parse = PTN.parse(string) - parsed_title = parse.get("title", "") - - # episodes - episodes = [] - if parse.get("episode", False): - episode = parse.get("episode") - if type(episode) == list: - for sub_episode in episode: - episodes.append(int(sub_episode)) - else: - episodes.append(int(episode)) - - if item is not None: - title_match = self.check_for_title_match(item, parsed_title) - is_4k = parse.get("resolution", False) in ["2160p", "4K", "UHD"] - is_complete = self._is_complete_series(string) - is_dual_audio = self._is_dual_audio(string) - _is_unwanted_quality = self._is_unwanted_quality(string) - - parsed_data = { - "string": string, - "parsed_title": parsed_title, - "fetch": False, - "is_4k": is_4k, - "is_dual_audio": is_dual_audio, - 
"is_complete": is_complete, - "is_unwanted_quality": _is_unwanted_quality, - "year": parse.get("year", False), - "resolution": parse.get("resolution", []), - "quality": parse.get("quality", []), - "season": parse.get("season", []), - "episodes": episodes, - "codec": parse.get("codec", []), - "audio": parse.get("audio", []), - "hdr": parse.get("hdr", False), - "upscaled": parse.get("upscaled", False), - "remastered": parse.get("remastered", False), - "proper": parse.get("proper", False), - "repack": parse.get("repack", False), - "subtitles": parse.get("subtitles") == "Available", - "language": parse.get("language", []), - "remux": parse.get("remux", False), - "extended": parse.get("extended", False) - } - - # bandaid for now, this needs to be refactored to make less calls to _parse - if item is not None: - parsed_data["title_match"] = title_match - - parsed_data["fetch"] = self._should_fetch(parsed_data) - return parsed_data - - def episodes(self, string) -> List[int]: - """Return a list of episodes in the given string.""" - parse = self._parse(string) - return parse["episodes"] - - def episodes_in_season(self, season, string) -> List[int]: - """Return a list of episodes in the given season.""" - parse = self._parse(item=None, string=string) - if parse["season"] == season: - return parse["episodes"] - return [] - - def _should_fetch(self, parsed_data: dict) -> bool: - """Determine if the parsed content should be fetched.""" - # This is where we determine if the item should be fetched - # based on the user settings and predefined rules. - # Edit with caution. All have to match for the item to be fetched. - # item_language = self._get_item_language(item) - return ( - parsed_data["resolution"] in self.resolution and - # any(lang in parsed_data.get("language", item_language) for lang in self.language) and - not parsed_data["is_unwanted_quality"] - ) - - def _is_highest_quality(self, parsed_data: dict) -> bool: - """Check if content is `highest quality`.""" - return any( - parsed.get("resolution") in ["UHD", "2160p", "4K"] or - parsed.get("hdr", False) or - parsed.get("remux", False) or - parsed.get("upscaled", False) - for parsed in parsed_data - ) - - def _is_dual_audio(self, string) -> bool: - """Check if any content in parsed_data has dual audio.""" - dual_audio_patterns = [ - re.compile(r"\bmulti(?:ple)?[ .-]*(?:lang(?:uages?)?|audio|VF2)?\b", re.IGNORECASE), - re.compile(r"\btri(?:ple)?[ .-]*(?:audio|dub\w*)\b", re.IGNORECASE), - re.compile(r"\bdual[ .-]*(?:au?$|[aá]udio|line)\b", re.IGNORECASE), - re.compile(r"\bdual\b(?![ .-]*sub)", re.IGNORECASE), - re.compile(r"\b(?:audio|dub(?:bed)?)[ .-]*dual\b", re.IGNORECASE), - re.compile(r"\bengl?(?:sub[A-Z]*)?\b", re.IGNORECASE), - re.compile(r"\beng?sub[A-Z]*\b", re.IGNORECASE), - re.compile(r"\b(?:DUBBED|dublado|dubbing|DUBS?)\b", re.IGNORECASE), - ] - return any(pattern.search(string) for pattern in dual_audio_patterns) - - @staticmethod - def _is_complete_series(string) -> bool: - """Check if string is a `complete series`.""" - # Can be used on either movie or show item types - series_patterns = [ - re.compile(r"(?:\bthe\W)?(?:\bcomplete|collection|dvd)?\b[ .]?\bbox[ .-]?set\b", re.IGNORECASE), - re.compile(r"(?:\bthe\W)?(?:\bcomplete|collection|dvd)?\b[ .]?\bmini[ .-]?series\b", re.IGNORECASE), - re.compile(r"(?:\bthe\W)?(?:\bcomplete|full|all)\b.*\b(?:series|seasons|collection|episodes|set|pack|movies)\b", re.IGNORECASE), - re.compile(r"\b(?:series|seasons|movies?)\b.*\b(?:complete|collection)\b", re.IGNORECASE), - 
re.compile(r"(?:\bthe\W)?\bultimate\b[ .]\bcollection\b", re.IGNORECASE), - re.compile(r"\bcollection\b.*\b(?:set|pack|movies)\b", re.IGNORECASE), - re.compile(r"\bcollection\b", re.IGNORECASE), - re.compile(r"duology|trilogy|quadr[oi]logy|tetralogy|pentalogy|hexalogy|heptalogy|anthology|saga", re.IGNORECASE) - ] - return any(pattern.search(string) for pattern in series_patterns) - - @staticmethod - def _is_unwanted_quality(string) -> bool: - """Check if string has an 'unwanted' quality. Default to False.""" - unwanted_patterns = [ - re.compile(r"\b(?:H[DQ][ .-]*)?CAM(?:H[DQ])?(?:[ .-]*Rip)?\b", re.IGNORECASE), - re.compile(r"\b(?:H[DQ][ .-]*)?S[ .-]*print\b", re.IGNORECASE), - re.compile(r"\b(?:HD[ .-]*)?T(?:ELE)?S(?:YNC)?(?:Rip)?\b", re.IGNORECASE), - re.compile(r"\b(?:HD[ .-]*)?T(?:ELE)?C(?:INE)?(?:Rip)?\b", re.IGNORECASE), - re.compile(r"\bP(?:re)?DVD(?:Rip)?\b", re.IGNORECASE), - re.compile(r"\b(?:DVD?|BD|BR)?[ .-]*Scr(?:eener)?\b", re.IGNORECASE), - re.compile(r"\bVHS\b", re.IGNORECASE), - re.compile(r"\bHD[ .-]*TV(?:Rip)?\b", re.IGNORECASE), - re.compile(r"\bDVB[ .-]*(?:Rip)?\b", re.IGNORECASE), - re.compile(r"\bSAT[ .-]*Rips?\b", re.IGNORECASE), - re.compile(r"\bTVRips?\b", re.IGNORECASE), - re.compile(r"\bR5\b", re.IGNORECASE), - re.compile(r"\b(DivX|XviD)\b", re.IGNORECASE), - ] - return any(pattern.search(string) for pattern in unwanted_patterns) - - def check_for_title_match(self, item, parsed_title, threshold=90) -> bool: - """Check if the title matches PTN title using fuzzy matching.""" - target_title = item.title - if item.type == "season": - target_title = item.parent.title - elif item.type == "episode": - target_title = item.parent.parent.title - match_score = fuzz.ratio(parsed_title.lower(), target_title.lower()) - if match_score >= threshold: - return True - return False - - def _get_item_language(self, item) -> str: - """Get the language of the item.""" - # This is crap. Need to switch to using a dict instead. 
- if item.type == "season": - if item.parent.language == "en": - if item.parent.is_anime: - return ["English", "Japanese"] - return ["English"] - elif item.type == "episode": - if item.parent.parent.language == "en": - if item.parent.parent.is_anime: - return ["English", "Japanese"] - return ["English"] - if item.language == "en": - if item.is_anime: - return ["English", "Japanese"] - return ["English"] - if item.is_anime: - return ["English", "Japanese"] - return ["English"] - - -# def sort_streams(streams: dict, parser: Parser) -> dict: -# """Sorts streams based on user preferences.""" -# def sorting_key(item): -# _, stream = item -# parsed_data = stream.get('parsed_data', {}) - -# points = 0 -# if parser._is_dual_audio(parsed_data.get("string", "")): -# points += 5 -# if parser._is_repack_or_proper(parsed_data): -# points += 3 -# if parsed_data.get("is_4k", False) and (parser.settings.highest_quality or parser.settings.include_4k): -# points += 7 -# if not parsed_data.get("is_unwanted", False): -# points -= 10 # Unwanted content should be pushed to the bottom -# return points -# sorted_streams = sorted(streams.items(), key=sorting_key, reverse=True) -# return dict(sorted_streams) - - -parser = Parser() \ No newline at end of file +import re +import PTN +from typing import List +from pydantic import BaseModel +from program.settings.manager import settings_manager +from thefuzz import fuzz + + +class ParserConfig(BaseModel): + language: List[str] + include_4k: bool + highest_quality: bool + repack_proper: bool + + +class Parser: + def __init__(self): + self.settings = settings_manager.settings.parser + self.language = self.settings.language + self.resolution = self.determine_resolution() + + def determine_resolution(self): + """Determine the resolution to use based on user settings.""" + if self.settings.highest_quality: + return ["UHD", "2160p", "4K", "1080p", "720p"] + if self.settings.include_4k: + return ["2160p", "4K", "1080p", "720p"] + return ["1080p", "720p"] + + def parse(self, item, string) -> dict: + """Parse the given string and return True if it matches the user settings.""" + return self._parse(item, string) + + def _parse(self, item, string) -> dict: + """Parse the given string and return the parsed data.""" + parse = PTN.parse(string) + parsed_title = parse.get("title", "") + + # episodes + episodes = [] + if parse.get("episode", False): + episode = parse.get("episode") + if isinstance(episode, list): + for sub_episode in episode: + episodes.append(int(sub_episode)) + else: + episodes.append(int(episode)) + + if item is not None: + title_match = self.check_for_title_match(item, parsed_title) + is_4k = parse.get("resolution", False) in ["2160p", "4K", "UHD"] + is_complete = self._is_complete_series(string) + is_dual_audio = self._is_dual_audio(string) + _is_unwanted_quality = self._is_unwanted_quality(string) + + parsed_data = { + "string": string, + "parsed_title": parsed_title, + "fetch": False, + "is_4k": is_4k, + "is_dual_audio": is_dual_audio, + "is_complete": is_complete, + "is_unwanted_quality": _is_unwanted_quality, + "year": parse.get("year", False), + "resolution": parse.get("resolution", []), + "quality": parse.get("quality", []), + "season": parse.get("season", []), + "episodes": episodes, + "codec": parse.get("codec", []), + "audio": parse.get("audio", []), + "hdr": parse.get("hdr", False), + "upscaled": parse.get("upscaled", False), + "remastered": parse.get("remastered", False), + "proper": parse.get("proper", False), + "repack": parse.get("repack", False), + 
"subtitles": parse.get("subtitles") == "Available", + "language": parse.get("language", []), + "remux": parse.get("remux", False), + "extended": parse.get("extended", False), + } + + # bandaid for now, this needs to be refactored to make less calls to _parse + if item is not None: + parsed_data["title_match"] = title_match + + parsed_data["fetch"] = self._should_fetch(parsed_data) + return parsed_data + + def episodes(self, string) -> List[int]: + """Return a list of episodes in the given string.""" + parse = self._parse(None, string) + return parse["episodes"] + + def episodes_in_season(self, season, string) -> List[int]: + """Return a list of episodes in the given season.""" + parse = self._parse(None, string) + if parse["season"] == season: + return parse["episodes"] + return [] + + def _should_fetch(self, parsed_data: dict) -> bool: + """Determine if the parsed content should be fetched.""" + # This is where we determine if the item should be fetched + # based on the user settings and predefined rules. + # Edit with caution. All have to match for the item to be fetched. + # item_language = self._get_item_language(item) + return ( + parsed_data["resolution"] in self.resolution + and + # any(lang in parsed_data.get("language", item_language) for lang in self.language) and + not parsed_data["is_unwanted_quality"] + ) + + def _is_highest_quality(self, parsed_data: dict) -> bool: + """Check if content is `highest quality`.""" + return any( + parsed.get("resolution") in ["UHD", "2160p", "4K"] + or parsed.get("hdr", False) + or parsed.get("remux", False) + or parsed.get("upscaled", False) + for parsed in parsed_data + ) + + def _is_dual_audio(self, string) -> bool: + """Check if any content in parsed_data has dual audio.""" + dual_audio_patterns = [ + re.compile( + r"\bmulti(?:ple)?[ .-]*(?:lang(?:uages?)?|audio|VF2)?\b", re.IGNORECASE + ), + re.compile(r"\btri(?:ple)?[ .-]*(?:audio|dub\w*)\b", re.IGNORECASE), + re.compile(r"\bdual[ .-]*(?:au?$|[aá]udio|line)\b", re.IGNORECASE), + re.compile(r"\bdual\b(?![ .-]*sub)", re.IGNORECASE), + re.compile(r"\b(?:audio|dub(?:bed)?)[ .-]*dual\b", re.IGNORECASE), + re.compile(r"\bengl?(?:sub[A-Z]*)?\b", re.IGNORECASE), + re.compile(r"\beng?sub[A-Z]*\b", re.IGNORECASE), + re.compile(r"\b(?:DUBBED|dublado|dubbing|DUBS?)\b", re.IGNORECASE), + ] + return any(pattern.search(string) for pattern in dual_audio_patterns) + + @staticmethod + def _is_complete_series(string) -> bool: + """Check if string is a `complete series`.""" + # Can be used on either movie or show item types + series_patterns = [ + re.compile( + r"(?:\bthe\W)?(?:\bcomplete|collection|dvd)?\b[ .]?\bbox[ .-]?set\b", + re.IGNORECASE, + ), + re.compile( + r"(?:\bthe\W)?(?:\bcomplete|collection|dvd)?\b[ .]?\bmini[ .-]?series\b", + re.IGNORECASE, + ), + re.compile( + r"(?:\bthe\W)?(?:\bcomplete|full|all)\b.*\b(?:series|seasons|collection|episodes|set|pack|movies)\b", + re.IGNORECASE, + ), + re.compile( + r"\b(?:series|seasons|movies?)\b.*\b(?:complete|collection)\b", + re.IGNORECASE, + ), + re.compile(r"(?:\bthe\W)?\bultimate\b[ .]\bcollection\b", re.IGNORECASE), + re.compile(r"\bcollection\b.*\b(?:set|pack|movies)\b", re.IGNORECASE), + re.compile(r"\bcollection\b", re.IGNORECASE), + re.compile( + r"duology|trilogy|quadr[oi]logy|tetralogy|pentalogy|hexalogy|heptalogy|anthology|saga", + re.IGNORECASE, + ), + ] + return any(pattern.search(string) for pattern in series_patterns) + + @staticmethod + def _is_unwanted_quality(string) -> bool: + """Check if string has an 'unwanted' quality. 
Default to False.""" + unwanted_patterns = [ + re.compile( + r"\b(?:H[DQ][ .-]*)?CAM(?:H[DQ])?(?:[ .-]*Rip)?\b", re.IGNORECASE + ), + re.compile(r"\b(?:H[DQ][ .-]*)?S[ .-]*print\b", re.IGNORECASE), + re.compile(r"\b(?:HD[ .-]*)?T(?:ELE)?S(?:YNC)?(?:Rip)?\b", re.IGNORECASE), + re.compile(r"\b(?:HD[ .-]*)?T(?:ELE)?C(?:INE)?(?:Rip)?\b", re.IGNORECASE), + re.compile(r"\bP(?:re)?DVD(?:Rip)?\b", re.IGNORECASE), + re.compile(r"\b(?:DVD?|BD|BR)?[ .-]*Scr(?:eener)?\b", re.IGNORECASE), + re.compile(r"\bVHS\b", re.IGNORECASE), + re.compile(r"\bHD[ .-]*TV(?:Rip)?\b", re.IGNORECASE), + re.compile(r"\bDVB[ .-]*(?:Rip)?\b", re.IGNORECASE), + re.compile(r"\bSAT[ .-]*Rips?\b", re.IGNORECASE), + re.compile(r"\bTVRips?\b", re.IGNORECASE), + re.compile(r"\bR5\b", re.IGNORECASE), + re.compile(r"\b(DivX|XviD)\b", re.IGNORECASE), + ] + return any(pattern.search(string) for pattern in unwanted_patterns) + + def check_for_title_match(self, item, parsed_title, threshold=90) -> bool: + """Check if the title matches PTN title using fuzzy matching.""" + target_title = item.title + if item.type == "season": + target_title = item.parent.title + elif item.type == "episode": + target_title = item.parent.parent.title + match_score = fuzz.ratio(parsed_title.lower(), target_title.lower()) + if match_score >= threshold: + return True + return False + + def _get_item_language(self, item) -> List[str]: + """Get the language of the item.""" + # This is crap. Need to switch to using a dict instead. + if item.type == "season": + if item.parent.language == "en": + if item.parent.is_anime: + return ["English", "Japanese"] + return ["English"] + elif item.type == "episode": + if item.parent.parent.language == "en": + if item.parent.parent.is_anime: + return ["English", "Japanese"] + return ["English"] + if item.language == "en": + if item.is_anime: + return ["English", "Japanese"] + return ["English"] + if item.is_anime: + return ["English", "Japanese"] + return ["English"] + + +parser = Parser() diff --git a/backend/utils/request.py b/backend/utils/request.py index 1fe29e49..bd0052f1 100644 --- a/backend/utils/request.py +++ b/backend/utils/request.py @@ -32,13 +32,13 @@ def __init__(self, response: requests.Response, response_type=SimpleNamespace): def handle_response(self, response: requests.Response): """Handle different types of responses""" - if not self.is_ok and self.status_code not in [429, 520, 522]: - logger.warning("Error: %s %s", response.status_code, response.content) + if not self.is_ok and self.status_code not in [404, 429, 509, 520, 522]: + logger.error("Error: %s %s", response.status_code, response.content) if self.status_code in [520, 522]: # Cloudflare error from Torrentio raise requests.exceptions.ConnectTimeout(response.content) if self.status_code not in [200, 201, 204]: - if self.status_code in [429]: + if self.status_code in [404, 429, 502, 509]: raise requests.exceptions.RequestException(response.content) return {} if len(response.content) > 0: @@ -59,6 +59,16 @@ def handle_response(self, response: requests.Response): ) return {} + def raise_for_status(self): + """Raises HTTPError, if one occurred.""" + http_error_msg = "" + if 400 <= self.status_code < 500: + http_error_msg = f"{self.status_code} Client Error" + elif 500 <= self.status_code < 600: + http_error_msg = f"{self.status_code} Server Error" + if http_error_msg: + raise requests.HTTPError(http_error_msg, response=self.response) + def _handle_request_exception() -> SimpleNamespace: """Handle exceptions during requests and return a namespace object.""" @@
-149,6 +159,22 @@ def put( retry_if_failed=retry_if_failed, ) +def delete( + url: str, + timeout=10, + data=None, + additional_headers=None, + retry_if_failed=False, +) -> ResponseObject: + """Requests delete wrapper""" + return _make_request( + "DELETE", + url, + data=data, + timeout=timeout, + additional_headers=additional_headers, + retry_if_failed=retry_if_failed, + ) def _xml_to_simplenamespace(xml_string): root = etree.fromstring(xml_string) @@ -168,10 +194,6 @@ class RateLimitExceeded(Exception): pass -import time -from threading import Lock - - class RateLimiter: """ A rate limiter class that limits the number of calls within a specified period. diff --git a/backend/utils/service_manager.py b/backend/utils/service_manager.py deleted file mode 100644 index 6b15c008..00000000 --- a/backend/utils/service_manager.py +++ /dev/null @@ -1,48 +0,0 @@ -from copy import deepcopy -from threading import Thread -from utils.settings import settings_manager - - -class ServiceManager: - def __init__(self, media_items=None, register_observer=False, *services): - self.media_items = media_items - self.services = [] - self.initialize_services(services) - self.settings = deepcopy(settings_manager.get_all()) - if register_observer: - settings_manager.register_observer(self) - - def initialize_services(self, modules=None): - services = [] - - # Reinitialize - if self.services: - for index, service in enumerate(self.services): - if modules and service.key in modules: - self.services[index] = service.__class__(self.media_items) - services.append(self.services[index]) - - # Initialize - elif modules: - for service in modules: - new_service = service(self.media_items) - self.services.append(new_service) - services.append(new_service) - - # Start the services - for service in services: - if Thread in service.__class__.__bases__ and service.initialized and not service.running: - service.start() - - def update_settings(self, new_settings): - modules_to_update = [] - for module, values in self.settings.items(): - for new_module, new_values in new_settings.items(): - if module == new_module: - if values != new_values: - modules_to_update.append(module) - self.settings = deepcopy(new_settings) - self.initialize_services(modules_to_update) - - def notify(self): - self.update_settings(settings_manager.settings) diff --git a/backend/utils/settings.py b/backend/utils/settings.py deleted file mode 100644 index 6fcb5175..00000000 --- a/backend/utils/settings.py +++ /dev/null @@ -1,87 +0,0 @@ -"""Settings manager""" -from utils.observable import Observable -import json -import os -import shutil - - -class SettingsManager(Observable): - """Class that handles settings""" - - def __init__(self): - self.filename = "data/settings.json" - self.config_dir = os.path.abspath( - os.path.join(os.path.dirname(__file__), os.pardir, os.pardir) - ) - self.settings_file = os.path.join(self.config_dir, self.filename) - self.settings = {} - self.observers = [] - self.load() - - def register_observer(self, observer): - self.observers.append(observer) - - def notify_observers(self): - for observer in self.observers: - observer.notify() - - def load(self): - """Load settings from file""" - if not os.path.exists(self.settings_file): - default_settings_path = os.path.join( - os.path.dirname(__file__), "default_settings.json" - ) - shutil.copy(default_settings_path, self.settings_file) - with open(self.settings_file, "r", encoding="utf-8") as file: - self.settings = json.loads(file.read()) - self.notify_observers() - - def save(self): - """Save 
settings to file""" - with open(self.settings_file, "w", encoding="utf-8") as file: - json.dump(self.settings, file, indent=4) - - def get(self, key): - """Get setting with key""" - return _get_nested_attr(self.settings, key) - - def set(self, data): - """Set setting value with key""" - for setting in data: - _set_nested_attr(self.settings, setting.key, setting.value) - self.notify_observers() - - def get_all(self): - """Return all settings""" - return self.settings - -def _get_nested_attr(obj, key): - if "." in key: - parts = key.split(".", 1) - current_key, rest_of_keys = parts[0], parts[1] - - if not obj.get(current_key, None): - return None - - current_obj = obj.get(current_key) - return _get_nested_attr(current_obj, rest_of_keys) - else: - return obj.get(key, None) - - -def _set_nested_attr(obj, key, value): - if "." in key: - parts = key.split(".", 1) - current_key, rest_of_keys = parts[0], parts[1] - - if not obj.get(current_key): - return False - - current_obj = obj.get(current_key) - return _set_nested_attr(current_obj, rest_of_keys, value) - else: - obj[key] = value - return True - - -settings_manager = SettingsManager() diff --git a/backend/utils/utils.py b/backend/utils/utils.py index b75d1c33..091710e5 100644 --- a/backend/utils/utils.py +++ b/backend/utils/utils.py @@ -13,9 +13,6 @@ def __init__(self, media_items, data_path: str): def start(self) -> None: self.load() self.running = True - for item in self.media_items: - if item._lock.locked(): - item._lock.release() return super().start() def stop(self) -> None: @@ -36,4 +33,3 @@ def run(self): if not self.running: break time.sleep(i) - diff --git a/frontend/src/lib/forms/general-form.svelte b/frontend/src/lib/forms/general-form.svelte index 4495ae07..8f43a76e 100644 --- a/frontend/src/lib/forms/general-form.svelte +++ b/frontend/src/lib/forms/general-form.svelte @@ -56,16 +56,16 @@ data.data[service] === true); + + return { + data: settingsData, + allServicesTrue: allServicesTrue + }; +} + +/** + * Saves the settings from memory to the json file in the backend. + * @param fetch - The fetch function used to make the request. + * @returns A promise that resolves to an object containing the response data. + */ +export async function saveSettings(fetch: any) { + const data = await fetch('http://127.0.0.1:8080/settings/save', { + method: 'POST' + }); + const response = await data.json(); + + return { + data: response + }; +} + +/** + * Loads settings from the json to memory in backend. + * @param fetch - The fetch function used to make the HTTP request. + * @returns A promise that resolves to an object containing the loaded settings. 
+ */ +export async function loadSettings(fetch: any) { + const data = await fetch('http://127.0.0.1:8080/settings/load', { + method: 'GET' + }); + const response = await data.json(); + + return { + data: response + }; +} + // General Settings ----------------------------------------------------------------------------------- export const generalSettingsToGet: string[] = ['debug', 'log', 'symlink', 'real_debrid']; export const generalSettingsServices: string[] = ['symlink', 'real_debrid']; @@ -8,8 +68,8 @@ export const generalSettingsServices: string[] = ['symlink', 'real_debrid']; export const generalSettingsSchema = z.object({ debug: z.boolean().default(true), log: z.boolean().default(true), - host_path: z.string().min(1), - container_path: z.string().min(1), + rclone_path: z.string().min(1), + library_path: z.string().min(1), realdebrid_api_key: z.string().min(1) }); export type GeneralSettingsSchema = typeof generalSettingsSchema; @@ -18,8 +78,8 @@ export function generalSettingsToPass(data: any) { return { debug: data.data.debug, log: data.data.log, - host_path: data.data.symlink.host_path, - container_path: data.data.symlink.container_path, + rclone_path: data.data.symlink.rclone_path, + library_path: data.data.symlink.library_path, realdebrid_api_key: data.data.real_debrid.api_key }; } @@ -37,8 +97,8 @@ export function generalSettingsToSet(form: SuperValidated { key: 'symlink', value: { - host_path: form.data.host_path, - container_path: form.data.container_path + rclone_path: form.data.rclone_path, + library_path: form.data.library_path } }, { @@ -56,14 +116,14 @@ export const contentSettingsServices: string[] = ['content']; export const contentSettingsSchema = z.object({ overseerr_enabled: z.boolean().default(false), - overseerr_url: z.string().url().optional().default(''), + overseerr_url: z.string().optional().default(''), overseerr_api_key: z.string().optional().default(''), mdblist_enabled: z.boolean().default(false), mdblist_api_key: z.string().optional().default(''), mdblist_update_interval: z.number().nonnegative().int().optional().default(80), mdblist_lists: z.string().array().optional().default(['']), plex_watchlist_enabled: z.boolean().default(false), - plex_watchlist_rss: z.union([z.string().url(), z.string().optional()]).optional().default(''), + plex_watchlist_rss: z.string().optional().default(''), plex_watchlist_update_interval: z.number().nonnegative().int().optional().default(80), listrr_enabled: z.boolean().default(false), listrr_api_key: z.string().optional().default(''), @@ -131,13 +191,15 @@ export const mediaServerSettingsToGet: string[] = ['plex']; export const mediaServerSettingsServices: string[] = ['plex']; export const mediaServerSettingsSchema = z.object({ + update_interval: z.string().optional().default(''), plex_token: z.string().optional().default(''), - plex_url: z.string().url().optional().default('') + plex_url: z.string().optional().default('') }); export type MediaServerSettingsSchema = typeof mediaServerSettingsSchema; export function mediaServerSettingsToPass(data: any) { return { + update_interval: data.data.plex.update_interval, plex_token: data.data.plex.token, plex_url: data.data.plex.url }; @@ -148,6 +210,7 @@ export function mediaServerSettingsToSet(form: SuperValidated {/if} +
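The three helpers above replace the old combined saveSettings: setSettings posts the changed values and reports whether the affected services re-initialized, and only then are settings persisted and reloaded. The same sequence can be exercised directly against the backend; a rough Python sketch (endpoint paths come from the diff, the payload shape mirrors the {key, value} objects built by generalSettingsToSet, and the path values are placeholders):

import requests

BASE = "http://127.0.0.1:8080"

# Placeholder payload shaped like generalSettingsToSet's output.
to_set = [
    {
        "key": "symlink",
        "value": {"rclone_path": "/mnt/rclone", "library_path": "/mnt/library"},
    }
]
requests.post(f"{BASE}/settings/set", json=to_set, timeout=10)

# Persist and reload only if every affected service re-initialized.
services = requests.get(f"{BASE}/services", timeout=10).json()["data"]
if all(services.get(s) is True for s in ("symlink", "real_debrid")):
    requests.post(f"{BASE}/settings/save", timeout=10)
    requests.get(f"{BASE}/settings/load", timeout=10)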
diff --git a/frontend/src/lib/helpers.ts b/frontend/src/lib/helpers.ts index b1a13a55..6e27eb0c 100644 --- a/frontend/src/lib/helpers.ts +++ b/frontend/src/lib/helpers.ts @@ -73,27 +73,3 @@ export function convertIcebergItemsToObject(items: IcebergItem[]) { return result; } - -export async function saveSettings(fetch: any, toSet: any) { - const data = await fetch('http://127.0.0.1:8080/settings/set', { - method: 'POST', - headers: { - 'Content-Type': 'application/json' - }, - body: JSON.stringify(toSet) - }); - - const saveSettings = await fetch('http://127.0.0.1:8080/settings/save', { - method: 'POST' - }); - - const loadSettings = await fetch('http://127.0.0.1:8080/settings/load', { - method: 'GET' - }); - - return { - data, - saveSettings, - loadSettings - }; -} diff --git a/frontend/src/routes/settings/about/+page.svelte b/frontend/src/routes/settings/about/+page.svelte index 1a8b0625..a04618aa 100644 --- a/frontend/src/routes/settings/about/+page.svelte +++ b/frontend/src/routes/settings/about/+page.svelte @@ -10,13 +10,13 @@ export let data: PageData; const version = data.settings.data.version; - const host_path = data.settings.data.symlink.host_path; - const container_path = data.settings.data.symlink.container_path; + const rclone_path = data.settings.data.symlink.rclone_path; + const library_path = data.settings.data.symlink.library_path; interface AboutData { [key: string]: any; - host_path: string; - container_path: string; + rclone_path: string; + library_path: string; } type SupportData = { @@ -26,8 +26,8 @@ }; const aboutData: AboutData = { - host_path, - container_path + rclone_path, + library_path }; const supportData: SupportData = { github: 'https://github.com/dreulavelle/iceberg', diff --git a/frontend/src/routes/settings/content/+page.server.ts b/frontend/src/routes/settings/content/+page.server.ts index 8ab7dbb2..4eafebe3 100644 --- a/frontend/src/routes/settings/content/+page.server.ts +++ b/frontend/src/routes/settings/content/+page.server.ts @@ -1,8 +1,11 @@ import type { PageServerLoad, Actions } from './$types'; import { fail, error, redirect } from '@sveltejs/kit'; import { message, superValidate } from 'sveltekit-superforms/server'; -import { saveSettings, formatWords } from '$lib/helpers'; +import { formatWords } from '$lib/helpers'; import { + setSettings, + saveSettings, + loadSettings, contentSettingsSchema, contentSettingsToGet, contentSettingsServices, @@ -41,7 +44,19 @@ export const actions: Actions = { const toSet = contentSettingsToSet(form); try { - const data = await saveSettings(event.fetch, toSet); + const data = await setSettings(event.fetch, toSet, contentSettingsServices); + if (!data.allServicesTrue) { + return message( + form, + `${contentSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`, + { + status: 400 + } + ); + } + + const save = await saveSettings(event.fetch); + const load = await loadSettings(event.fetch); } catch (e) { console.error(e); return message(form, 'Unable to save settings. API is down.', { @@ -49,21 +64,6 @@ export const actions: Actions = { }); } - const data = await event.fetch('http://127.0.0.1:8080/services'); - const services = await data.json(); - const allServicesTrue: boolean = contentSettingsServices.every( - (service) => services.data[service] === true - ); - if (!allServicesTrue) { - return message( - form, - `${contentSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. 
Please check your settings.`, - { - status: 400 - } - ); - } - if (event.url.searchParams.get('onboarding') === 'true') { redirect(302, '/onboarding/4'); } diff --git a/frontend/src/routes/settings/general/+page.server.ts b/frontend/src/routes/settings/general/+page.server.ts index 6e420686..eb6fa2e0 100644 --- a/frontend/src/routes/settings/general/+page.server.ts +++ b/frontend/src/routes/settings/general/+page.server.ts @@ -1,8 +1,11 @@ import type { PageServerLoad, Actions } from './$types'; import { fail, error, redirect } from '@sveltejs/kit'; import { message, superValidate } from 'sveltekit-superforms/server'; -import { saveSettings, formatWords } from '$lib/helpers'; +import { formatWords } from '$lib/helpers'; import { + setSettings, + saveSettings, + loadSettings, generalSettingsSchema, generalSettingsToGet, generalSettingsServices, @@ -43,7 +46,18 @@ export const actions: Actions = { const toSet = generalSettingsToSet(form); try { - const data = await saveSettings(event.fetch, toSet); + const data = await setSettings(event.fetch, toSet, generalSettingsServices); + if (!data.allServicesTrue) { + return message( + form, + `${generalSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`, + { + status: 400 + } + ); + } + const save = await saveSettings(event.fetch); + const load = await loadSettings(event.fetch); } catch (e) { console.error(e); return message(form, 'Unable to save settings. API is down.', { @@ -51,21 +65,6 @@ export const actions: Actions = { }); } - const data = await event.fetch('http://127.0.0.1:8080/services'); - const services = await data.json(); - const allServicesTrue: boolean = generalSettingsServices.every( - (service) => services.data[service] === true - ); - if (!allServicesTrue) { - return message( - form, - `${generalSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`, - { - status: 400 - } - ); - } - if (event.url.searchParams.get('onboarding') === 'true') { redirect(302, '/onboarding/2'); } diff --git a/frontend/src/routes/settings/mediaserver/+page.server.ts b/frontend/src/routes/settings/mediaserver/+page.server.ts index a563079e..bd3a796d 100644 --- a/frontend/src/routes/settings/mediaserver/+page.server.ts +++ b/frontend/src/routes/settings/mediaserver/+page.server.ts @@ -1,8 +1,11 @@ import type { PageServerLoad, Actions } from './$types'; import { fail, error, redirect } from '@sveltejs/kit'; import { message, superValidate } from 'sveltekit-superforms/server'; -import { saveSettings, formatWords } from '$lib/helpers'; +import { formatWords } from '$lib/helpers'; import { + setSettings, + saveSettings, + loadSettings, mediaServerSettingsSchema, mediaServerSettingsToGet, mediaServerSettingsServices, @@ -41,7 +44,19 @@ export const actions: Actions = { const toSet = mediaServerSettingsToSet(form); try { - const data = await saveSettings(event.fetch, toSet); + const data = await setSettings(event.fetch, toSet, mediaServerSettingsServices); + if (!data.allServicesTrue) { + return message( + form, + `${mediaServerSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`, + { + status: 400 + } + ); + } + + const save = await saveSettings(event.fetch); + const load = await loadSettings(event.fetch); } catch (e) { console.error(e); return message(form, 'Unable to save settings. 
diff --git a/frontend/src/routes/settings/scrapers/+page.server.ts b/frontend/src/routes/settings/scrapers/+page.server.ts
index d3d73054..f838fd16 100644
--- a/frontend/src/routes/settings/scrapers/+page.server.ts
+++ b/frontend/src/routes/settings/scrapers/+page.server.ts
@@ -1,8 +1,11 @@
 import type { PageServerLoad, Actions } from './$types';
 import { fail, error, redirect } from '@sveltejs/kit';
 import { message, superValidate } from 'sveltekit-superforms/server';
-import { saveSettings, formatWords } from '$lib/helpers';
+import { formatWords } from '$lib/helpers';
 import {
+    setSettings,
+    saveSettings,
+    loadSettings,
     scrapersSettingsSchema,
     scrapersSettingsToGet,
     scrapersSettingsServices,
@@ -41,7 +44,19 @@ export const actions: Actions = {
         const toSet = scrapersSettingsToSet(form);

         try {
-            const data = await saveSettings(event.fetch, toSet);
+            const data = await setSettings(event.fetch, toSet, scrapersSettingsServices);
+            if (!data.allServicesTrue) {
+                return message(
+                    form,
+                    `${scrapersSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`,
+                    {
+                        status: 400
+                    }
+                );
+            }
+
+            const save = await saveSettings(event.fetch);
+            const load = await loadSettings(event.fetch);
         } catch (e) {
             console.error(e);
             return message(form, 'Unable to save settings. API is down.', {
@@ -49,21 +64,6 @@ export const actions: Actions = {
             });
         }

-        const data = await event.fetch('http://127.0.0.1:8080/services');
-        const services = await data.json();
-        const allServicesTrue: boolean = scrapersSettingsServices.every(
-            (service) => services.data[service] === true
-        );
-        if (!allServicesTrue) {
-            return message(
-                form,
-                `${scrapersSettingsServices.map(formatWords).join(', ')} service(s) failed to initialize. Please check your settings.`,
-                {
-                    status: 400
-                }
-            );
-        }
-
         if (event.url.searchParams.get('onboarding') === 'true') {
             redirect(302, '/?onboarding=true');
         }
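All four settings actions now share one shape: validate the form, push the values with setSettings, return a 400 message if any of the page's services fails to initialize, then persist and reload the backend settings with saveSettings and loadSettings. Read together, the onboarding redirects form the wizard's sequence: general hands off to /onboarding/2, mediaserver to /onboarding/3, content to /onboarding/4, and scrapers finishes by sending the user back to /?onboarding=true. Since only the schema, the service list and the redirect target differ between the four files, the repeated try/catch body could be written once; a hedged sketch, with the helper name and shape invented for illustration:

    // Hypothetical shared runner for the four settings actions (not in the patch).
    async function submitSettings(
        fetchFn: typeof globalThis.fetch,
        toSet: object,
        services: string[]
    ): Promise<boolean> {
        const data = await setSettings(fetchFn, toSet, services);
        if (!data.allServicesTrue) return false; // caller returns the 400 message
        await saveSettings(fetchFn);
        await loadSettings(fetchFn);
        return true;
    }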
diff --git a/makefile b/makefile
index d867aa74..323ebef3 100644
--- a/makefile
+++ b/makefile
@@ -12,6 +12,7 @@ endif
 help:
 	@echo Iceberg Local Development Environment
 	@echo -------------------------------------------------------------------------
+	@echo install : Install the required packages
 	@echo start : Build and run the Iceberg container
 	@echo stop : Stop and remove the Iceberg container and image
 	@echo restart : Restart the Iceberg container (without rebuilding image)
@@ -24,11 +25,13 @@ help:
 	@echo backend : Start the backend development server
 	@echo -------------------------------------------------------------------------

+install:
+	@python3 -m pip install -r requirements.txt --break-system-packages
+
 start: stop
 	@docker build -t iceberg:latest -f Dockerfile .
 	@docker run -d --name iceberg --hostname iceberg --net host -e PUID=1000 -e PGID=1000 -v $(DATA_PATH):/iceberg/data -v /mnt:/mnt iceberg:latest
-	@echo Iceberg Frontend is running on http://localhost:3000/status/
-	@echo Iceberg Backend is running on http://localhost:8080/items/
+	@echo Iceberg is running on http://localhost:3000/
 	@docker logs iceberg -f

 stop:
@@ -55,8 +58,9 @@ ec:
 	@docker exec -it iceberg /bin/bash -c "vim /iceberg/data/settings.json"

 update:
-	@-git pull --rebase
-	@make start
+	@echo Not implemented yet
+	# @-git pull --rebase
+	# @make start

 frontend:
 	@echo Starting Frontend...
diff --git a/requirements.txt b/requirements.txt
index 7c090cdf..8920fef3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8,4 +8,6 @@ pydantic
 fastapi
 uvicorn[standard]
 parse-torrent-title
-thefuzz
\ No newline at end of file
+thefuzz
+apscheduler
+watchdog
\ No newline at end of file
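Two notes on the tooling changes. The new make install target passes --break-system-packages so that pip will install into a system Python marked as externally managed under PEP 668 (as on Debian 12 and recent Ubuntu); inside a virtual environment a plain pip install -r requirements.txt does the same job without the flag. And the two new backend dependencies are not exercised anywhere in this diff: APScheduler is a task-scheduling library and watchdog a filesystem-event library, so they presumably back scheduled jobs and file watching added elsewhere in the PR rather than anything shown here.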