
Document low-level MAM options #3329

Merged: 2 commits merged into master from doc-mam-options on Oct 13, 2021
Conversation

gustawlippa
Contributor

This PR addresses #3191 (review).
It adds documentation for the low-level MAM options. Although some of them apply only to a specific backend (for example RDBMS), I think it's better to keep them all in their own group, as they are probably very rarely used.

async_writer_rdbms_pool is not documented: if I am not mistaken, this option is not read by anything, but I left it in for now because I'm not sure.
simple seems odd: from what I understand, it duplicates what db_message_format already does. It might be useful for setups with multiple backends using different formats? If not, the code for mod_mam_cassandra_arch and mod_mam_muc_cassandra_arch could be simplified.

The options and their structure in the TOML config could use a refactor in my opinion, but that can be done in the future.

There are also some other minor changes, such as fixed typos (there is no "mod_mam_rdbms_async_writer").
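For readers who want to see where these options live, here is a minimal sketch of the relevant TOML section. The option names (db_message_format, simple) come from this PR's discussion; the section layout and the example values are illustrative assumptions, not copied from the merged documentation.

```toml
# Illustrative sketch only: option names come from the PR discussion,
# while the section layout and example values are assumptions; see the
# merged documentation for the authoritative reference.
[modules.mod_mam_meta]
  backend = "rdbms"

  # Low-level option documented in this PR: selects the module used to
  # serialize archived messages (e.g. XML vs. compressed Erlang terms).
  db_message_format = "mam_message_compressed_eterm"

  # Low-level option documented in this PR: as noted above, it appears
  # to overlap with what db_message_format already controls.
  simple = false
```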

@gustawlippa changed the title from "Correct name" to "Document low-level MAM options" on Oct 8, 2021

codecov bot commented Oct 8, 2021

Codecov Report

Merging #3329 (f46b973) into master (02b7e96) will increase coverage by 0.01%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3329      +/-   ##
==========================================
+ Coverage   80.67%   80.69%   +0.01%     
==========================================
  Files         397      397              
  Lines       32440    32440              
==========================================
+ Hits        26172    26176       +4     
+ Misses       6268     6264       -4     
Impacted Files Coverage Δ
src/mam/mod_mam_meta.erl 94.73% <ø> (ø)
src/mam/mod_mam_muc_rdbms_arch.erl 96.96% <ø> (ø)
src/mam/mod_mam_muc_rdbms_async_pool_writer.erl 71.42% <ø> (ø)
src/mam/mod_mam_rdbms_arch.erl 51.02% <ø> (ø)
src/mam/mod_mam_rdbms_async_pool_writer.erl 66.66% <ø> (ø)
src/elasticsearch/mongoose_elasticsearch.erl 76.92% <0.00%> (-7.70%) ⬇️
src/mod_last_rdbms.erl 96.15% <0.00%> (-3.85%) ⬇️
...c/global_distrib/mod_global_distrib_server_mgr.erl 74.57% <0.00%> (-2.26%) ⬇️
src/mam/mod_mam_elasticsearch_arch.erl 85.08% <0.00%> (-1.76%) ⬇️
src/mod_last.erl 86.76% <0.00%> (-1.48%) ⬇️
... and 6 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 02b7e96...f46b973. Read the comment docs.

mongoose-im (Collaborator) commented Oct 12, 2021

small_tests_24 / small_tests / f46b973
Reports root / small


internal_mnesia_24 / internal_mnesia / f46b973
Reports root/ big
OK: 1589 / Failed: 0 / User-skipped: 297 / Auto-skipped: 0


small_tests_23 / small_tests / f46b973
Reports root / small


dynamic_domains_pgsql_mnesia_24 / pgsql_mnesia / f46b973
Reports root/ big
OK: 2699 / Failed: 3 / User-skipped: 184 / Auto-skipped: 0

service_domain_db_SUITE:db:db_keeps_syncing_after_cluster_join
{error,{test_case_failed,{[<<"example1.com">>],
              [<<"example1.com">>,<<"example2.com">>]}}}

Report log

service_domain_db_SUITE:db:rest_with_auth:rest_delete_domain_cleans_data_from_mam
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,
          <<"bob_rest_delete_domain_cleans_data_from_mam_80.477531@example.org/res1">>,
          escalus_tcp,<0.28588.1>,
          [{event_manager,<0.28582.1>},
           {server,<<"example.org">>},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_80.477531">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.28582.1>},
            {server,<<"example.org">>},
            {username,
              <<"bob_rest_delete_domain_cleans_data_from_mam_80.477531">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_80.477531">>},
           {server,<<"example.org">>},
           {host,<<"localhost">>},
           {port,5232},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_80.477531">>},
           {server,<<"example.org">>},
           {host,<<"localhost">>},
           {password,<<"makota3">>},
           {port,5232},
           {stream_id,<<"efd11ff63ba977c4">>}]},
        5000],
       [{file,
          "/home/circleci/app/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,136}]},
     {service_domain_db_SUITE,
       '-rest_delete_domain_cleans_data_from_mam/1-fun-0-',5...

Report log

service_domain_db_SUITE:db:rest_without_auth:rest_delete_domain_cleans_data_from_mam
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,
          <<"bob_rest_delete_domain_cleans_data_from_mam_88.310183@example.org/res1">>,
          escalus_tcp,<0.29232.1>,
          [{event_manager,<0.29226.1>},
           {server,<<"example.org">>},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_88.310183">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.29226.1>},
            {server,<<"example.org">>},
            {username,
              <<"bob_rest_delete_domain_cleans_data_from_mam_88.310183">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_88.310183">>},
           {server,<<"example.org">>},
           {host,<<"localhost">>},
           {port,5232},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,
             <<"bob_rest_delete_domain_cleans_data_from_mam_88.310183">>},
           {server,<<"example.org">>},
           {host,<<"localhost">>},
           {password,<<"makota3">>},
           {port,5232},
           {stream_id,<<"a8a39fa945db4643">>}]},
        5000],
       [{file,
          "/home/circleci/app/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,136}]},
     {service_domain_db_SUITE,
       '-rest_delete_domain_cleans_data_from_mam/1-fun-0-',5...

Report log


ldap_mnesia_24 / ldap_mnesia / f46b973
Reports root/ big
OK: 1486 / Failed: 0 / User-skipped: 400 / Auto-skipped: 0


dynamic_domains_mysql_redis_24 / mysql_redis / f46b973
Reports root/ big
OK: 2685 / Failed: 0 / User-skipped: 201 / Auto-skipped: 0


dynamic_domains_mssql_mnesia_24 / odbc_mssql_mnesia / f46b973
Reports root/ big
OK: 2702 / Failed: 0 / User-skipped: 184 / Auto-skipped: 0


dynamic_domains_pgsql_mnesia_23 / pgsql_mnesia / f46b973
Reports root/ big
OK: 2702 / Failed: 0 / User-skipped: 184 / Auto-skipped: 0


ldap_mnesia_23 / ldap_mnesia / f46b973
Reports root/ big
OK: 1486 / Failed: 0 / User-skipped: 400 / Auto-skipped: 0


pgsql_mnesia_24 / pgsql_mnesia / f46b973
Reports root/ big
OK: 3071 / Failed: 0 / User-skipped: 211 / Auto-skipped: 0


elasticsearch_and_cassandra_24 / elasticsearch_and_cassandra_mnesia / f46b973
Reports root/ big
OK: 1862 / Failed: 0 / User-skipped: 323 / Auto-skipped: 0


mysql_redis_24 / mysql_redis / f46b973
Reports root/ big
OK: 3054 / Failed: 0 / User-skipped: 228 / Auto-skipped: 0


mssql_mnesia_24 / odbc_mssql_mnesia / f46b973
Reports root/ big
OK: 3071 / Failed: 0 / User-skipped: 211 / Auto-skipped: 0


pgsql_mnesia_23 / pgsql_mnesia / f46b973
Reports root/ big
OK: 3071 / Failed: 0 / User-skipped: 211 / Auto-skipped: 0


riak_mnesia_24 / riak_mnesia / f46b973
Reports root/ big
OK: 1715 / Failed: 1 / User-skipped: 326 / Auto-skipped: 0

mod_ping_SUITE:server_ping:server_ping_pong
{error,{{badmatch,[{[<<"localhost">>,mod_ping,ping_response],
          {expected_diff,5},
          {before_story,0},
          {after_story,4}}]},
    [{escalus_mongooseim,post_story_check_metrics,1,
               [{file,"/home/circleci/app/big_tests/_build/default/lib/escalus/src/escalus_mongooseim.erl"},
                {line,74}]},
     {escalus_mongooseim,maybe_check_metrics_post_story,1,
               [{file,"/home/circleci/app/big_tests/_build/default/lib/escalus/src/escalus_mongooseim.erl"},
                {line,51}]},
     {escalus_story,story,4,
            [{file,"/home/circleci/app/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
             {line,75}]},
     {test_server,ts_tc,3,[{file,"test_server.erl"},{line,1783}]},
     {test_server,run_test_case_eval1,6,
            [{file,"test_server.erl"},{line,1292}]},
     {test_server,run_test_case_eval,9,
            [{file,"test_server.erl"},{line,1224}]}]}}

Report log


dynamic_domains_pgsql_mnesia_24 / pgsql_mnesia / f46b973
Reports root/ big
OK: 2702 / Failed: 0 / User-skipped: 184 / Auto-skipped: 0

@chrzaszcz (Member) left a comment

Looks good! Some options could indeed be simplified, but for now I think your description is all we can do. Regarding async_writer_rdbms_pool, it does indeed seem to be dead. Maybe we could clean it up (or start supporting it) in a separate PR.

@chrzaszcz merged commit e5f6687 into master on Oct 13, 2021
@chrzaszcz deleted the doc-mam-options branch on October 13, 2021, 10:13
@Premwoik added this to the 5.1.0 milestone on May 25, 2022
4 participants