
Remove old outgoing pools configs #2101

Merged
merged 28 commits into master from remove-old-pool-config on Oct 22, 2018

Conversation


@fenek fenek commented Oct 3, 2018

This PR removes support for the following config options:

  • riak_server
  • http_connections
  • cassandra_servers
  • elasticsearch_server
  • rdbms_server

Other TODOs:

  • Update Migration guide
  • Scan for possible leftovers

Note:

GoCD tools have been removed, as there is no clear indication that they are still used (in fact, we don't verify them), and updating them would be non-trivial, with no predictable benefit.
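For reference, each removed option maps onto an entry in the unified `outgoing_pools` list. A minimal sketch of the new format, with illustrative values only (see the "Outgoing connections" documentation page for the authoritative syntax):

```erlang
%% Illustrative sketch only: {Type, Scope, Tag, WorkersOptions, ConnectionOptions}
{outgoing_pools, [
    {redis, global, default, [{workers, 10}], [{host, "localhost"}, {port, 6379}]}
]}.
```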

@fenek fenek added this to the 3.1.0++ milestone Oct 3, 2018


codecov bot commented Oct 4, 2018

Codecov Report

Merging #2101 into master will increase coverage by 1.51%.
The diff coverage is 97.43%.


```
@@            Coverage Diff            @@
##           master   #2101      +/-   ##
=========================================
+ Coverage   75.29%   76.8%   +1.51%     
=========================================
  Files         320     319       -1     
  Lines       28589   28015     -574     
=========================================
- Hits        21527   21518       -9     
+ Misses       7062    6497     -565
```
| Impacted Files | Coverage Δ |
| --- | --- |
| src/config/mongoose_config_parser.erl | 74.52% <ø> (+0.91%) ⬆️ |
| src/cassandra/mongoose_cassandra.erl | 78.57% <ø> (+11.9%) ⬆️ |
| src/elasticsearch/mongoose_elasticsearch.erl | 76.92% <ø> (+4.19%) ⬆️ |
| src/ejabberd_sup.erl | 89.65% <ø> (-0.35%) ⬇️ |
| src/mongoose_riak.erl | 95.83% <ø> (+17.04%) ⬆️ |
| src/ejabberd_config.erl | 78.29% <ø> (+3.19%) ⬆️ |
| src/cassandra/mongoose_cassandra_pool.erl | 75% <ø> (+33.33%) ⬆️ |
| src/mongoose_redis.erl | 87.5% <ø> (-6.62%) ⬇️ |
| src/event_pusher/mod_event_pusher_sns.erl | 85.33% <ø> (-0.2%) ⬇️ |
| src/event_pusher/mod_event_pusher_push.erl | 87.67% <ø> (-0.17%) ⬇️ |

... and 30 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 6245d53...1b798f0.



@michalwski michalwski left a comment


Looks good to me. What I like most is that the PR removes almost 2.5k lines and adds only 82!

I had one comment on the code, and here's one more observation.

ejabberd_sm and global_distrib still have their Redis pools configured via module opts. I'm wondering if we should change them to the "new" way, where the pool is specified in the outgoing_pools option and only the name is passed in the configuration of ejabberd_sm and global_distrib.

rel/vars.config.in


@michalwski michalwski left a comment


A lot of good cleaning happened in the PR!

I had some comments on the code, nothing major.

```
{sm_backend, "{redis, [{pool_size, 3}, {worker_config, [{host, \"localhost\"}, {port, 6379}]}]}"},
{auth_method, "internal"},
{outgoing_pools, "{outgoing_pools, [
    {redis, global, gd, [{workers, 1}], []}
```
Contributor

Can we use a more meaningful tag here? For instance: global_dist.

```
{odbc_mssql_mnesia,
    [{dbs, [redis, mssql]},
     {sm_backend, "{mnesia, []}"},
     {auth_method, "rdbms"},
     {rdbms_server_type, "{rdbms_server_type, mssql}."},
     {outgoing_pools, "{outgoing_pools, [
         {redis, global, gd, [{workers, 1}], []},
```
Contributor

I noticed that in all cases the redis pool for global dist has only 1 worker. Is this on purpose? Do our tests work with more than one worker?

Member Author

I don't know. I think it may be revised during another round of ungraceful_reconnection fixing.

Contributor

@kzemek do you recall why there is only 1 worker?

Contributor

As far as I remember, there was simply no point in having more workers for tests.

Member Author

I can try to set it to e.g. 10 and see if it breaks tests somehow. :)

Contributor

I probably don't have to point out that with more than one worker there can be some race conditions, which now don't even have a chance to show up. So I'll stop here. :)
Let's try running the tests with more than just 1 worker.


to:
### Example - New format
Contributor

It looks like the changes to the ejabberd_sm and mod_global_dist configuration need to be reflected in the migration guide.

Member Author

👍 This is something I wanted to take care of after I fix the tests. :)

```
%% Default MongooseIM configuration uses only Mnesia (non-Mnesia extensions are disabled),
%% so no options here are uncommented out of the box.
%% This section includes configuration examples; for comprehensive guide
%% please consult MongooseIM documentation, page "Outgoing connections".
```
Contributor

How about adding a path to the outgoing_connections.md file, or even a URL to readthedocs?

```
    wpool:stop_sup_pool(make_pool_name(Type, Host, Tag))
catch
    C:R ->
        ?ERROR_MSG("event=cannot_stop_pool,type=~p,host=~p,tag=~p,class=~p,reason=~p",
```
Contributor

Why exactly do we need the try ... catch clause here? The most dangerous call_callback function has a try ... catch already. Did you need this while debugging?

Member Author

I assumed that in some weird case e.g. ets:delete might crash, and also wpool internally executes ok = supervisor:terminate_child(?MODULE, Pid), which might fail with a badmatch.

Contributor

All right. Could we add a stack trace to the log here?
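A minimal sketch of what that could look like on the pre-OTP-21 releases tested here (on OTP 21+ the `Class:Reason:Stacktrace` clause head would be used instead of `erlang:get_stacktrace/0`):

```erlang
try
    wpool:stop_sup_pool(make_pool_name(Type, Host, Tag))
catch
    C:R ->
        %% Capture the stacktrace of the crash we just caught (pre-OTP-21 API).
        S = erlang:get_stacktrace(),
        ?ERROR_MSG("event=cannot_stop_pool,type=~p,host=~p,tag=~p,"
                   "class=~p,reason=~p,stacktrace=~p",
                   [Type, Host, Tag, C, R, S])
end
```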

```
fun() ->
    mim_ct_sup:start_link(ejabberd_sup),
    mongoose_wpool:ensure_started(),
    mongoose_wpool_http:init(),
```
Contributor

Instead of calling mongoose_wpool_http:init and then mongoose_wpool:start, the mongoose_wpool:start_configured/1 function could be used.
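For illustration, a hypothetical version of this setup using that function; it assumes `start_configured/1` accepts the same pool tuples as the `outgoing_pools` config entry, and the HTTP connection options shown are examples only:

```erlang
%% Hypothetical sketch: feed the pool specs straight to start_configured/1
%% instead of calling mongoose_wpool_http:init/0 followed by mongoose_wpool:start.
Pools = [{http, global, default, [{workers, 20}],
          [{server, "http://localhost:8080"}]}],
mim_ct_sup:start_link(ejabberd_sup),
mongoose_wpool:ensure_started(),
mongoose_wpool:start_configured(Pools).
```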

```
@@ -68,6 +68,11 @@ init_per_group(redis, Config) ->
init_redis_group(is_redis_running(), Config).

init_redis_group(true, Config) ->
    mongoose_wpool:ensure_started(),
    mongoose_wpool_redis:init(),
```
Contributor

Instead of calling mongoose_wpool_redis:init and then mongoose_wpool:start, the mongoose_wpool:start_configured/1 function could be used.

@fenek fenek added the WIP 🚧 label Oct 9, 2018

```
{{{elasticsearch_server}}}
%% rdbms_server_type specifies what database is used over the RDBMS layer
%% Can take values rdbms, pgsql, mysql
```
Contributor

rdbms should be mssql here. I don't know why it was odbc before.

Member Author

I think in this place it meant "true" ODBC, not quasi-ejabberd-odbc.

@mongoose-im

mongoose-im commented Oct 9, 2018

5742.1 / Erlang 19.3 / small_tests / 2417ab5
Reports root / small


5742.5 / Erlang 19.3 / ldap_mnesia / 2417ab5
Reports root/ big
OK: 1068 / Failed: 0 / User-skipped: 80 / Auto-skipped: 0


5742.3 / Erlang 19.3 / mysql_redis / 2417ab5
Reports root/ big
OK: 2868 / Failed: 0 / User-skipped: 222 / Auto-skipped: 0


5742.2 / Erlang 19.3 / internal_mnesia / 2417ab5
Reports root/ big
OK: 1103 / Failed: 0 / User-skipped: 45 / Auto-skipped: 0


5742.4 / Erlang 19.3 / odbc_mssql_mnesia / 2417ab5
Reports root/ big
OK: 2882 / Failed: 0 / User-skipped: 208 / Auto-skipped: 0


5742.6 / Erlang 19.3 / elasticsearch_and_cassandra_mnesia / 2417ab5
Reports root/ big
OK: 445 / Failed: 0 / User-skipped: 8 / Auto-skipped: 0


5742.8 / Erlang 20.0 / pgsql_mnesia / 2417ab5
Reports root/ big / small
OK: 2927 / Failed: 1 / User-skipped: 176 / Auto-skipped: 0

mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server
```
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,<<"eve64.132006@localhost/res1">>,escalus_tcp,
          <0.18627.3>,
          [{event_manager,<0.18618.3>},
           {server,<<"localhost">>},
           {username,<<"eve64.132006">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.18618.3>},
            {server,<<"localhost">>},
            {username,<<"eve64.132006">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,<<"eve64.132006">>},
           {server,<<"localhost">>},
           {host,<<"localhost">>},
           {port,5222},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,<<"eve64.132006">>},
           {server,<<"localhost">>},
           {password,<<"password">>},
           {port,5222},
           {stream_management,true},
           {stream_id,<<"7A61EBDA8F0F3939">>}]},
        10000],
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,138}]},
     {mod_global_distrib_SUITE,
       '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
       4,
       [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
     {escalus_story,story,4,
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
        {line,72}]},
    ...
```

Report log


5742.9 / Erlang 21.0 / riak_mnesia / 2417ab5
Reports root/ big / small
OK: 1326 / Failed: 0 / User-skipped: 43 / Auto-skipped: 0

```
_ ->
    {extra_config, []}, {redis_extra_config, [no_opts]} | Config]);
Result ->
    ct:pal("Redis check result: ~p", [Result]),
```
Contributor

Is this a leftover?

Member Author

I was planning to leave it, just in case. However, if it's annoying, I may remove it. :)

Contributor

I noticed only now that it's printed only on failure. That's fine.

@mongoose-im

mongoose-im commented Oct 11, 2018

5751.1 / Erlang 19.3 / small_tests / 97dd237
Reports root / small


5751.3 / Erlang 19.3 / mysql_redis / 97dd237
Reports root/ big
OK: 2868 / Failed: 0 / User-skipped: 222 / Auto-skipped: 0


5751.6 / Erlang 19.3 / elasticsearch_and_cassandra_mnesia / 97dd237
Reports root/ big
OK: 445 / Failed: 0 / User-skipped: 8 / Auto-skipped: 0


5751.2 / Erlang 19.3 / internal_mnesia / 97dd237
Reports root/ big
OK: 1116 / Failed: 1 / User-skipped: 45 / Auto-skipped: 0

mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server
```
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,<<"eve63.878499@localhost/res1">>,escalus_tcp,
          <0.12832.1>,
          [{event_manager,<0.12823.1>},
           {server,<<"localhost">>},
           {username,<<"eve63.878499">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.12823.1>},
            {server,<<"localhost">>},
            {username,<<"eve63.878499">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,<<"eve63.878499">>},
           {server,<<"localhost">>},
           {host,<<"localhost">>},
           {port,5222},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,<<"eve63.878499">>},
           {server,<<"localhost">>},
           {password,<<"password">>},
           {port,5222},
           {stream_management,true},
           {stream_id,<<"98632AB2338F4EE9">>}]},
        10000],
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,138}]},
     {mod_global_distrib_SUITE,
       '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
       4,
       [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
     {escalus_story,story,4,
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
        {line,72}]},
    ...
```

Report log


5751.5 / Erlang 19.3 / ldap_mnesia / 97dd237
Reports root/ big
OK: 1068 / Failed: 0 / User-skipped: 80 / Auto-skipped: 0


5751.4 / Erlang 19.3 / odbc_mssql_mnesia / 97dd237
Reports root/ big
OK: 2882 / Failed: 0 / User-skipped: 208 / Auto-skipped: 0


5751.8 / Erlang 20.0 / pgsql_mnesia / 97dd237
Reports root/ big / small
OK: 2914 / Failed: 0 / User-skipped: 176 / Auto-skipped: 0


5751.9 / Erlang 21.0 / riak_mnesia / 97dd237
Reports root/ big / small
OK: 1351 / Failed: 3 / User-skipped: 43 / Auto-skipped: 0

mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server
```
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,<<"eve6.205847@localhost/res1">>,escalus_tcp,
          <0.22176.1>,
          [{event_manager,<0.22166.1>},
           {server,<<"localhost">>},
           {username,<<"eve6.205847">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.22166.1>},
            {server,<<"localhost">>},
            {username,<<"eve6.205847">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,<<"eve6.205847">>},
           {server,<<"localhost">>},
           {host,<<"localhost">>},
           {port,5222},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,<<"eve6.205847">>},
           {server,<<"localhost">>},
           {password,<<"password">>},
           {port,5222},
           {stream_management,true},
           {stream_id,<<"6276FE363DBE4ADA">>}]},
        10000],
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,138}]},
     {mod_global_distrib_SUITE,
       '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
       4,
       [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
     {escalus_story,story,4,
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
        {line,72}]},
     {tes...
```

Report log

mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server
```
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,<<"eve49.776877@localhost/res1">>,escalus_tcp,
          <0.22751.1>,
          [{event_manager,<0.22741.1>},
           {server,<<"localhost">>},
           {username,<<"eve49.776877">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.22741.1>},
            {server,<<"localhost">>},
            {username,<<"eve49.776877">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,<<"eve49.776877">>},
           {server,<<"localhost">>},
           {host,<<"localhost">>},
           {port,5222},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,<<"eve49.776877">>},
           {server,<<"localhost">>},
           {password,<<"password">>},
           {port,5222},
           {stream_management,true},
           {stream_id,<<"DAD9928E38C09F35">>}]},
        10000],
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,138}]},
     {mod_global_distrib_SUITE,
       '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
       4,
       [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
     {escalus_story,story,4,
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
        {line,72}]},
    ...
```

Report log

mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server
```
{error,
  {timeout_when_waiting_for_stanza,
    [{escalus_client,wait_for_stanza,
       [{client,<<"eve69.72195@localhost/res1">>,escalus_tcp,
          <0.23078.1>,
          [{event_manager,<0.23068.1>},
           {server,<<"localhost">>},
           {username,<<"eve69.72195">>},
           {resource,<<"res1">>}],
          [{event_client,
             [{event_manager,<0.23068.1>},
            {server,<<"localhost">>},
            {username,<<"eve69.72195">>},
            {resource,<<"res1">>}]},
           {resource,<<"res1">>},
           {username,<<"eve69.72195">>},
           {server,<<"localhost">>},
           {host,<<"localhost">>},
           {port,5222},
           {auth,{escalus_auth,auth_plain}},
           {wspath,undefined},
           {username,<<"eve69.72195">>},
           {server,<<"localhost">>},
           {password,<<"password">>},
           {port,5222},
           {stream_management,true},
           {stream_id,<<"1060BFC900F8FE91">>}]},
        10000],
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
        {line,138}]},
     {mod_global_distrib_SUITE,
       '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
       4,
       [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
     {escalus_story,story,4,
       [{file,
          "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
        {line,72}]},
     {tes...
```

Report log


5751.9 / Erlang 21.0 / riak_mnesia / 97dd237
Reports root/ big / small
OK: 1326 / Failed: 0 / User-skipped: 43 / Auto-skipped: 0

```
{redis, global, Tag, WorkersOptions, ConnectionOptions}
]}.
```
The `Tag` parameter can be set only to `default` for session backend.
Contributor

The Tag parameter can only be set to default for a session backend?
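Putting the quoted fragments together, a hedged sketch of the session backend migration; the new `sm_backend` value is an assumption based on the guide excerpt above, and the host/port values are illustrative:

```erlang
%% Old format (pool options inline in sm_backend):
{sm_backend, "{redis, [{pool_size, 3}, {worker_config, [{host, \"localhost\"}, {port, 6379}]}]}"}.

%% New format (assumed shape): the pool moves to outgoing_pools under the
%% `default` tag, and sm_backend no longer carries connection options.
{sm_backend, "{redis, []}"}.
{outgoing_pools, [
    {redis, global, default, [{workers, 3}], [{host, "localhost"}, {port, 6379}]}
]}.
```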


## mod_global_distrib

If you had `mod_global_distrib` configured like below:
Contributor

If you had mod_global_distrib configured in the following way:

```
]}
```

The redis pool needs to be defined inside `outgoing_pools` like below:
Contributor

The redis pool needs to be defined inside outgoing_pools:
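A hedged sketch of such a pool definition, reusing the `gd` tag from the test specs quoted earlier; the connection options are illustrative, and how `mod_global_distrib` then refers to the pool by tag is not shown here:

```erlang
{outgoing_pools, [
    %% `gd` matches the tag used for global distribution in the test specs above.
    {redis, global, gd, [{workers, 1}], [{host, "localhost"}, {port, 6379}]}
]}.
```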

@michalwski michalwski merged commit 1dea889 into master Oct 22, 2018
@michalwski michalwski deleted the remove-old-pool-config branch October 22, 2018 12:24