Remove old outgoing pools configs #2101
Conversation
Force-pushed from 8a36525 to def243c.
Codecov Report

```
@@            Coverage Diff            @@
##           master    #2101     +/-   ##
=========================================
+ Coverage   75.29%    76.8%   +1.51%
=========================================
  Files         320      319       -1
  Lines       28589    28015     -574
=========================================
- Hits        21527    21518       -9
+ Misses       7062     6497     -565
```

Continue to review the full report at Codecov.
Looks good to me. What I like most is that the PR removes almost 2.5k lines and adds only 82!
I had one comment on the code, and here's one more observation:
`ejabberd_sm` and `global_distrib` still have their Redis pools configured via module opts. I'm wondering if we could change them to the "new" way, where the pool is specified in the `outgoing_pools` option and only the pool name is passed in the configuration of `ejabberd_sm` and `global_distrib`.
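A rough sketch of what the suggested change could look like, based on the `{Type, Host, Tag, WorkersOptions, ConnectionOptions}` pool format documented in this PR. The connection options and the way `ejabberd_sm` would reference the pool are illustrative assumptions, not the final API:

```erlang
%% The Redis pool is defined once, in outgoing_pools:
{outgoing_pools, [
    {redis, global, default, [{workers, 10}],
            [{host, "localhost"}, {port, 6379}]}
]}.

%% ejabberd_sm would then only refer to the pool by name/tag
%% (hypothetical shape of the option):
{sm_backend, {redis, []}}.
```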
Force-pushed from eacb063 to 06771ff.
A lot of good cleanup happened in this PR!
I had some comments on the code, nothing major.
big_tests/test.config (Outdated)
```erlang
{sm_backend, "{redis, [{pool_size, 3}, {worker_config, [{host, \"localhost\"}, {port, 6379}]}]}"},
{auth_method, "internal"},
{outgoing_pools, "{outgoing_pools, [
    {redis, global, gd, [{workers, 1}], []}
```
Can we use a more meaningful tag here? For instance: `global_dist`.
big_tests/test.config (Outdated)
```erlang
{odbc_mssql_mnesia,
    [{dbs, [redis, mssql]},
     {sm_backend, "{mnesia, []}"},
     {auth_method, "rdbms"},
     {rdbms_server_type, "{rdbms_server_type, mssql}."},
     {outgoing_pools, "{outgoing_pools, [
         {redis, global, gd, [{workers, 1}], []},
```
I noticed that in all cases the Redis pool for global distribution has only 1 worker. Is this on purpose? Do our tests work with more than one worker?
I don't know. I think it may be revised during another round of `ungraceful_reconnection` fixing.
@kzemek do you recall why there is only 1 worker?
As far as I remember, there was simply no point in having more workers for tests
I can try to set it to e.g. 10 and see if it breaks tests somehow. :)
I probably don't have to point out that with more than one worker there can be race conditions, which currently don't even have a chance to show up. So I'll stop here :)
Let's try running the tests with more than just 1 worker.
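Trying that would be a one-line change to the pool spec in the test config (a sketch; whether the suites actually tolerate it is exactly the open question here):

```erlang
%% Bump the global distrib Redis pool from 1 worker to e.g. 10,
%% to give potential race conditions a chance to surface in tests.
{redis, global, gd, [{workers, 10}], []}
```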
doc fragment (Outdated)
```
to:

### Example - New format
```
It looks like the changes to the `ejabberd_sm` and `mod_global_dist` configuration need to be reflected in the migration guide.
👍 This is something I wanted to take care of after I fix the tests. :)
rel/files/mongooseim.cfg (Outdated)
```erlang
%% Default MongooseIM configuration uses only Mnesia (non-Mnesia extensions are disabled),
%% so no options here are uncommented out of the box.
%% This section includes configuration examples; for comprehensive guide
%% please consult MongooseIM documentation, page "Outgoing connections".
```
How about adding a path to the `outgoing_connections.md` file, or even a URL to readthedocs?
src/wpool/mongoose_wpool.erl (Outdated)
```erlang
    wpool:stop_sup_pool(make_pool_name(Type, Host, Tag))
catch
    C:R ->
        ?ERROR_MSG("event=cannot_stop_pool,type=~p,host=~p,tag=~p,class=~p,reason=~p",
```
Why exactly do we need the `try ... catch` clause here? The most dangerous function, `call_callback`, has a `try ... catch` already. Did you need this while debugging?
I assumed that in some weird case e.g. `ets:delete` might crash, and also that `wpool` internally executes `ok = supervisor:terminate_child(?MODULE, Pid)`, which might fail with a `badmatch`.
All right. Could we add a stack trace to the log here?
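One way to capture the stack trace in that log line, sketched with the OTP 21+ `Class:Reason:Stacktrace` catch syntax (on OTP 20 and earlier, `erlang:get_stacktrace/0` would be needed instead, and this PR still targets Erlang 19–21):

```erlang
try
    wpool:stop_sup_pool(make_pool_name(Type, Host, Tag))
catch
    C:R:Stacktrace ->
        %% Same structured log line as before, extended with the stacktrace.
        ?ERROR_MSG("event=cannot_stop_pool,type=~p,host=~p,tag=~p,"
                   "class=~p,reason=~p,stacktrace=~p",
                   [Type, Host, Tag, C, R, Stacktrace])
end.
```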
test/auth_http_SUITE.erl (Outdated)
```erlang
fun() ->
    mim_ct_sup:start_link(ejabberd_sup),
    mongoose_wpool:ensure_started(),
    mongoose_wpool_http:init(),
```
Instead of calling `mongoose_wpool_http:init` and then `mongoose_wpool:start`, the `mongoose_wpool:start_configured/1` function could be used.
test/ejabberd_sm_SUITE.erl (Outdated)
```erlang
@@ -68,6 +68,11 @@ init_per_group(redis, Config) ->
    init_redis_group(is_redis_running(), Config).

init_redis_group(true, Config) ->
    mongoose_wpool:ensure_started(),
    mongoose_wpool_redis:init(),
```
Instead of calling `mongoose_wpool_redis:init` and then `mongoose_wpool:start`, the `mongoose_wpool:start_configured/1` function could be used.
rel/files/mongooseim.cfg (Outdated)
```erlang
{{{elasticsearch_server}}}
%% rdbms_server_type specifies what database is used over the RDBMS layer
%% Can take values rdbms, pgsql, mysql
```
`rdbms` should be `mssql` here. I don't know why it was `odbc` before.
I think in this place it meant "true" ODBC, not quasi-ejabberd-odbc.
Travis builds for 2417ab5: 5742.1 small_tests, 5742.5 ldap_mnesia, 5742.3 mysql_redis, 5742.2 internal_mnesia, 5742.4 odbc_mssql_mnesia, 5742.6 elasticsearch_and_cassandra_mnesia (all Erlang 19.3), 5742.8 pgsql_mnesia (Erlang 20.0), 5742.9 riak_mnesia (Erlang 21.0).

Failure in `mod_global_distrib_SUITE:mod_global_distrib:test_pm_with_ungraceful_reconnection_to_different_server`:

```erlang
{error,
 {timeout_when_waiting_for_stanza,
  [{escalus_client,wait_for_stanza,
    [{client,<<"eve64.132006@localhost/res1">>,escalus_tcp,
      <0.18627.3>,
      [{event_manager,<0.18618.3>},
       {server,<<"localhost">>},
       {username,<<"eve64.132006">>},
       {resource,<<"res1">>}],
      [{event_client,
        [{event_manager,<0.18618.3>},
         {server,<<"localhost">>},
         {username,<<"eve64.132006">>},
         {resource,<<"res1">>}]},
       {resource,<<"res1">>},
       {username,<<"eve64.132006">>},
       {server,<<"localhost">>},
       {host,<<"localhost">>},
       {port,5222},
       {auth,{escalus_auth,auth_plain}},
       {wspath,undefined},
       {username,<<"eve64.132006">>},
       {server,<<"localhost">>},
       {password,<<"password">>},
       {port,5222},
       {stream_management,true},
       {stream_id,<<"7A61EBDA8F0F3939">>}]},
     10000],
    [{file,
      "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_client.erl"},
     {line,138}]},
   {mod_global_distrib_SUITE,
    '-test_pm_with_ungraceful_reconnection_to_different_server/1-fun-0-',
    4,
    [{file,"mod_global_distrib_SUITE.erl"},{line,607}]},
   {escalus_story,story,4,
    [{file,
      "/home/travis/build/esl/MongooseIM/big_tests/_build/default/lib/escalus/src/escalus_story.erl"},
     {line,72}]},
   ...
```
```erlang
_ ->
    {extra_config, []}, {redis_extra_config, [no_opts]} | Config]);
Result ->
    ct:pal("Redis check result: ~p", [Result]),
```
Is this a leftover?
I was planning to leave it, just in case. However, if it's annoying, I may remove it. :)
I noticed only now that it's printed only on failure. That's fine.
Travis builds for 97dd237: 5751.1 small_tests, 5751.3 mysql_redis, 5751.6 elasticsearch_and_cassandra_mnesia, 5751.2 internal_mnesia, 5751.5 ldap_mnesia, 5751.4 odbc_mssql_mnesia (all Erlang 19.3), 5751.8 pgsql_mnesia (Erlang 20.0), 5751.9 riak_mnesia (Erlang 21.0).

The same `mod_global_distrib_SUITE` failure, `test_pm_with_ungraceful_reconnection_to_different_server` timing out with `timeout_when_waiting_for_stanza` in `escalus_client:wait_for_stanza` (escalus_client.erl line 138, via `escalus_story:story/4`), reproduced in several of these runs; the stack traces are identical apart from the generated usernames, pids, and stream ids.
```erlang
    {redis, global, Tag, WorkersOptions, ConnectionOptions}
]}.
```
The `Tag` parameter can be set only to `default` for session backend.
The `Tag` parameter can only be set to `default` for a session backend?
doc/migrations/3.1.1_3.1.1++.md (Outdated)
```markdown
## mod_global_distrib

If you had `mod_global_distrib` configured like below:
```
If you had `mod_global_distrib` configured in the following way:
doc/migrations/3.1.1_3.1.1++.md (Outdated)
```
]}
```
The redis pool needs to be defined inside `outgoing_pools` like below:
The redis pool needs to be defined inside `outgoing_pools`:
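Based on the `{Type, Host, Tag, WorkersOptions, ConnectionOptions}` pool format documented in this PR, the migrated pool definition might look roughly like this. The worker count, connection options, and the `global_dist` tag (suggested earlier in this review) are illustrative, not prescribed values:

```erlang
%% The Redis pool previously configured inside mod_global_distrib opts
%% moves into outgoing_pools:
{outgoing_pools, [
    {redis, global, global_dist, [{workers, 10}],
            [{host, "localhost"}, {port, 6379}]}
]}.
```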
This PR removes support for the following config options:
Other TODOs:
Note:
GoCD tools have been removed, as there is no clear indication that they are still used (in fact, we don't verify them), and updating them would be non-trivial, with no predictable benefit.