DISABLE_USER_SYNC (options.disableUserSync) does not work #523

Closed
jplindquist opened this issue Aug 14, 2018 · 3 comments

Labels
bug M-T: A confirmed bug report. Issues are confirmed when the reproduction steps are documented

Comments

@jplindquist (Contributor)

Description

Environment variable DISABLE_USER_SYNC (options.disableUserSync) does not work when set
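
For context, a rough sketch of how the environment variable is expected to map onto the adapter option (the adapter itself is written in CoffeeScript and the exact option handling may differ by version; this is not the actual source):

// Sketch only: DISABLE_USER_SYNC is expected to feed options.disableUserSync.
const options = {
  token: process.env.HUBOT_SLACK_TOKEN,
  disableUserSync: process.env.DISABLE_USER_SYNC != null, // any exported value should turn the sync off
};
// With DISABLE_USER_SYNC exported, options.disableUserSync should be true and the
// full workspace user list should not be copied into the brain on connect.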

What type of issue is this? (place an x in one of the [ ])

  • [x] bug
  • [ ] enhancement (feature request)
  • [ ] question
  • [ ] documentation related
  • [ ] testing related
  • [ ] discussion

Requirements (place an x in each of the [ ])

  • [x] I've read and understood the Contributing guidelines and have done my best to follow them.
  • [x] I've read and agree to the Code of Conduct.
  • [x] I've searched for any related issues and avoided creating a duplicate issue.

Bug Report

Filling out the following details about bugs will help us solve your issue sooner.

Reproducible in:

hubot-slack version: v4.5.4

node version: 8.8.1

OS version(s): CentOS 7, OS X

Steps to reproduce:

  1. export HUBOT_SLACK_TOKEN=<slack-token>
  2. export DISABLE_USER_SYNC='true'
  3. Start hubot with slack adapter: ./bin/hubot --name hubot --adapter slack
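
A small hubot script can confirm what the adapter actually parsed (sketch only; it assumes the adapter exposes its parsed options as robot.adapter.options, which may differ between hubot-slack versions):

// scripts/debug-user-sync.js: hypothetical debugging helper, not part of hubot-slack.
// Assumes the slack adapter keeps its parsed options on robot.adapter.options.
module.exports = (robot) => {
  const opts = robot.adapter.options || {};
  robot.logger.info(`disableUserSync parsed as: ${opts.disableUserSync}`);
};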

Expected result:

Hubot does not sync all user data on start (user syncing is disabled)

Actual result:

Hubot syncs all user data on start. In our case this produces a very long string; when it is loaded and parsed by another plugin, hubot-ldap-auth, it causes a JavaScript heap error when trying to parse all the user data in the workspace.

Attachments:

<--- Last few GCs --->

[9991:0x105000000]    30026 ms: Mark-sweep 1401.4 (1516.5) -> 1401.0 (1522.5) MB, 338.0 / 0.0 ms  allocation failure GC in old space requested
[9991:0x105000000]    30345 ms: Mark-sweep 1401.0 (1522.5) -> 1401.0 (1484.0) MB, 318.9 / 0.0 ms  last resort GC in old space requested
[9991:0x105000000]    30660 ms: Mark-sweep 1401.0 (1484.0) -> 1401.0 (1465.0) MB, 315.4 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x36a2bd025e91 <JSObject>
    1: fromString(aka fromString) [buffer.js:~296] [pc=0x311a21392ec3](this=0x36a245082311 <undefined>,string=0x36a278cf2559 <Very long string[14193228]>,encoding=0x36a2bd036569 <String[4]: utf8>)
    3: from [buffer.js:177] [bytecode=0x36a2fa33e299 offset=11](this=0x36a2ce9b0739 <JSFunction Buffer (sfi = 0x36a2bd070781)>,value=0x36a278cf2559 <Very long string[14193228]>,encodingOrOffset=0x36a2b...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 4: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 5: v8::internal::String::SlowFlatten(v8::internal::Handle<v8::internal::ConsString>, v8::internal::PretenureFlag) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 6: v8::String::WriteUtf8(char*, int, int*, int) const [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 7: node::StringBytes::Write(v8::Isolate*, char*, unsigned long, v8::Local<v8::Value>, node::encoding, int*) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 8: node::Buffer::New(v8::Isolate*, v8::Local<v8::String>, node::encoding) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
 9: node::Buffer::(anonymous namespace)::CreateFromString(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/jlindquist/nvm/versions/node/v8.8.1/bin/node]
10: 0x311a2134e4a7
Abort trap: 6
@aoberoi added the bug label (M-T: A confirmed bug report; issues are confirmed when the reproduction steps are documented) on Aug 14, 2018
@aoberoi (Contributor) commented Aug 14, 2018

Thanks for reporting! Agreed that this is a bug. I also think this is very closely related to #479.

Your PR seems like a good quick fix, but it's a little hacky. It seems like we need to clean up and document the purpose of isLoaded, brainIsLoaded, and needsUserListSync. I'd prefer a solution that is a bit more holistic: getting rid of any state we don't need, and potentially also solving (or moving toward a solution for) issue #479.
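
For example, the kind of consolidation meant here could reduce the separate booleans to a single derived check (illustrative sketch only, not the adapter's actual code):

// Illustrative sketch, not hubot-slack source: replace isLoaded / brainIsLoaded /
// needsUserListSync with one decision derived from the option and the brain state.
function shouldSyncUserList(options, brainLoaded) {
  // Only sync once the brain has loaded, and never when the user has disabled it.
  return brainLoaded && !options.disableUserSync;
}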

@sionleroux (Contributor) commented, quoting the report above:

"it causes a JavaScript heap error when trying to parse all the user data in the workspace"

Similar situation here: we have 10k+ Slack users. For now we've scaled up the Redis instance that persists this data, so we don't get a heap error, but the memory usage of the Hubot Node.js app can reach around 800 MB, which is more than the 512 MB you get on the lower-tier Heroku dynos. The logs are full of Error R14 (Memory quota exceeded) and I'm not sure what bad things are happening that I'm not seeing 😆

It would be ideal if:

  • DISABLE_USER_SYNC worked so that it doesn't sync all users to the brain (scope of this issue)
  • it wouldn't track users at all apart from the ones it actually interacts with (not sure if this is out of scope and should go in a different issue); a rough sketch of this is below
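
A rough sketch of that second idea (on-demand lookups instead of a bulk sync), using the Slack Web API's users.info method; the package and wiring here are assumptions, not existing hubot-slack behaviour:

// Sketch only: fetch user info lazily and cache it instead of syncing everyone up front.
// Assumes the standalone @slack/web-api client; older setups bundle the same
// WebClient in @slack/client.
const { WebClient } = require("@slack/web-api");

const web = new WebClient(process.env.HUBOT_SLACK_TOKEN);
const userCache = new Map();

async function getUser(id) {
  if (!userCache.has(id)) {
    const res = await web.users.info({ user: id }); // documented Web API method
    userCache.set(id, res.user);
  }
  return userCache.get(id);
}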

@aoberoi what if #524 gets merged now as a quick fix (because basically right now it doesn't work at all) and then we work towards reconciling the different load/sync states in #479?

I guess for now I will downgrade back to 4.2.2.

@aoberoi (Contributor) commented Oct 1, 2018

Fair enough, I think the two-step solution @sinisterstuf described is acceptable and better than continuing to wait for the holistic solution.
