Re-evaluate --use-memory-manager #2581

Open
sporksmith opened this issue Nov 30, 2022 · 12 comments
@sporksmith (Contributor) commented Nov 30, 2022

We currently have two paths to access managed process memory. The default (--use-memory-manager=true) mmaps most of managed process memory into shared memory (/dev/shm) so that shadow can access it directly. The fallback mechanism (--use-memory-manager=false, or for accessing regions we haven't mmapped) is to use process_vm_readv and process_vm_writev to read and write the memory.
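
For a concrete picture of the fallback path, here is a minimal sketch in Rust of a process_vm_readv-based read using the libc crate. The helper name read_remote is made up for illustration; this is not Shadow's actual implementation, and process_vm_writev is the mirror image for writes.

```rust
use std::io::{Error, Result};

/// Hypothetical helper: copy `buf.len()` bytes starting at `remote_addr` in the
/// managed process `pid` into a local buffer, one syscall per transfer.
fn read_remote(pid: libc::pid_t, remote_addr: usize, buf: &mut [u8]) -> Result<usize> {
    let local = libc::iovec {
        iov_base: buf.as_mut_ptr() as *mut libc::c_void,
        iov_len: buf.len(),
    };
    let remote = libc::iovec {
        iov_base: remote_addr as *mut libc::c_void,
        iov_len: buf.len(),
    };
    // No shared mapping is required, but every access pays a syscall plus a copy.
    let n = unsafe { libc::process_vm_readv(pid, &local, 1, &remote, 1, 0) };
    if n < 0 {
        Err(Error::last_os_error())
    } else {
        Ok(n as usize)
    }
}
```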

In early iterations of shadow 2.0, the mapping path was a bigger win, because the fallback mechanisms were slower (e.g. transferring control back to the plugin and asking it to copy chunks of memory into a fixed-size shared memory buffer for us in "preload" mode; PTRACE_PEEKDATA or file operations on /proc/.../mem in "ptrace" mode).

Last time we looked at it, mapping still gave us a moderate win in I/O-intensive microbenchmarks, but I'm not sure it ends up being a significant win for more realistic simulations.

Downsides of using the mapping path:

  • It requires potentially a lot of space in /dev/shm, depending on the size of the simulation. This can require some configuration of the host OS and/or container.
  • When we run out of space in /dev/shm, things can fail in difficult-to-debug ways. We try to detect when we've run out of space and fail loudly (e.g. ea80705), but in Tor we recently ran into some difficult-to-debug failures on both v2.2.0 and v2.3.0. Maybe we can do more to detect this situation and fail loudly (a rough sketch of such a check follows this list), but it's always going to be a bit tricky.
  • In general, maintaining Rust soundness for this path is a bit tricky. We haven't run into issues here yet, but I wouldn't be surprised if there were some lurking dragons.
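
As an illustration of the kind of check that could make this failure mode louder, here is a rough sketch (my assumption, not existing Shadow code; the helper names shm_bytes_available and warn_if_shm_low are invented) that queries free space on /dev/shm with statvfs before creating another mapping:

```rust
use std::ffi::CString;
use std::io::Error;
use std::mem::MaybeUninit;

/// Hypothetical check: return the number of bytes currently available on /dev/shm.
fn shm_bytes_available() -> Result<u64, Error> {
    let path = CString::new("/dev/shm").unwrap();
    let mut stat = MaybeUninit::<libc::statvfs>::uninit();
    let rc = unsafe { libc::statvfs(path.as_ptr(), stat.as_mut_ptr()) };
    if rc != 0 {
        return Err(Error::last_os_error());
    }
    let stat = unsafe { stat.assume_init() };
    // f_bavail counts blocks available to unprivileged users.
    Ok(stat.f_bavail as u64 * stat.f_frsize as u64)
}

/// Warn loudly if the next mapping of `needed` bytes is unlikely to fit.
fn warn_if_shm_low(needed: u64) {
    match shm_bytes_available() {
        Ok(avail) if avail < needed => {
            eprintln!(
                "warning: /dev/shm has only {avail} bytes free but {needed} are needed; \
                 the simulation may fail in hard-to-debug ways"
            );
        }
        Ok(_) => {}
        Err(e) => eprintln!("warning: couldn't stat /dev/shm: {e}"),
    }
}
```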

It may be worth doing some benchmarks on a more realistic simulation (e.g. tor) to check how much performance benefit the mapping path gives us there. If it's not much, we might want to change the default, and maybe even consider removing that code.

sporksmith added the "Type: Bug (Error or flaw producing unexpected results)" label Nov 30, 2022
@sporksmith (Contributor, Author)

Quick ad-hoc benchmark using the tgen tests.

v2.3.0:

2022-11-30 11:45:28,707 [INFO] calling 'ctest -j12 --timeout 20 -R tgen -C extra'
Test project /home/jnewsome/projects/shadow/dev/build
      Start 137: tgen-duration-1mbit_300ms-1stream-shadow
      Start 138: tgen-duration-10mbit_200ms-1stream-shadow
      Start 139: tgen-duration-100mbit_100ms-1stream-shadow
      Start 140: tgen-duration-1gbit_10ms-1stream-shadow
      Start 141: tgen-duration-1mbit_300ms-10streams-shadow
      Start 142: tgen-duration-10mbit_200ms-10streams-shadow
      Start 143: tgen-duration-100mbit_100ms-10streams-shadow
      Start 144: tgen-duration-1gbit_10ms-10streams-shadow
      Start 145: tgen-duration-1mbit_300ms-100streams-shadow
      Start 146: tgen-duration-10mbit_200ms-100streams-shadow
      Start 147: tgen-duration-100mbit_100ms-100streams-shadow
      Start 148: tgen-duration-1gbit_10ms-100streams-shadow
 1/40 Test #137: tgen-duration-1mbit_300ms-1stream-shadow ..............   Passed    0.44 sec
      Start 149: tgen-duration-1mbit_300ms-1000streams-shadow
 2/40 Test #141: tgen-duration-1mbit_300ms-10streams-shadow ............   Passed    0.70 sec
      Start 150: tgen-duration-10mbit_200ms-1000streams-shadow
 3/40 Test #145: tgen-duration-1mbit_300ms-100streams-shadow ...........   Passed    1.36 sec
      Start 151: tgen-duration-100mbit_100ms-1000streams-shadow
 4/40 Test #138: tgen-duration-10mbit_200ms-1stream-shadow .............   Passed    1.87 sec
      Start 152: tgen-duration-1gbit_10ms-1000streams-shadow
 5/40 Test #139: tgen-duration-100mbit_100ms-1stream-shadow ............   Passed    2.59 sec
      Start 153: tgen-size-1mbit_300ms-1stream_1b_1000x-shadow
 6/40 Test #142: tgen-duration-10mbit_200ms-10streams-shadow ...........   Passed    3.07 sec
      Start 154: tgen-size-10mbit_200ms-1stream_1b_1000x-shadow
 7/40 Test #146: tgen-duration-10mbit_200ms-100streams-shadow ..........   Passed    5.21 sec
      Start 155: tgen-size-100mbit_100ms-1stream_1b_1000x-shadow
 8/40 Test #149: tgen-duration-1mbit_300ms-1000streams-shadow ..........   Passed    6.53 sec
      Start 156: tgen-size-1gbit_10ms-1stream_1b_1000x-shadow
 9/40 Test #154: tgen-size-10mbit_200ms-1stream_1b_1000x-shadow ........   Passed   10.90 sec
      Start 157: tgen-size-1mbit_300ms-1stream_1kib_100x-shadow
10/40 Test #155: tgen-size-100mbit_100ms-1stream_1b_1000x-shadow .......   Passed    9.53 sec
      Start 158: tgen-size-10mbit_200ms-1stream_1kib_100x-shadow
11/40 Test #143: tgen-duration-100mbit_100ms-10streams-shadow ..........   Passed   15.59 sec
      Start 159: tgen-size-100mbit_100ms-1stream_1kib_100x-shadow
12/40 Test #156: tgen-size-1gbit_10ms-1stream_1b_1000x-shadow ..........   Passed    8.63 sec
      Start 160: tgen-size-1gbit_10ms-1stream_1kib_100x-shadow
13/40 Test #153: tgen-size-1mbit_300ms-1stream_1b_1000x-shadow .........   Passed   13.06 sec
      Start 161: tgen-size-1mbit_300ms-1stream_1mib_10x-shadow
14/40 Test #160: tgen-size-1gbit_10ms-1stream_1kib_100x-shadow .........   Passed    1.27 sec
      Start 162: tgen-size-10mbit_200ms-1stream_1mib_10x-shadow
15/40 Test #158: tgen-size-10mbit_200ms-1stream_1kib_100x-shadow .......   Passed    2.43 sec
      Start 163: tgen-size-100mbit_100ms-1stream_1mib_10x-shadow
16/40 Test #159: tgen-size-100mbit_100ms-1stream_1kib_100x-shadow ......   Passed    1.73 sec
      Start 164: tgen-size-1gbit_10ms-1stream_1mib_10x-shadow
17/40 Test #157: tgen-size-1mbit_300ms-1stream_1kib_100x-shadow ........   Passed    3.63 sec
      Start 165: tgen-size-1mbit_300ms-10streams_1mib_10x-shadow
18/40 Test #164: tgen-size-1gbit_10ms-1stream_1mib_10x-shadow ..........   Passed    3.32 sec
      Start 166: tgen-size-10mbit_200ms-10streams_1mib_10x-shadow
19/40 Test #163: tgen-size-100mbit_100ms-1stream_1mib_10x-shadow .......   Passed    3.90 sec
      Start 167: tgen-size-100mbit_100ms-10streams_1mib_10x-shadow
20/40 Test #150: tgen-duration-10mbit_200ms-1000streams-shadow .........   Passed   20.62 sec
      Start 168: tgen-size-1gbit_10ms-10streams_1mib_10x-shadow
21/40 Test #162: tgen-size-10mbit_200ms-1stream_1mib_10x-shadow ........   Passed    7.11 sec
      Start 169: tgen-size-1mbit_300ms-100streams_1kib_100x-shadow
22/40 Test #168: tgen-size-1gbit_10ms-10streams_1mib_10x-shadow ........   Passed    3.30 sec
      Start 170: tgen-size-10mbit_200ms-100streams_1kib_100x-shadow
23/40 Test #147: tgen-duration-100mbit_100ms-100streams-shadow .........   Passed   24.82 sec
      Start 171: tgen-size-100mbit_100ms-100streams_1kib_100x-shadow
24/40 Test #167: tgen-size-100mbit_100ms-10streams_1mib_10x-shadow .....   Passed    4.04 sec
      Start 172: tgen-size-1gbit_10ms-100streams_1kib_100x-shadow
25/40 Test #161: tgen-size-1mbit_300ms-1stream_1mib_10x-shadow .........   Passed    9.60 sec
      Start 173: tgen-size-1mbit_300ms-1000streams_1b_1000x-shadow
26/40 Test #172: tgen-size-1gbit_10ms-100streams_1kib_100x-shadow ......   Passed    1.08 sec
      Start 174: tgen-size-10mbit_200ms-1000streams_1b_1000x-shadow
27/40 Test #171: tgen-size-100mbit_100ms-100streams_1kib_100x-shadow ...   Passed    1.51 sec
      Start 175: tgen-size-100mbit_100ms-1000streams_1b_1000x-shadow
28/40 Test #170: tgen-size-10mbit_200ms-100streams_1kib_100x-shadow ....   Passed    2.16 sec
      Start 176: tgen-size-1gbit_10ms-1000streams_1b_1000x-shadow
29/40 Test #169: tgen-size-1mbit_300ms-100streams_1kib_100x-shadow .....   Passed    3.41 sec
30/40 Test #165: tgen-size-1mbit_300ms-10streams_1mib_10x-shadow .......   Passed   10.24 sec
31/40 Test #166: tgen-size-10mbit_200ms-10streams_1mib_10x-shadow ......   Passed    7.27 sec
32/40 Test #175: tgen-size-100mbit_100ms-1000streams_1b_1000x-shadow ...   Passed    6.97 sec
33/40 Test #176: tgen-size-1gbit_10ms-1000streams_1b_1000x-shadow ......   Passed    6.61 sec
34/40 Test #174: tgen-size-10mbit_200ms-1000streams_1b_1000x-shadow ....   Passed    7.95 sec
35/40 Test #173: tgen-size-1mbit_300ms-1000streams_1b_1000x-shadow .....   Passed    9.17 sec
36/40 Test #151: tgen-duration-100mbit_100ms-1000streams-shadow ........   Passed   71.37 sec
37/40 Test #140: tgen-duration-1gbit_10ms-1stream-shadow ...............   Passed  102.82 sec
38/40 Test #144: tgen-duration-1gbit_10ms-10streams-shadow .............   Passed  107.19 sec
39/40 Test #148: tgen-duration-1gbit_10ms-100streams-shadow ............   Passed  112.93 sec
40/40 Test #152: tgen-duration-1gbit_10ms-1000streams-shadow ...........   Passed  154.33 sec

100% tests passed, 0 tests failed out of 40

Label Time Summary:
shadow    = 770.24 sec*proc (40 tests)
tgen      = 770.24 sec*proc (40 tests)

v2.3.0 with --use-memory-manager=false:

2022-11-30 11:50:14,276 [INFO] calling 'ctest -j12 --timeout 20 -R tgen -C extra'
Test project /home/jnewsome/projects/shadow/dev/build
      Start 152: tgen-duration-1gbit_10ms-1000streams-shadow
      Start 148: tgen-duration-1gbit_10ms-100streams-shadow
      Start 144: tgen-duration-1gbit_10ms-10streams-shadow
      Start 140: tgen-duration-1gbit_10ms-1stream-shadow
      Start 151: tgen-duration-100mbit_100ms-1000streams-shadow
      Start 147: tgen-duration-100mbit_100ms-100streams-shadow
      Start 150: tgen-duration-10mbit_200ms-1000streams-shadow
      Start 143: tgen-duration-100mbit_100ms-10streams-shadow
      Start 153: tgen-size-1mbit_300ms-1stream_1b_1000x-shadow
      Start 154: tgen-size-10mbit_200ms-1stream_1b_1000x-shadow
      Start 165: tgen-size-1mbit_300ms-10streams_1mib_10x-shadow
      Start 161: tgen-size-1mbit_300ms-1stream_1mib_10x-shadow
 1/40 Test #161: tgen-size-1mbit_300ms-1stream_1mib_10x-shadow .........   Passed    9.31 sec
      Start 155: tgen-size-100mbit_100ms-1stream_1b_1000x-shadow
 2/40 Test #154: tgen-size-10mbit_200ms-1stream_1b_1000x-shadow ........   Passed    9.54 sec
      Start 173: tgen-size-1mbit_300ms-1000streams_1b_1000x-shadow
 3/40 Test #165: tgen-size-1mbit_300ms-10streams_1mib_10x-shadow .......   Passed    9.98 sec
      Start 156: tgen-size-1gbit_10ms-1stream_1b_1000x-shadow
 4/40 Test #153: tgen-size-1mbit_300ms-1stream_1b_1000x-shadow .........   Passed   11.70 sec
      Start 174: tgen-size-10mbit_200ms-1000streams_1b_1000x-shadow
 5/40 Test #143: tgen-duration-100mbit_100ms-10streams-shadow ..........   Passed   15.35 sec
      Start 166: tgen-size-10mbit_200ms-10streams_1mib_10x-shadow
 6/40 Test #156: tgen-size-1gbit_10ms-1stream_1b_1000x-shadow ..........   Passed    8.12 sec
      Start 162: tgen-size-10mbit_200ms-1stream_1mib_10x-shadow
 7/40 Test #155: tgen-size-100mbit_100ms-1stream_1b_1000x-shadow .......   Passed    8.88 sec
      Start 175: tgen-size-100mbit_100ms-1000streams_1b_1000x-shadow
 8/40 Test #150: tgen-duration-10mbit_200ms-1000streams-shadow .........   Passed   20.11 sec
      Start 176: tgen-size-1gbit_10ms-1000streams_1b_1000x-shadow
 9/40 Test #173: tgen-size-1mbit_300ms-1000streams_1b_1000x-shadow .....   Passed   11.21 sec
      Start 149: tgen-duration-1mbit_300ms-1000streams-shadow
10/40 Test #174: tgen-size-10mbit_200ms-1000streams_1b_1000x-shadow ....   Passed    9.48 sec
      Start 146: tgen-duration-10mbit_200ms-100streams-shadow
11/40 Test #166: tgen-size-10mbit_200ms-10streams_1mib_10x-shadow ......   Passed    7.48 sec
      Start 167: tgen-size-100mbit_100ms-10streams_1mib_10x-shadow
12/40 Test #147: tgen-duration-100mbit_100ms-100streams-shadow .........   Passed   24.90 sec
      Start 163: tgen-size-100mbit_100ms-1stream_1mib_10x-shadow
13/40 Test #162: tgen-size-10mbit_200ms-1stream_1mib_10x-shadow ........   Passed    7.29 sec
      Start 157: tgen-size-1mbit_300ms-1stream_1kib_100x-shadow
14/40 Test #175: tgen-size-100mbit_100ms-1000streams_1b_1000x-shadow ...   Passed    8.13 sec
      Start 169: tgen-size-1mbit_300ms-100streams_1kib_100x-shadow
15/40 Test #146: tgen-duration-10mbit_200ms-100streams-shadow ..........   Passed    5.33 sec
      Start 164: tgen-size-1gbit_10ms-1stream_1mib_10x-shadow
16/40 Test #167: tgen-size-100mbit_100ms-10streams_1mib_10x-shadow .....   Passed    4.02 sec
      Start 168: tgen-size-1gbit_10ms-10streams_1mib_10x-shadow
17/40 Test #149: tgen-duration-1mbit_300ms-1000streams-shadow ..........   Passed    6.46 sec
      Start 142: tgen-duration-10mbit_200ms-10streams-shadow
18/40 Test #176: tgen-size-1gbit_10ms-1000streams_1b_1000x-shadow ......   Passed    7.87 sec
      Start 139: tgen-duration-100mbit_100ms-1stream-shadow
19/40 Test #157: tgen-size-1mbit_300ms-1stream_1kib_100x-shadow ........   Passed    3.37 sec
      Start 158: tgen-size-10mbit_200ms-1stream_1kib_100x-shadow
20/40 Test #163: tgen-size-100mbit_100ms-1stream_1mib_10x-shadow .......   Passed    4.03 sec
      Start 170: tgen-size-10mbit_200ms-100streams_1kib_100x-shadow
21/40 Test #169: tgen-size-1mbit_300ms-100streams_1kib_100x-shadow .....   Passed    3.35 sec
      Start 138: tgen-duration-10mbit_200ms-1stream-shadow
22/40 Test #164: tgen-size-1gbit_10ms-1stream_1mib_10x-shadow ..........   Passed    3.39 sec
      Start 159: tgen-size-100mbit_100ms-1stream_1kib_100x-shadow
23/40 Test #168: tgen-size-1gbit_10ms-10streams_1mib_10x-shadow ........   Passed    3.34 sec
      Start 171: tgen-size-100mbit_100ms-100streams_1kib_100x-shadow
24/40 Test #142: tgen-duration-10mbit_200ms-10streams-shadow ...........   Passed    3.25 sec
      Start 145: tgen-duration-1mbit_300ms-100streams-shadow
25/40 Test #139: tgen-duration-100mbit_100ms-1stream-shadow ............   Passed    2.80 sec
      Start 160: tgen-size-1gbit_10ms-1stream_1kib_100x-shadow
26/40 Test #158: tgen-size-10mbit_200ms-1stream_1kib_100x-shadow .......   Passed    2.17 sec
      Start 172: tgen-size-1gbit_10ms-100streams_1kib_100x-shadow
27/40 Test #170: tgen-size-10mbit_200ms-100streams_1kib_100x-shadow ....   Passed    2.19 sec
      Start 141: tgen-duration-1mbit_300ms-10streams-shadow
28/40 Test #159: tgen-size-100mbit_100ms-1stream_1kib_100x-shadow ......   Passed    1.58 sec
      Start 137: tgen-duration-1mbit_300ms-1stream-shadow
29/40 Test #171: tgen-size-100mbit_100ms-100streams_1kib_100x-shadow ...   Passed    1.50 sec
30/40 Test #138: tgen-duration-10mbit_200ms-1stream-shadow .............   Passed    2.08 sec
31/40 Test #141: tgen-duration-1mbit_300ms-10streams-shadow ............   Passed    0.66 sec
32/40 Test #145: tgen-duration-1mbit_300ms-100streams-shadow ...........   Passed    1.37 sec
33/40 Test #137: tgen-duration-1mbit_300ms-1stream-shadow ..............   Passed    0.38 sec
34/40 Test #160: tgen-size-1gbit_10ms-1stream_1kib_100x-shadow .........   Passed    1.12 sec
35/40 Test #172: tgen-size-1gbit_10ms-100streams_1kib_100x-shadow ......   Passed    1.03 sec
36/40 Test #151: tgen-duration-100mbit_100ms-1000streams-shadow ........   Passed   72.96 sec
37/40 Test #140: tgen-duration-1gbit_10ms-1stream-shadow ...............   Passed  104.07 sec
38/40 Test #144: tgen-duration-1gbit_10ms-10streams-shadow .............   Passed  108.35 sec
39/40 Test #148: tgen-duration-1gbit_10ms-100streams-shadow ............   Passed  113.74 sec
40/40 Test #152: tgen-duration-1gbit_10ms-1000streams-shadow ...........   Passed  158.34 sec

100% tests passed, 0 tests failed out of 40

Label Time Summary:
shadow    = 780.24 sec*proc (40 tests)
tgen      = 780.24 sec*proc (40 tests)

Seeing some perf benefit from the memory manager here: 770s with it vs. 780s without, or about 1%.

@sporksmith (Contributor, Author)

Another example of a user needing to disable this feature: #2622

If we don't remove this feature, we should probably stabilize it, and:

  • Better document it, including when to enable/disable it, and add a section about /dev/shm to https://shadow.github.io/docs/guide/system_configuration.html
  • Give it a more descriptive name. It used to control whether the MemoryManager was used at all, but now it only enables the memory mapping part of it.
  • Maybe: see if we can do a better job of detecting low /dev/shm and dynamically falling back to copying when we can't or shouldn't allocate more.
  • Consider disabling it by default, and leaving it as an advanced feature for improving performance.

@sporksmith (Contributor, Author)

tgen benchmark: https://github.com/shadow/benchmark/actions/runs/3772103204

Results: https://github.com/shadow/benchmark-results/tree/master/tgen/2022-12-25-T05-07-08

The "real time" graph in https://github.com/shadow/benchmark-results/blob/master/tgen/2022-12-25-T05-07-08/plots/shadow.results.pdf shows a slight performance penalty with the MM disabled.

The RSS gets significantly smaller, since with the MM disabled the plugin memory isn't mapped into shadow's address space. This doesn't necessarily say anything about total system memory usage.

@robgjansen (Member)

Is it worth also running a manual Tor benchmark? Because of the better tooling, I think that'll give us the overall system memory usage too (i.e., because we're running `free` in a loop during the Tor experiment).

@sporksmith (Contributor, Author) commented Jan 23, 2023

> Is it worth also running a manual Tor benchmark? Because of the better tooling, I think that'll give us the overall system memory usage too (i.e., because we're running `free` in a loop during the Tor experiment).

Yeah, I'll kick one off. I think we've run into this total-memory-vs-RSS question before and found the MM not to consume more memory, but it's also worth checking the performance difference on our primary use case. Notably, a user in #2686 (comment) reported a large performance difference in their tor simulation, though there are some other variables in play there.

Running: https://github.com/shadow/benchmark/actions/runs/3988395349

@sporksmith (Contributor, Author)

Hmm, looks like GH still thinks we're over quota on storage, and discarded artifacts from this run :/ https://github.com/shadow/benchmark/actions/runs/3988395349/jobs/6839490367. I'm guessing there's some periodic job that re-evaluates storage vs. quota and hasn't run since we cleaned up storage earlier today.

Can get a rough idea from the logs at least. Without the MM:

Progress: 100% — simulated: 00:17:59.999/00:18:00, realtime: 07:48:00, processes failed: 0
...
done processing input: simulation ran for 7.799756910277777 hours and consumed 7.18 GiB of RAM

vs. the most recent nightly https://github.com/shadow/benchmark/actions/runs/3964662216/jobs/6793720522

Progress: 100% — simulated: 00:17:59.102/00:18:00, realtime: 07:35:00, processes failed: 0
...
done processing input: simulation ran for 7.593928287777778 hours and consumed 21.247 GiB of RAM

I think that RAM figure is RSS again, so we still don't have a total-system-memory comparison.

As expected, there's a slight performance penalty for disabling the MM: ~2.6% (7.80 vs. 7.59 hours of wall-clock time).

Probably enough benefit to justify keeping it around, but maybe not enough to keep it as the default.

@robgjansen (Member)

From the benchmarks:

[plots attached: run_time, ram_simtime]

sporksmith self-assigned this Jan 24, 2023
@sporksmith (Contributor, Author)

After discussion with @robgjansen, the tentative plan is:

  • Keep the memory manager (really MemoryMapper), since it does give a nontrivial performance gain
  • Make it a bit more robust, falling back to the slower path (MemoryCopier) on errors (see the sketch after this list)
  • Maybe add an option to never fall back to the slow path, in case a user wants to be sure they're getting the best performance.
  • Change the name of the CLI option --use-memory-manager to be more descriptive, document it better, and take it out of "experimental"
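
To make the fallback idea concrete, here is a rough sketch of the intended shape. The type names echo the MemoryMapper/MemoryCopier names used above, but all signatures, the placeholder bodies, and the require_mapper option are invented for illustration rather than taken from Shadow's code:

```rust
use std::io::Result;

/// Fast path: reads through the shared-memory mapping (placeholder body).
struct MemoryMapper;
/// Slow path: reads via process_vm_readv (placeholder body).
struct MemoryCopier;

impl MemoryMapper {
    fn read(&self, _addr: usize, _buf: &mut [u8]) -> Result<()> {
        unimplemented!("read directly from the mmap'd region")
    }
}

impl MemoryCopier {
    fn read(&self, _addr: usize, _buf: &mut [u8]) -> Result<()> {
        unimplemented!("process_vm_readv-based copy")
    }
}

struct MemoryManager {
    mapper: Option<MemoryMapper>,
    copier: MemoryCopier,
    /// Hypothetical "never fall back" option: surface mapper errors instead of hiding them.
    require_mapper: bool,
}

impl MemoryManager {
    fn read(&mut self, addr: usize, buf: &mut [u8]) -> Result<()> {
        if let Some(mapper) = self.mapper.take() {
            match mapper.read(addr, buf) {
                Ok(()) => {
                    self.mapper = Some(mapper);
                    return Ok(());
                }
                Err(e) if self.require_mapper => return Err(e),
                Err(e) => {
                    // e.g. an unmapped region or /dev/shm filling up: log it and keep
                    // going on the slower copier path for the rest of the run.
                    eprintln!("memory mapper failed ({e}); falling back to MemoryCopier");
                }
            }
        }
        self.copier.read(addr, buf)
    }
}
```

The key property is that a mapper failure degrades performance instead of killing the simulation, unless the user explicitly asked for the fast path only.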

@sporksmith (Contributor, Author) commented Sep 26, 2023

Revisiting this now that `fork` is implemented, since `fork` currently only works when the MemoryManager is disabled.

New tor benchmark, with 3 runs: https://github.com/shadow/benchmark/actions/runs/6316615683. New tgen benchmark: https://github.com/shadow/benchmark/actions/runs/6316645299. (Forgot to actually disable the MM in those runs.)

New runs:

tor: https://github.com/shadow/benchmark/actions/runs/6318052173
tgen: https://github.com/shadow/benchmark/actions/runs/6318037310

@sporksmith (Contributor, Author)

The effect on the tgen benchmark appears to be negligible: https://github.com/shadow/benchmark-results/tree/master/tgen/2023-09-26-T20-35-40

[tgen benchmark plot]

@sporksmith (Contributor, Author)

The Tor benchmark seems to be a bit slower, as before, though it has an overlapping confidence interval with the weekly. https://github.com/shadow/benchmark-results/tree/master/tor/2023-09-26-T20-37-14

[Tor benchmark plot]

sporksmith added a commit that referenced this issue Sep 27, 2023
Progress on #2581

I'm still on the fence about ripping it out entirely. I think we definitely
want to disable it by default so that `fork` works by default. I'd feel
better waiting a release or two to give ourselves a chance to change our
minds before removing it.