
Merge development #18

Merged
merged 35 commits into master from merge-development on Jul 26, 2017

Conversation

domodwyer

Changes:

Verified in staging by @weiishann.

BenLubar and others added 30 commits July 5, 2016 20:13
When walking graphs (hasPreReq), we can actually spend a lot of time
doing the conversion from a 'hex+nonce' token string back to a binary
ObjectId. Cache them in the flusher.
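A minimal sketch of that caching, assuming the usual '<hex>_<nonce>' token layout; the flusher field, helper name, and globalsign import path below are illustrative rather than the branch's actual code:

```go
package txnsketch

import (
	"strings"

	"github.com/globalsign/mgo/bson"
)

// flusher is a stand-in for mgo/txn's internal flusher; only the cache
// field matters for this sketch.
type flusher struct {
	tokenIds map[string]bson.ObjectId // token -> parsed transaction id
}

// tokenToId converts a "hex+nonce" token back to the transaction's ObjectId,
// memoising the result so repeated graph walks don't re-parse the same hex.
func (f *flusher) tokenToId(token string) bson.ObjectId {
	if id, ok := f.tokenIds[token]; ok {
		return id
	}
	hex := token
	if i := strings.Index(token, "_"); i >= 0 {
		hex = token[:i] // drop the "_nonce" suffix, keep the 24-char hex id
	}
	id := bson.ObjectIdHex(hex) // panics on malformed input, like the real parser
	if f.tokenIds == nil {
		f.tokenIds = make(map[string]bson.ObjectId)
	}
	f.tokenIds[token] = id
	return id
}
```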
During 'recurse', loading all of the transactions to be done one-by-one is actually rather expensive. Instead we can load them ahead of time, and even allow the database to load them in whatever order is optimal for the db.
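Roughly what a single-query preload looks like with mgo; the transaction struct, collection handle, and function name below are stand-ins, not the branch's real internals:

```go
package txnsketch

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// transaction is a stand-in for mgo/txn's internal transaction document;
// only the id matters for this sketch.
type transaction struct {
	Id bson.ObjectId `bson:"_id"`
}

// preload fetches a whole batch of transactions with one $in query, letting
// the server return them in whatever order it finds cheapest, instead of
// issuing a Find-by-id round trip per transaction.
func preload(txns *mgo.Collection, ids []bson.ObjectId) (map[bson.ObjectId]*transaction, error) {
	found := make(map[bson.ObjectId]*transaction, len(ids))
	iter := txns.Find(bson.M{"_id": bson.M{"$in": ids}}).Iter()
	var t transaction
	for iter.Next(&t) {
		cp := t
		found[cp.Id] = &cp
	}
	return found, iter.Close()
}
```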
When dealing with some forms of 'setup', the existing preload loads too
much data and causes a different O(N^2) behavior. So instead, we cap the
number of transactions we will preload, which gives an upper bound on
how much we'll over-load.
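Continuing the sketch above, capping simply means slicing the id list before the batched Find; the limit name and value here are illustrative, not the branch's actual tuning:

```go
// preloadLimit caps how many transactions a single preload pass will pull in,
// bounding how much we can over-load for pathological 'setup' shapes.
const preloadLimit = 1000

func capPreload(ids []bson.ObjectId) []bson.ObjectId {
	if len(ids) > preloadLimit {
		return ids[:preloadLimit] // the rest are loaded on later passes
	}
	return ids
}
```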
… jameinel-txn-id-caching

# Conflicts:
#	txn/sim_test.go
#	txn/tarjan_test.go
technically speaking, 2.5.5 for the Hint feature
Travis is failing for most (all?) PRs even when the exit code is 0. This only happens for the two older mongo versions.
When we have broken transaction data in the database (such as from mongo getting OOM-killed), it can cause a cascade failure, where that document ends up getting too many transactions queued up against it.

This can also happen if you have nothing but assert-only transactions against a single document.

If we have lots of transactions, it becomes harder and harder to add new entries, and clearing out a large queue is O(N^2), which makes capping it worthwhile. (An unbounded queue also makes the document grow until it hits max-doc-size.)

The upper bound is still quite large, so it should not be triggered if everything is operating normally.
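A hedged sketch of the guard this implies: before queueing another token against a document, bail out once its txn-queue is already past a deliberately generous bound (the error and function names are illustrative):

```go
package txnsketch

import "errors"

// errTxnQueueTooLong is returned instead of letting a damaged document's
// txn-queue grow without bound.
var errTxnQueueTooLong = errors.New("txn-queue for document is too long")

// checkQueueLimit refuses to enqueue another token once the queue is past the
// cap, so damaged documents fail fast rather than degrading into O(N^2)
// cleanup and max-doc-size growth; healthy workloads never hit the limit.
func checkQueueLimit(queue []string, maxQueueLength int) error {
	if len(queue) >= maxQueueLength {
		return errTxnQueueTooLong
	}
	return nil
}
```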
The time zone in the time format used for JSON (un)marshaling is wrong: all dates used to be parsed as UTC. See the numeric time zone offsets in https://golang.org/pkg/time/#pkg-constants
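An illustrative Go snippet (not mgo's actual bson/json.go) showing why the numeric-offset layouts matter: a layout ending in Z07:00 round-trips both UTC and offset timestamps instead of forcing everything to UTC:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// "Z07:00" in a layout prints "Z" for UTC and a numeric offset such as
	// "+08:00" otherwise (see https://golang.org/pkg/time/#pkg-constants).
	const layout = "2006-01-02T15:04:05.999Z07:00"

	loc := time.FixedZone("SGT", 8*60*60)
	t := time.Date(2017, 7, 26, 11, 51, 0, 0, loc)

	s := t.Format(layout)
	fmt.Println(s) // 2017-07-26T11:51:00+08:00

	back, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	// The +08:00 offset survives the round trip instead of being coerced to UTC.
	fmt.Println(back.Equal(t))
}
```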
Still defaults to 1000 without any other configuration, but allows callers to make the limit stricter or more lenient.
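A usage sketch of the new knob; RunnerOptions, DefaultRunnerOptions, and MaxTxnQueueLength follow this development branch, but treat the exact names and the globalsign import path as assumptions if you are on another fork:

```go
package txnsketch

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/txn"
)

// newRunner builds a txn.Runner with a stricter queue cap than the default
// of 1000; raise the value instead to be more lenient.
func newRunner(db *mgo.Database) *txn.Runner {
	runner := txn.NewRunner(db.C("txns"))

	opts := txn.DefaultRunnerOptions()
	opts.MaxTxnQueueLength = 500
	runner.SetOptions(opts)
	return runner
}
```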
…-timezone

# Conflicts:
#	bson/json.go
#	bson/json_test.go
…ar-cursor-timeouts

# Conflicts:
#	session.go
#	session_test.go
domodwyer and others added 5 commits July 5, 2017 14:31
…e-hint

Support index hints & deadlines for Count
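A hedged example of what this enables, assuming mgo's existing Query.Hint and Query.SetMaxTime are now honoured by Count (MongoDB 2.6+, or 2.5.5+ for hint, per the commits above); the collection and field names are made up:

```go
package countsketch

import (
	"time"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// countActive counts documents using the index on "status" and a two-second
// server-side deadline; before this change, Count() silently dropped both
// the hint and MaxTimeMS.
func countActive(users *mgo.Collection) (int, error) {
	return users.Find(bson.M{"status": "active"}).
		Hint("status").
		SetMaxTime(2 * time.Second).
		Count()
}
```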
* development:
  Credit @fmpwizard in the README.
  Add link to improvement by @jameinel
  Credit @BenLubar in README.
  Credit @reenjii in the README.
  Add Runner.SetOptions to control maximum queue length.
  fix json time zone
  Set an upper limit of how large we will let txn-queues grow.
  See if cleaning up mongo instances fixes the build
  Both features only work starting on 2.6
  Added Hint and MaxTimeMS support to Count()
  fix running test on mongo 3.2
  Revert "try to reuse the info.Queue conversion has a negative performance effect"
  try to reuse the info.Queue conversion has a negative performance effect
  Batch the preload into chunks.
  Include preloading of a bunch of transactions.
  Cache conversion from token to TXN ObjectId.
  Add the test cases that show O(N^2) performance
  run 'go fmt' using go 1.8
  add test case for no-timeout cursors
  Fix SetCursorTimeout. See https://jira.mongodb.org/browse/SERVER-24899

# Conflicts:
#	README.md
@domodwyer domodwyer merged commit 00b0569 into master Jul 26, 2017
@domodwyer domodwyer deleted the merge-development branch July 26, 2017 11:51
libi pushed a commit to libi/mgo that referenced this pull request Dec 1, 2022