
sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}) #6583

Closed
tbg opened this issue May 9, 2016 · 50 comments · Fixed by #40206
Labels: A-kv-transactions (Relating to MVCC and the transactional model), A-sql-pgcompat (Semantic compatibility with PostgreSQL), A-sql-semantics, C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are an exception), X-anchored-telemetry (The issue number is anchored by telemetry references)

Comments

tbg (Member) commented May 9, 2016

Postgres (and MySQL as well) provide a FOR UPDATE clause which instructs a SELECT operation to acquire row-level locks for the data it is reading.

We currently don't support this syntax, and supporting it with the same semantics is more or less impossible anyway with our underlying concurrency model (we have no locks, and while we could put "intents" in place to simulate an intent to write, it wouldn't quite be the same).

We should discuss whether we want to at least accept the syntax so that frameworks which make use of it can proceed (even if they don't get the intended semantics, though it's not clear what the implications of that are).

Apparently the Quartz scheduler makes use of SELECT FOR UPDATE. It is in this context that this came up.

See also http://www.postgresql.org/docs/9.1/static/explicit-locking.html for the explicit locking modes exposed by Postgres.
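For reference, a minimal sketch of the clause under discussion, as it works in Postgres (table and column names here are hypothetical):

```sql
BEGIN;
-- Lock the matching row exclusively; concurrent FOR UPDATE / UPDATE / DELETE
-- on the same row block until this transaction commits or aborts.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```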

@karthikg001 commented

I think it is a better idea to only accept the syntax without supporting the functionality.

While this would need a lot of highlighting in the docs, most current RDBMSs (MSSQL excepted) support this explicit locking mechanism, so accepting the syntax would relieve a lot of the pain of modifying applications.

In an open-source library like Quartz, accepting the syntax without actually supporting it means possible errors at commit time (due to OCC) which the library doesn't handle.
But in a custom application that someone is looking to migrate to CockroachDB, handling errors on commit would/should already be handled at a higher (framework?) level, so it should be fine IMO.

@bdarnell (Contributor) commented

Note that FOR UPDATE is not standard SQL, so a portable application shouldn't rely on it without having an alternative, although it's reasonable to assume that FOR UPDATE exists in postgres mode so it would still be good to support it if we can.

We shouldn't support the syntax without providing the semantics that it implies. Fortunately, the locking modifiers are mostly semantically relevant at lower isolation levels; at SERIALIZABLE we'd be safe to accept the syntax as no-ops. I'd have to think more about how SNAPSHOT and FOR UPDATE would interact.

Even though it's not required for SERIALIZABLE semantics, we might want to write a dummy intent in SELECT FOR UPDATE. This would allow a transaction to assert its timestamp and priority during an initial read-only phase so it can abort other conflicting transactions. I'm not sure whether that's helpful, though, or if it is consistent with expectations from this command in other databases.

This looks like the code in the Quartz scheduler that uses FOR UPDATE: https://github.com/quartz-scheduler/quartz/blob/fb14850e1133dd3d884f2b99748b12286df8afc6/quartz-core/src/main/java/org/quartz/impl/jdbcjobstore/StdRowLockSemaphore.java

It's using a SELECT FOR UPDATE on a dummy row in the database to act as a lock. What is this lock protecting? Does it just cover state that is in the database and accessed as a part of the transaction (if so, why is it needed? Why isn't ordinary transaction isolation sufficient?), or is it being used as an advisory lock to protect state outside the database (e.g. to ensure that only one node starts a scheduled job)? The latter isn't something that we (or any OCC database) can ensure with SELECT FOR UPDATE. If you want to use the database as a lock for external resources you'll need to actually commit writes to the database and not just rely in write locks during an open transaction.
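The Quartz pattern described above amounts to something like the following sketch, with a hypothetical lock table (the real schema is in the linked source):

```sql
BEGIN;
-- Acquire the advisory "lock" by row-locking a dummy row; other nodes
-- running the same statement block here until this transaction ends.
SELECT * FROM qrtz_locks WHERE lock_name = 'TRIGGER_ACCESS' FOR UPDATE;
-- ... do scheduler work while "holding the lock" ...
COMMIT; -- the row lock, and with it the advisory lock, is released here
```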

tbg changed the title sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE}) → sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}) Jun 3, 2016
tbg (Member, Author) commented Jun 3, 2016

On a related note, I got a question about SELECT FOR UPDATE SKIP LOCKED in a recent tech talk. This is a bit of a shady command because it means that Postgres will simply skip over rows which are already locked by someone else, but is apparently useful in practice when avoiding conflicts is more important than getting all the rows that matched the predicate (i.e. when updating large data sets incrementally and it's fine to skip over a few).

https://blogs.oracle.com/rammenon/entry/select_for_update_nowait_skip has some nice diagrams.

There's also SELECT FOR UPDATE NOWAIT, which on Postgres means that instead of waiting for a lock, the command immediately comes back with an error. This could also be useful in Cockroach: when in an auto-retrying transaction, one could imagine wanting to run a command only when it doesn't run into a conflict, which in our SQL layer would translate to not auto-retrying statements which contain a NOWAIT.
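A sketch of the NOWAIT behavior on Postgres (hypothetical table):

```sql
BEGIN;
-- If another transaction already holds a conflicting lock on this row,
-- Postgres errors out immediately instead of blocking, with an error like:
--   ERROR: could not obtain lock on row in relation "jobs"
SELECT * FROM jobs WHERE id = 42 FOR UPDATE NOWAIT;
COMMIT;
```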

petermattis added this to the Later milestone Feb 22, 2017
@vladmihalcea commented

SELECT FOR UPDATE SKIP LOCKED is useful whenever you want to build a queue of tasks that multiple workers try to consume: once a worker has locked some rows, other concurrent workers will skip those and continue with the unprocessed ones. It's very useful in practice, which is why MySQL 8.0 is also adding support for it.

SELECT FOR UPDATE NOWAIT is also useful in latency-critical situations where you don't want to be blocked until the lock is released or the lock timeout period expires.
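The worker-queue pattern described above can be sketched like this (hypothetical schema):

```sql
BEGIN;
-- Claim up to 10 unprocessed tasks; rows already locked by other workers
-- are silently skipped rather than blocked on.
SELECT id, payload
FROM task_queue
WHERE processed = false
ORDER BY id
LIMIT 10
FOR UPDATE SKIP LOCKED;
-- ... process the claimed tasks, mark them processed, then:
COMMIT;
```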

@drnybble commented

Any ETA on supporting this SQL syntax? We have a project using openjpa and it generates this SQL using the postgresql driver.

@jordanlewis (Member) commented

TPCC also uses SELECT FOR UPDATE.

@petermattis (Collaborator) commented

@jordanlewis I'm guessing that is an optimization, not a requirement given our default serializable isolation.

@jordanlewis (Member) commented

You're right.

@cfsnyder commented

SELECT FOR UPDATE also appears to be used by Liquibase and consequently also prevents us from using Cockroach with Keycloak. Is there any ETA on supporting this?

I'd be willing to try to contribute if the consensus is that the syntax should be supported without the intended semantics, as was suggested in the issue description.

@bdarnell (Contributor) commented

I don't think we want to support FOR UPDATE as a no-op. For SNAPSHOT we need the real semantics (so SELECT FOR UPDATE should write an intent). We could technically make it a no-op in SERIALIZABLE mode but I think making it write an intent there too is useful to allow applications to control their lock ordering and avoid deadlocks.
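One way locking reads help even under SERIALIZABLE is deadlock avoidance: if every transaction acquires its row locks in the same order, two transactions touching the same pair of rows cannot deadlock. A sketch (hypothetical table):

```sql
BEGIN;
-- Lock both rows up front, in a deterministic (id) order, before writing.
SELECT * FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```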

@cfsnyder commented

That makes sense @bdarnell. Do you guys have plans to work on this anytime soon? I wouldn't mind trying to contribute, although I'd probably need a few pointers to get started because actually implementing the semantics sounds rather involved.

@bdarnell (Contributor) commented

We don't currently have any plans to work on this, although we should probably get it on the roadmap (I'm marking it as 1.3 for now because the 1.2 release cycle is already pretty full, although this would be a good candidate for 1.2 if we find some slack in the schedule. cc @nstewart )

Implementing this would touch a lot of different parts of the code. No part is individually too tricky, but there are a lot of them (git grep -i reversescan will give you an idea of the scope). The bulk of the changes would consist of implementing new ScanForUpdate and ReverseScanForUpdate calls in the KV API. These would work similarly to regular scans, but A) would be flagged as read/write commands instead of read-only and B) after performing the scan, they'd use MVCCPut to write back the values that were just read (that's not the most efficient way to do things, but I think it's the right way to start since it will have the right semantics without complicating the backwards-compatibility story). Then the SQL layer would use these instead of plain Scan/ReverseScan when FOR UPDATE has been requested.

@petermattis (Collaborator) commented

The bulk of the changes would consist of implementing new ScanForUpdate and ReverseScanForUpdate calls in the KV API.

Adding new API calls is a bit tedious. Could we add an update field to Scan and ReverseScan and change {Scan,ReverseScan}.flags() to indicate that these are write operations if that field is set? We have precedent for doing this with DeleteRangeRequest, though I recall there were arguments at the time over how that wasn't clean.

@bdarnell (Contributor) commented

I thought about that but this seems like too big of a distinction to make based on a flag in the request instead of a whole new command. Imagine debugging command-queue problems when a Scan could be either a read or a write. I think it's better to do it cleanly this time.

rytaft (Collaborator) commented Oct 19, 2017

PostgreSQL (https://www.postgresql.org/docs/current/static/sql-select.html) supports four different levels for the locking clause:

UPDATE
NO KEY UPDATE
SHARE
KEY SHARE

Do we want to support all four or only UPDATE and SHARE?

@bdarnell (Contributor) commented

FOR UPDATE is the only one of those that I've heard of, so I think we can probably ignore the others (FOR SHARE would just be a no-op for us, right?). I'm not sure whether FOR NO KEY UPDATE and FOR KEY SHARE are even implementable in our model.

rytaft (Collaborator) commented Oct 23, 2017

I agree that FOR NO KEY UPDATE and FOR KEY SHARE don't seem to be possible in the CockroachDB model, since we'd need to support locks at a sub-row granularity.

Should I at least support the syntax for FOR SHARE and make it a no-op? Or only FOR UPDATE? I guess the question is, in our model, if someone selects a set of rows with FOR SHARE (or just a regular SELECT as part of a transaction), and then someone else tries to select the same rows with FOR UPDATE, will the second transaction fail/block?

@bdarnell (Contributor) commented

At least for serializable transactions, FOR SHARE would be a no-op. For snapshot transactions, I'm not sure whether we need to do anything or not. It might be safer to leave FOR SHARE unimplemented until we find someone who wants to use it.

rytaft (Collaborator) commented Oct 23, 2017

Sounds good.

rytaft (Collaborator) commented Oct 24, 2017

In addition to the 4 different "locking strengths", Postgres also supports locking on specific tables as well as different options for waiting on locks. The full syntax is:

FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ]

where lock_strength is one of { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE }. This locking clause can also be repeated multiple times to use different options with different tables.

Do we want to support multiple tables with different options? From the above comments it sounds like we probably want to support NOWAIT and SKIP LOCKED, but I'm not sure about specific tables and/or different options for different tables. By default, if FOR UPDATE is used without specified tables, Postgres will lock all rows returned by the SELECT statement.
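For illustration, a repeated per-table locking clause in Postgres looks like this (hypothetical tables):

```sql
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
FOR UPDATE OF o   -- exclusive locks on the matched orders rows
FOR SHARE OF c;   -- shared locks on the matched customers rows
```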

Another question is how precisely the locked rows should match the query results. Postgres generally locks only the rows returned by the query, but there are a few edge cases where extra rows may be locked (e.g., with LIMIT ... OFFSET). In CockroachDB, it seems like it may be difficult to guarantee that only rows matching the predicate will be locked.

@bdarnell (Contributor) commented

Do we want to support multiple tables with different options?

I would start with the basic all-or-nothing version. We may want to do the per-table version eventually (if there is demand), but my guess is that the feature is rarely used enough that it's not worth the complexity at this time.

NOWAIT and SKIP LOCKED do seem useful but I wouldn't try to do them at the same time as FOR UPDATE; they seem like separate changes.

It's fine with me if the exact set of rows locked differs from postgres in some edge cases. This feature is less crucial for us because we have stronger transaction isolation anyway; I think the only times it will really matter will be the fairly simple cases.

rytaft (Collaborator) commented Oct 26, 2017

All sounds good to me. But per Spencer's request, I've also created an RFC to summarize the discussions in this issue and discuss an alternative approach involving locking of ranges: PR #19577.

@rbygrave commented

you're talking strictly about transactions that use weaker isolation levels than SERIALIZABLE

Ah, well, not strictly, no: OCC via a version column still applies and works at SERIALIZABLE in the low-contention cases. That is, applications still need to detect that the thing/row being changed hasn't been changed by some other user/system in the meantime during "user think time".

It is just that in the high-contention case, OCC via a version column at SERIALIZABLE doesn't strictly detect the concurrency conflict; instead we get the different "failed to serialize transaction" exception.

It is likely that this only matters if the application has conditional logic along the lines of catching OptimisticLockException and continue processing maybe skipping some work it would otherwise do if OptimisticLockException wasn't thrown.

The conditional logic based on catching OptimisticLockException that used to work all the time at READ_COMMITTED will now occasionally fail at SERIALIZABLE when under high contention.

builds in protection against them, FOR UPDATE can be a real boon.

Well maybe I am misunderstanding your comment there but at READ_COMMITTED I'd say we need and use FOR UPDATE (pessimistic locking) for other reasons.

For example, blocking exclusive ownership when processing database migrations, where one instance obtains the lock (and processes the migrations) and the other instances must wait/block until the migrations are complete.

For transactions that have conditional logic at SERIALIZABLE we could change to use FOR UPDATE (catch and handle the failure to obtain the lock and continue processing).

bdarnell (Contributor) commented Aug 28, 2019

The conditional logic based on catching OptimisticLockException that used to work all the time at READ_COMMITTED will now occasionally fail at SERIALIZABLE when under high contention.

Ah, I think this is what we've been missing. You're talking about a sequence like this, right?

begin;
select version, otherfield from tbl where id=?;
update tbl set version=version+1, otherfield=? where id=? and version=?;
-- If rowCount == 0, goto select
commit;

So you're retrying within a single transaction and relying on the fact that reads are not repeatable. This will only work in READ COMMITTED (and isn't even guaranteed to work there - the standard allows unrepeatable reads but doesn't require them) and will fail in either REPEATABLE READ or SERIALIZABLE.

Instead of just looping back to the select statement when you get rowCount==0, you need to roll back the transaction and retry everything in a new transaction. Or you can avoid transactions completely, and do the select and the update completely independently of each other (the version check makes this safe without any transactions). Either of these approaches will work regardless of the isolation level.
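The second alternative can be sketched as follows: because the version predicate makes the update self-checking, the read and the write need not share a transaction at all (values here are hypothetical):

```sql
-- Read, outside of any long-lived transaction:
SELECT version, otherfield FROM tbl WHERE id = 1;  -- say this returns version 5
-- Later, attempt the write; an affected-row count of 0 means a concurrent
-- writer got there first, and the application should re-read and retry:
UPDATE tbl SET version = version + 1, otherfield = 'new'
WHERE id = 1 AND version = 5;
```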

Well maybe I am misunderstanding your comment there but at READ_COMMITTED I'd say we need and use FOR UPDATE (pessimistic locking) for other reasons.

Yes, at READ COMMITTED, FOR UPDATE is required. But CockroachDB does not support READ COMMITTED and has no plans to.

For example, blocking exclusive ownership when processing database migrations where 1 instance obtains the lock (and processes the migrations) other instances must WAIT/Block (until the migrations are complete).

This is the most common use case for SELECT FOR UPDATE that we see, and it doesn't work in CockroachDB. These migration tools will have to be modified to do locking in a different way (such as by actually writing to the database instead of just holding a lock in a transient transaction).
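A sketch of doing migration locking with committed writes instead of a held row lock (hypothetical schema; not a drop-in replacement for any particular tool):

```sql
-- Try to claim ownership; this commits immediately, so the claim is
-- durably visible to other nodes rather than held open in a transaction.
BEGIN;
UPDATE migration_lock
SET owner = 'node-1', acquired_at = now()
WHERE name = 'schema_migrations' AND owner IS NULL;
COMMIT;
-- An affected-row count of 1 means this node owns the lock; when done,
-- release it with another committed UPDATE setting owner back to NULL.
```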

For transactions that have conditional logic at SERIALIZABLE we could change to use FOR UPDATE (catch and handle the failure to obtain the lock and continue processing).

Adding FOR UPDATE to the select in this version column pattern would avoid the need to restart the transaction, but it also means it's not optimistic any more - you'd be holding a write lock across the whole operation.

@rbygrave commented

sequence like this, right?

Not really, ish. The select of the data being updated can happen any time before, in a prior transaction (and normally would), and this isn't about retry per se. It is more like:

begin;
-- do work A
update tbl set version=version+1, otherfield=? where id=? and version=?;
-- do work B if update succeeded (no OptimisticLockException was thrown and caught) 
commit;

Where work A and work B are other updates, deletes etc. I should just put up the test in a git repo - I'll look to do that at some point but want to sort out some docker stuff first.

Note that if there is no "do work A" then we don't really care if the transaction fails (we don't care if the exception thrown is recoverable or not recoverable). It is only when we have other updates/deletes in the transaction that we care that the exception is recoverable (transaction can continue).

It maybe helps if I say that if instead we have a single update like:

begin;
update tbl set version=version+1, otherfield=? where id=? and version=?;
commit;

Then for me it is a matter of treating OptimisticLockException (thrown when some other transaction updates the same row and changes the version value) and TransactionRetryWithProtoRefreshError: .. (thrown in the high-contention case) as the same failure. Both fail the transaction; both require the user/app to refresh their data, reapply the changes, and resubmit the update (if appropriate).

With Postgres or Oracle at READ_COMMITTED we always get the OptimisticLockException. With CockroachDB we will sometimes now get the TransactionRetryWithProtoRefreshError instead, but this doesn't matter if the application treats both exceptions in the same way: "Sorry, concurrent update, please refresh your data and reapply your changes".

So for myself, I'm looking to map TransactionRetryWithProtoRefreshError to be a special type of OptimisticLockException [maybe UnrecoverableOptimisticLockException].

you need to roll back the transaction and retry everything in a new transaction

Yes for the "loop and retry" scenario.

This is the most common use case for SELECT FOR UPDATE that we see

Yes. It is the "Other clients need to wait (are blocking)" part that is going to be the not so pretty aspect to deal with here. I should be doing this shortly so I'll know for sure soon enough.

Adding FOR UPDATE to the select

Yes agreed.

Apologies for consuming a lot of your time. This has been good for me, I'm getting pretty comfortable with some of the things I need to do.

As a more random thought. Instead of SELECT FOR UPDATE ... Cockroach could provide a completely different command like: LOCK ON [lockName] [NOWAIT] [TIMEOUT duration] ... for the cases people want blocking (where lockName is an arbitrary string).

Cheers, Rob.

donbowman added a commit to donbowman/zammad that referenced this issue Sep 1, 2019
In Cockroachdb there is no select for update
(see cockroachdb/cockroach#6583)
But it is not needed, make it a no-op.
rafiss added a commit to rafiss/cockroach that referenced this issue Sep 4, 2019
The blacklist was missing some expected failures and also incorrectly
marked some tests as expected failure. Some of them are a result of
Hibernate adding new tests to their repository.

Closing cockroachdb#6583 revealed a few new compatibility issues, so many of the
failing tests now link to a new issue.

Release note: None
craig bot pushed a commit that referenced this issue Sep 5, 2019
40479: roachtest: update hibernate blacklist for 19.1 and 19.2 r=rafiss a=rafiss


closes #38288 

Release note: None

Co-authored-by: Rafi Shamim <[email protected]>
jordanlewis added a commit to jordanlewis/cockroach that referenced this issue Oct 2, 2019
The spreadsheet we discussed is unwieldy - hard to edit and impossible to keep
up to date. If we write down blacklists in code, then we can use an approach
like this to always have an up to date aggregation.

So far it seems like there's just a lot of unknowns to categorize still.

The output today:

```
=== RUN   TestBlacklists
 648: unknown                                                (unknown)
 493: cockroachdb#5807   (sql: Add support for TEMP tables)
 151: cockroachdb#17511  (sql: support stored procedures)
  86: cockroachdb#26097  (sql: make TIMETZ more pg-compatible)
  56: cockroachdb#10735  (sql: support SQL savepoints)
  55: cockroachdb#32552  (multi-dim arrays)
  55: cockroachdb#26508  (sql: restricted DDL / DML inside transactions)
  52: cockroachdb#32565  (sql: support optional TIME precision)
  39: cockroachdb#243    (roadmap: Blob storage)
  33: cockroachdb#26725  (sql: support postgres' API to handle blob storage (incl lo_creat, lo_from_bytea))
  31: cockroachdb#27793  (sql: support custom/user-defined base scalar (primitive) types)
  24: cockroachdb#12123  (sql: Can't drop and replace a table within a transaction)
  24: cockroachdb#26443  (sql: support user-defined schemas between database and table)
  20: cockroachdb#21286  (sql: Add support for geometric types)
  18: cockroachdb#6583   (sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}))
  17: cockroachdb#22329  (Support XA distributed transactions in CockroachDB)
  16: cockroachdb#24062  (sql: 32 bit SERIAL type)
  16: cockroachdb#30352  (roadmap:when CockroachDB  will support cursor?)
  12: cockroachdb#27791  (sql: support RANGE types)
   8: cockroachdb#40195  (pgwire: multiple active result sets (portals) not supported)
   8: cockroachdb#6130   (sql: add support for key watches with notifications of changes)
   5: Expected Failure                                       (unknown)
   5: cockroachdb#23468  (sql: support sql arrays of JSONB)
   5: cockroachdb#40854  (sql: set application_name from connection string)
   4: cockroachdb#35879  (sql: `default_transaction_read_only` should also accept 'on' and 'off')
   4: cockroachdb#32610  (sql: can't insert self reference)
   4: cockroachdb#40205  (sql: add non-trivial implementations of FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE, FOR NO KEY SHARE)
   4: cockroachdb#35897  (sql: unknown function: pg_terminate_backend())
   4: cockroachdb#4035   (sql/pgwire: missing support for row count limits in pgwire)
   3: cockroachdb#27796  (sql: support user-defined DOMAIN types)
   3: cockroachdb#3781   (sql: Add Data Type Formatting Functions)
   3: cockroachdb#40476  (sql: support `FOR {UPDATE,SHARE} {SKIP LOCKED,NOWAIT}`)
   3: cockroachdb#35882  (sql: support other character sets)
   2: cockroachdb#10028  (sql: Support view queries with star expansions)
   2: cockroachdb#35807  (sql: INTERVAL output doesn't match PG)
   2: cockroachdb#35902  (sql: large object support)
   2: cockroachdb#40474  (sql: support `SELECT ... FOR UPDATE OF` syntax)
   1: cockroachdb#18846  (sql: Support CIDR column type)
   1: cockroachdb#9682   (sql: implement computed indexes)
   1: cockroachdb#31632  (sql: FK options (deferrable, etc))
   1: cockroachdb#24897  (sql: CREATE OR REPLACE VIEW)
   1: pass?                                                  (unknown)
   1: cockroachdb#36215  (sql: enable setting standard_conforming_strings to off)
   1: cockroachdb#32562  (sql: support SET LOCAL and txn-scoped session variable changes)
   1: cockroachdb#36116  (sql: psychopg: investigate how `'infinity'::timestamp` is presented)
   1: cockroachdb#26732  (sql: support the binary operator: <int> / <float>)
   1: cockroachdb#23299  (sql: support coercing string literals to arrays)
   1: cockroachdb#36115  (sql: psychopg: investigate if datetimetz is being returned instead of datetime)
   1: cockroachdb#26925  (sql: make the CockroachDB integer types more compatible with postgres)
   1: cockroachdb#21085  (sql: WITH RECURSIVE (recursive common table expressions))
   1: cockroachdb#36179  (sql: implicity convert date to timestamp)
   1: cockroachdb#36118  (sql: Cannot parse '24:00' as type time)
   1: cockroachdb#31708  (sql: support current_time)
```

Release justification: non-production change
Release note: None
jordanlewis added a commit to jordanlewis/cockroach that referenced this issue Oct 24, 2019
craig bot pushed a commit that referenced this issue Nov 7, 2019
41252: roachtest: add test that aggregates orm blacklist failures r=jordanlewis a=jordanlewis


Co-authored-by: Jordan Lewis <[email protected]>
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Jan 7, 2020
…totype SFU

Informs cockroachdb#41720.

This commit creates a new concurrency package that provides a concurrency
manager structure that encapsulates the details of concurrency control and
contention handling for serializable key-value transactions. Any reader of this
commit should start at `concurrency_control.go` and move out from there.

The new package has a few primary objectives:
1. centralize the handling of request synchronization and transaction contention
   handling in a single location, allowing for the topic to be documented and
   understood in isolation.
2. rework contention handling to react to intent state transitions directly. This
   simplifies the transaction queueing story, reduces the frequency of transaction
   push RPCs, and allows waiters to proceed after intent resolution as soon as possible.
3. create a framework that naturally permits "update" locking, which is required for
   kv-level SELECT FOR UPDATE support (cockroachdb#6583).
4. provide stronger guarantees around fairness when transactions conflict, in order
   to reduce tail latencies under contended scenarios.
5. create a structure that can extend to address the long-term goals of a fully
   centralized lock-table laid out in cockroachdb#41720.

WARNING: this is still a WIP. Notably, the lockTableImpl is mocked out with a
working but incomplete implementation. See cockroachdb#43740 for a more complete strawman.

Release note: None
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Jan 28, 2020
…totype SFU

nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Feb 6, 2020
Informs cockroachdb#41720.

The commit creates a new concurrency package that provides a concurrency manager
structure that encapsulates the details of concurrency control and contention
handling for serializable key-value transactions. Interested readers should
start at `concurrency_control.go` and move out from there.

The new package has a few primary objectives:

1. centralize the handling of request synchronization and transaction contention
handling in a single location, allowing for the topic to be documented,
understood, and tested in isolation.

2. rework contention handling to react to intent state transitions directly.
This simplifies the transaction queueing story, reduces the frequency of
transaction push RPCs, and allows waiters to proceed immediately after intent
resolution.

3. create a framework that naturally permits "update" locking, which is required
for kv-level SELECT FOR UPDATE support (cockroachdb#6583).

4. provide stronger guarantees around fairness when transactions conflict, to
reduce tail latencies under contended scenarios.

5. create a structure that can extend to address the long-term goals of a fully
centralized lock-table laid out in cockroachdb#41720.

This commit pulls over a lot of already reviewed code from cockroachdb#43775. The big
differences are that it updates the lockTable interface to match the one
introduced in cockroachdb#43740 and it addresses the remaining TODOs to document the rest
of the concurrency control mechanisms in CockroachDB. At this point, a reader
of this file should get a good idea of how KV transactions interact in CRDB...
now we just need to make the system actually work this way.
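Objective 3 above is about "update" locking. As an illustration only (this is a toy sketch of the classic shared/update/exclusive compatibility rules, not CockroachDB's actual lockTable, and the names `Mode`, `COMPATIBLE`, and `ToyLockTable` are invented for this example), the idea can be shown in a few lines of Python:

```python
from enum import Enum

class Mode(Enum):
    SHARED = 1     # SELECT ... FOR SHARE
    UPDATE = 2     # SELECT ... FOR UPDATE: read now, intend to write later
    EXCLUSIVE = 3  # an actual write

# Classic compatibility matrix (held mode, requested mode) -> grantable.
# An UPDATE lock coexists with existing SHARED holders but blocks other
# UPDATE/EXCLUSIVE requests, which removes the shared->exclusive upgrade
# deadlock that two plain FOR SHARE readers could otherwise run into.
COMPATIBLE = {
    (Mode.SHARED, Mode.SHARED): True,
    (Mode.SHARED, Mode.UPDATE): True,
    (Mode.SHARED, Mode.EXCLUSIVE): False,
    (Mode.UPDATE, Mode.SHARED): True,
    (Mode.UPDATE, Mode.UPDATE): False,
    (Mode.UPDATE, Mode.EXCLUSIVE): False,
    (Mode.EXCLUSIVE, Mode.SHARED): False,
    (Mode.EXCLUSIVE, Mode.UPDATE): False,
    (Mode.EXCLUSIVE, Mode.EXCLUSIVE): False,
}

class ToyLockTable:
    """Per-key lock table: grants a request only if it is compatible with
    every lock currently held on that key by a different transaction."""

    def __init__(self):
        self.locks = {}  # key -> list of (txn_id, Mode)

    def try_acquire(self, txn_id, key, mode):
        for holder, held in self.locks.get(key, []):
            if holder != txn_id and not COMPATIBLE[(held, mode)]:
                return False  # a real system would queue here (or error under NOWAIT)
        self.locks.setdefault(key, []).append((txn_id, mode))
        return True

    def release_all(self, txn_id):
        for key in list(self.locks):
            self.locks[key] = [(t, m) for t, m in self.locks[key] if t != txn_id]
```

Under these rules, two concurrent `SELECT ... FOR UPDATE` on the same row conflict immediately, while plain readers are unaffected until the eventual write.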
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Feb 7, 2020
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Feb 8, 2020
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Feb 10, 2020
craig bot pushed a commit that referenced this issue Feb 10, 2020
44787: storage/concurrency: define concurrency control interfaces r=nvanbenschoten a=nvanbenschoten

Informs #41720.

The commit creates a new concurrency package that provides a concurrency manager structure that encapsulates the details of concurrency control and contention handling for serializable key-value transactions. Interested readers should start at `concurrency_control.go` and move out from there.

The new package has a few primary objectives:

1. centralize the handling of request synchronization and transaction contention handling in a single location, allowing for the topic to be documented, understood, and tested in isolation.
2. rework contention handling to react to intent state transitions directly. This simplifies the transaction queueing story, reduces the frequency of transaction push RPCs, and allows waiters to proceed immediately after intent resolution.
3. create a framework that naturally permits "update" locking, which is required for kv-level SELECT FOR UPDATE support (#6583).
4. provide stronger guarantees around fairness when transactions conflict, to reduce tail latencies under contended scenarios.
5. create a structure that can extend to address the long-term goals of a fully centralized lock-table laid out in #41720.

This commit pulls over a lot of already reviewed code from #43775. The big differences are that it updates the lockTable interface to match the one introduced in #43740 and it addresses the remaining TODOs to document the rest of the concurrency control mechanisms in CockroachDB. At this point, a reader of this file should get a good idea of how KV transactions interact in CRDB... now we just need to make the system actually work this way.

Co-authored-by: Nathan VanBenschoten <[email protected]>
nvanbenschoten added a commit to nvanbenschoten/cockroach that referenced this issue Feb 14, 2020
…totype SFU
