kernel API for upgrading vats #1848

Closed
warner opened this issue Oct 7, 2020 · 27 comments

@warner
Member

warner commented Oct 7, 2020

What is the Problem Being Solved?

#1691 describes a way to upgrade a vat by replacing its code and replaying its transcript, such that the new code behaves exactly like the old code up until some future cutover event.

If/when we need to use such a thing, we'll need a way to authorize its use. We want an ocap-appropriate mechanism to change the behavior of existing objects, so that clients of those objects can safely+correctly rely upon that behavior. In particular we want the clients of a contract, who have examined the code of that contract and are relying upon it acting in a certain way, to have a clear model they can follow. By enabling upgrade at all, we change the social contract from "this object will always behave according to code X" to "this object will behave according to code X as amended by upgrade decisions", along with some specific rules about how such upgrade decisions can be exercised.

The ultimate lever to change the behavior of the system is ownership over the underlying platform. In a consensus machine (on-chain), that is expressed by a suitable majority of the validator power deciding to run alternate code. We're looking for a smaller lever that can be expressed in more ocap-ish terms. I'm thinking of a mechanism that allows one vat at a time to be modified, rather than being able to make arbitrary changes to any part of the system.

Description of the Design

Here's a vague sketch; I haven't thought through the details at all.

What if the creator of the vat received, in addition to the vat's root object and the authority to terminate the vat from the outside, an extra upgrader object? This would accept an upgrade(newCodeBundle, args) message.

When used, this asks the kernel to build a new vatManager around newCodeBundle and replay the entire transcript of the old vat. If this replay fails to match every syscall, upgrade() rejects and nothing else happens. If replay succeeds, the new bundle is then issued a special "you have just been upgraded" message (maybe named cutover()), which includes the args that were given to upgrade(). This cutover message gets one crank to finish, after which the old vat is retired and the new vat takes its place.

The cutover message should maybe be sent to a special object, rather than the root object. One option is for newCodeBundle to export both buildRootObject() and buildCutoverObject(), and the cutover message is sent to the latter. The goals here are to 1: make it clear to a reader when precisely the behavior is allowed to change, and 2: improve the readability of the bundle by separating the "steady state" behavior from the pieces needed specifically for the upgrade.
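To make the separation concrete, here is a minimal sketch of what a newCodeBundle exporting both entry points might look like. The names buildRootObject and buildCutoverObject come from the paragraph above, but the cutover() signature and the state-handoff details are my assumptions, not a settled SwingSet API:

```javascript
// Hypothetical sketch: a bundle that separates steady-state behavior
// (buildRootObject) from upgrade-time behavior (buildCutoverObject).

function buildRootObject() {
  let counter = 0; // stand-in for the vat's steady-state behavior
  return {
    increment() {
      counter += 1;
      return counter;
    },
    setCounter(value) {
      counter = value;
    },
  };
}

function buildCutoverObject(root) {
  return {
    // The kernel would deliver cutover() exactly once, after a successful
    // replay, carrying the args given to upgrade(newCodeBundle, args).
    cutover(args) {
      if (args && typeof args.initialCount === 'number') {
        root.setCounter(args.initialCount);
      }
      return 'cutover-complete';
    },
  };
}
```

A reader auditing the bundle can then check the cutover path in isolation from the ordinary behavior.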

The cutover method might need to invoke some new syscalls to move objects to different vats, or change their IDs (to hierarchical ones) to move their data into secondary storage.

Another tool that might help make the new bundle easier to read could be to represent all upgrades as an initial "gather data" phase, followed by a "reload data" phase, followed by a regular "run" phase. The gather-data phase would schematize the state of the vat (which might be spread across WeakMaps or whatever) into some flat copyable capdata plus a table of exported object references. The reload-data phase would get those two tables, but otherwise ignores the old code entirely. It reconstructs new objects to implement a suitable new state, and uses some special syscalls to transfer the identity of the old objects to the new ones. Then it switches to the "run" phase which has no remaining trace of the upgrade code.
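The three-phase shape described above can be sketched as follows. Everything here is illustrative (the function names, the schema, and the elided identity transfer are assumptions); the point is only that the reload-data phase depends on the gathered tables alone, never on the old code:

```javascript
// Phase 1 (runs against the old state): schematize live state into flat
// copyable capdata plus a table of exported object references.
function gatherData(oldState) {
  return {
    capdata: { balances: [...oldState.balances.entries()] },
    exportTable: { mint: oldState.mintObject },
  };
}

// Phase 2 (new code): ignore the old code entirely; rebuild objects from
// the two tables. Identity transfer via become() is elided here.
function reloadData({ capdata, exportTable }) {
  const balances = new Map(capdata.balances);
  return makeRunPhase(balances, exportTable.mint);
}

// Phase 3: steady-state behavior with no remaining trace of the upgrade.
function makeRunPhase(balances, mint) {
  return {
    balanceOf(who) {
      return balances.get(who) ?? 0;
    },
    mintTo(who, amount) {
      balances.set(who, (balances.get(who) ?? 0) + amount);
    },
  };
}
```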

This would let the upgrade process be examined independently of the new-behavior code. The reader who is interested in how the new vat behaves only has to read the "run" phase. If they are willing to assume the upgrade went smoothly, they can pretend the vat has only ever used the "run" phase of the new code.

The reader interested in the upgrade process can split it at the schema of the data emitted by the gather-data phase (and consumed by the reload-data phase). If the vat was storing most of its state in secondary storage, these phases might be fairly small.

The new syscalls to support identity transfer of individual objects might be expressed at the ocap level as a new vatPowers.become(oldObject, newObject) (remembering that vatPowers are initially only available to buildRootObject(), which can choose to share some of them elsewhere, or not). Or perhaps become should only be made available to the upgrade code, to be used during the reload-data phase. Clearly become is a high-power authority that must be carefully controlled, because in general having message-sending access to object X should certainly not give you the ability to receive other senders' inbound messages to X as well. The upgrade code is special and more powerful than any of the normal runtime code.
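A toy model makes the danger vivid. All names below are assumptions (this is not liveslots code): become() reroutes every inbound message addressed to the old object's identity, something that ordinary message-sending access never permits:

```javascript
// Miniature dispatch table illustrating identity transfer.
function makeIdentityTable() {
  const targets = new Map(); // exported identity (vref) -> current behavior

  return {
    exportObject(vref, behavior) {
      targets.set(vref, behavior);
    },
    // become: transfer the identity `vref` to a new behavior object.
    become(vref, newBehavior) {
      if (!targets.has(vref)) throw new Error(`unknown identity ${vref}`);
      targets.set(vref, newBehavior);
    },
    // deliver: what a kernel-side dispatch might do with an inbound message.
    deliver(vref, method, ...args) {
      return targets.get(vref)[method](...args);
    },
  };
}
```

Whoever holds become() can silently replace the behavior behind an identity that other parties believe they have audited, which is why it must be confined to the upgrade phase.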

Once vats have an upgrade facet, we can build governance/voting mechanisms around its use out of our normal ocap tools. Zoe, when asked to instantiate a contract into a new vat, could also be told what the upgrade policy should be. Zoe then holds the upgrade facet and only exercises it when told to by a suitable vote.

For upgrades that are driven by validator consensus, rather than some other constituency, here's a thought: we number the upgrade proposals; validators sign a Cosmos transaction that includes a message to a Cosmos-SDK governance/voting module; that module watches for the votes to meet a passing threshold; if/when that happens, the module sends a special message into SwingSet (following pathways similar to those that IBC and Mailbox messages travel). That message is routed into some special vat (perhaps Zoe) which can react to it. Maybe someone else sends a message into Zoe first to register the newCodeBundle and assign it a number; then a subsequent validator vote to activate that number can be the signal that Zoe uses to exercise the upgrade facet.

@warner warner added enhancement New feature or request SwingSet package: SwingSet labels Oct 7, 2020
@katelynsills
Contributor

How would this compare to upgrade purely at the Zoe level?

@warner
Member Author

warner commented Oct 9, 2020

Hm.. what does a Zoe-level upgrade mean? I can think of a few options.

One is that we don't attempt to modify existing contracts, we just make sure that any new ones are created with the new version. That approach wouldn't even require any code changes: you just install V2, tell everyone about it, and then hope they decide to use V2 instead of V1 going forward. You might add a way to make V1 closed for business (denying the ability to instantiate it ever again). Overall it'd be pretty simple, maybe useful in some cases, but probably not very general-purpose (it kinda qualifies as "upgrade" but not really).

Another option would be for contracts to be shipped with upgrade functionality already present: some code that could, in response to some carefully-managed message, evaluate a new source bundle and hand all its state to the result. Zoe or ZCF might hold on to that upgrade facet and only exercise it in response to some governance-type voting mechanism (or only enable it if some assertion trips, or engage a time-lock veto period, or any number of nifty safety mechanisms we might think up).

That would probably be the best in terms of making it clear up front what circumstances might trigger an upgrade, and what would happen to the state as the upgrade happens. The downside is that you have to figure out all of that ahead of time, and whatever you miss might not be upgradeable (or at least not without some deeper mechanism: I expect we'll have multiple layers, and we'll use the shallowest tool that does the job). The state transfer is the part I'd be most worried about: we don't know what the V2 behavior will be ahead of time (if we did, we'd publish that instead of V1), so I'm not sure we could write a correct state exporter ahead of time. But it might be possible.

I should note that this proposed API could be used either for dynamic vats (in which case Zoe would hold the upgrade facet for the dynamic vat that holds ZCF and a contract), or for static vats (in which case some deeper governance mechanism would hold the upgrade facet for the static vat that holds Zoe itself). Upgrading Zoe is, of course, a much more serious undertaking than upgrading a single contract. Upgrading a single contract vat cannot violate offer safety (well, I guess it depends upon how much power we invest in the ZCF code), but changing Zoe's behavior could cause all sorts of damage. So the mechanism around a Zoe upgrade would need to be that much more involved and cautious.

This API is mostly about unplanned upgrade, where we weren't prepared enough to add something into the contract or into the vat to perform an orderly handoff of state from old code to new code, and we find ourselves in a situation where the only option is to re-run the vat from scratch but with different code. We should probably have both.

@katelynsills
Contributor

Let's call the first version a "manual upgrade" or a "user-has-to-move upgrade". I would agree that that's the default and is currently possible.

I was talking about something different, that I think your "other option" doesn't capture. We already have the mechanism for one contract to create an instance of another contract. And, we already have a task for allowing a Zoe contract to transparently use ZCF to transfer offers to another contract. So no need for any special state exporter for upgrade at the Zoe contract level.

So to sum up the "Zoe-level upgrade" that I'm describing: Contract A can be given an installation and start a new contract B and move the offers that were in contract A to contract B. If Contract A and Contract B are meant to maintain the same identity, we get upgrade for free out of features that we already know we need. Furthermore, this all must be in the contract code, so it's much better than vat upgrade in that it is transparent. Upgrade of this kind can only happen if the code allowed it, so the user can read the code and see where upgrade might or might not happen and decide whether to join on that basis.
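A hedged sketch of this pattern, with startInstance and receiveOffer as illustrative stand-ins rather than real Zoe/ZCF API, might look like:

```javascript
// Contract A, handed an installation for contract B, starts B and moves
// its offers across. The upgrade path lives entirely in the contract code,
// so a reader can see exactly where (and whether) upgrade can happen.
function makeContractA(startInstance) {
  const offers = [];
  return {
    addOffer(offer) {
      offers.push(offer);
    },
    upgradeTo(installationB) {
      const contractB = startInstance(installationB);
      for (const offer of offers.splice(0)) {
        contractB.receiveOffer(offer); // transparent offer transfer
      }
      return contractB;
    },
  };
}
```

If A and B are meant to keep the same identity, the only extra ingredient is the offer-transfer mechanism already planned, not any new kernel-level authority.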

That's good to know that this ticket is about the unplanned upgrades. I think it makes sense for upgrading Zoe itself, but I think it doesn't make sense to use vat upgrade for upgrades of contracts if we have a mechanism at a higher level.

@erights
Member

erights commented Oct 9, 2020

Because the contracts running under Zoe are how our users express credible commitments to each other, and because Zoe provides the installation and the source code as validation that the commitments you're interacting with are according to that code, I think that contract upgrade is its own conversation.

The code should express somehow what the possibilities for its future upgrade are, and what will be the process by which that is decided. ZCF might well expose an API to make some choices straightforward to express, even those that entail magic brain surgery on the state of the contract. The text of the contract, by omission, makes strong credible commitments about what kinds of upgrades are not possible, and what kinds of decisions about upgrades are not possible.

@erights
Member

erights commented Oct 9, 2020

I posted the above before I read @katelynsills's comment. I think we're agreed on the fundamental principles --- a contract is only upgradable to the extent, and in the manner, that the code of the contract visibly states. @katelynsills also makes a distinct point that we should prefer less magical mechanisms over more magical mechanisms when these are adequate. Despite my brain surgery comment, I agree with this preference. But this is one where I would not be surprised to learn that more magical interventions are indeed sometimes necessary. We need some realistic experience before concluding that we need more magic.

In a separate conversation, @dtribble points out a crucial case that the contract cannot solve for itself without magical help from ZCF and Zoe. If a contract panics --- if it hits an internal error such that it knows that its state is corrupted and it cannot continue, or if something outside the contract with the right to terminate the contract (ZCF, Zoe, SwingSet kernel) makes this determination about the contract or its vat, the default behavior is zcf/contract vat death followed by Zoe doing a payout/exit of all the seats associated with that contract. Our system currently supports only this default. @dtribble points out that this default would be catastrophic for some contracts.

@katelynsills made an interesting suggestion that provides a least-magical way to handle this case specifically: A contract might say only how to "upgrade" it if it panics. If it says only this, then it is immutable until and unless it panics. In whatever way the contract expresses what happens if it does panic, it again would still need to commit to how these decisions get made, in order to still have a credible commitment to how these decisions will not be made.

@erights
Member

erights commented Oct 9, 2020

These instructions to Zoe about how it should handle the sudden death of the contract vat can be thought of as an "advance directive". The statement about what should happen to the live offers, or the assets to the extent that the contract has leeway to reallocate those, can be thought of as a "will", with the receivers being the contract's heirs.

The leeway issue is interesting. For any manual upgrade that the contract does for itself, to Zoe, that's just the contract code doing more stuff it is allowed to do. Nothing distinguishes it as an upgrade. Thus, such manual upgrades necessarily cannot violate offer safety or payout liveness, since Zoe enforces that even on adversarial contracts. In the case of the advance directive, we can still enforce both offer safety and payout liveness. We should still enforce offer safety. We probably should still enforce payout liveness, but this is less clear. Such payout liveness imposes a deadline on any emergency repairs. The following is probably a bad idea, but I can at least imagine that we might want to have exit conditions with two deadlines, where the longer one applies only during states of emergency. The fact that a state of emergency is only caused by a panic, and the (presumably vetted) contract code only panics on an undetected bug, provides some degree of safety against abusive declarations of states of emergency.

@erights
Member

erights commented Oct 9, 2020

Dispute resolution is much like upgrade, and we may in fact treat it like upgrade. (Attn @kleros) In a split contract (like AMiX) the players can throw the contract into dispute. If they do, then a pre-agreed dispute resolution procedure is engaged, involving either pre-agreed arbiters or a pre-agreed means of selecting arbiters. And a pre-agreed means of composing the judgements of the arbiters into a decision. This can all be done manually without any system support. Enforcing offer safety on the dispute resolution outcome is awesome and unprecedented. This tremendously lowers everyone's risks from corrupt arbiters.

However, payout liveness raises a dilemma similar to the one in #1848 (comment). Enforcing payout liveness imposes severe deadlines on the dispute resolution process. Deadlines that are reasonable for automatic execution may be painful to apply to a process of human judgement. Again, the following is a bad idea, but I can imagine that split contracts somehow accept a pair of deadlines, where the longer one applies only during dispute resolution. Unlike the rest of dispute resolution, this would require some new mechanism be provided by Zoe/ZCF.

@warner
Member Author

warner commented Oct 9, 2020

Yeah, upgrades of the contract code that are expressed entirely within the original contract code are the most pleasant of the alternatives (an orderly transfer of obligations). Replacing a vat via some magical unplanned-for process is the second-least pleasant. Changing the behavior at an even lower level would be the first-least pleasant. I suspect we may need all of these mechanisms sooner or later.

Let's use "contract upgrade" for the first case, the one @katelynsills is describing. And "vat upgrade" for the kernel-implemented vat-admin-facet-managed mechanism described in my opening comment here.

I'm super interested in the notion of assert events triggering an emergency situation which enables certain upgrade mechanisms that were otherwise prohibited. I expect the trick will be how to figure out just how badly the state is corrupted and how much new code will be necessary to recover.

The easiest case I can imagine is a false trigger: we have some invariant check that turns out to be more strict than is really necessary, and we don't notice it until some runtime event provokes it into firing. I think we'd want the contract to freeze in place, not triggering refunds or vat death or anything, just hit pause. (Maybe the kernel should arrange to rewind the crank that triggered the assert first, so the vat is in a known state, rather than executing just the first half of the operation). Then we humans investigate and figure out what happened. When we conclude that the assert was buggy and the state is actually just fine, we'd want a way to resume operation. I can imagine some voting mechanism (perhaps with a fairly small / minimally-stringent set of authorized parties) which disables the assert (perhaps for one crank only) and redelivers the message.

The next more complex situation I can imagine is one where our investigation reveals an actual problem, and we conclude that the simplest response is to kill the contract instance and have Zoe execute the payouts. It might be good if this were the default behavior if we don't implement the pause button (I think an assert in today's code would kill the current turn, but nothing else, which might leave the contract in a state where it can't do anything more than wait for a timeout).

The next more complex case would be us concluding that there is a real problem, but allowing the contract to unwind is not desirable, and we'd rather move the offers to a new contract where the problem is fixed. For this, we might want to abandon the last message (the one that triggered the assert) and instead send a special contract-upgrade message. Once the offers had been moved, we could figure out a way to re-send the troublesome message and allow it to flow through to the new contract.

More complex situations would be where our investigation reveals the problem originating before the assert-triggering message, such that rewinding that last message isn't sufficient. Here is where I think we'd need more invasive surgery.

@warner
Member Author

warner commented Sep 22, 2021

Here's a sketch of an upgrade API:

  • when creating a new vat, vatAdmin returns an UpgradeManager object as well as the new vat's root object
    • Zoe is expected to instantiate some publicly-visible governance policy and give it the UM
  • within the vat, vatPowers.upgradeRegistry is a facility for making data available to your successor
    • upgradeRegistry.registerExport(name, object) takes a Far Remotable and remembers it for later
    • upgradeRegistry.registerImport(name, object) takes a Presence and does the same
    • upgradeRegistry.registerKind(name, kindConstructor) takes a virtual-object constructor (details TBD) and does the same
    • upgradeRegistry.storeData(key, value) takes arbitrary (flat/pass-by-copy) data
    • the three register* methods update a table (in the DB-backed vatStore) mapping user-controlled strings to vrefs, and probably increment some reference counters
    • there is no way to get the registered objects back from the upgradeRegistry: they're for the successor, not the predecessor
    • vats are expected to register most of their long-lived named objects as soon as they are created, probably during buildRootObject, and their names are "well known" to the successor code (i.e. a programmer sees the name in the predecessor and adds a static string of the same name when they write the successor)
      • however dynamically created objects, e.g. a named Mint in Zoe, can be added later, as long as the names used are known to the successor code somehow (perhaps using storeData)
  • we could add some notifier-like object to inform the outgoing predecessor that an upgrade is about to happen, giving it a chance to close out operations in flight, or perhaps delay the upgrade until the vat is ready
    • this might be better handled between Zoe and the contract, in userspace, but I can imagine the kernel being involved
  • Zoe does UpgradeManager~.upgrade(newBundle, data) to trigger the upgrade process
    • (if "delay until ready" is implemented, do that first)
    • the kernel shuts down the old worker
    • the kernel replaces the vat's bundle (in the DB) with the successor bundle, and deletes the transcript
    • the kernel starts up a new worker
      • the new bundle is expected to implement upgrade(vatPowers, vatParameters, upgrader), instead of buildRootObject
    • the kernel makes a special dispatch.upgrade delivery to the vat
    • liveslots processes dispatch.upgrade by building an upgrader object and invoking the new code's upgrade() method
    • when the Promise returned by upgrade is resolved, zoe's UM.upgrade result promise is resolved with the same result
  • the new vat's upgrader object provides:
    • upgrader.subsumeExport(name, newObject) to provide a new Remotable or Representative to take over the identity of something that was exported by the predecessor, and registered with upgradeRegistry.registerExport(name, oldObject)
    • upgrader.getImport(name) returns a Presence that was previously registered by upgradeRegistry.registerImport(name, oldObject)
    • upgrader.replaceKind(name, kindConstructor) (TBD) provides new behavior code for the class of virtual objects managed by the given kind constructor
    • this upgrader object should be closely held by the initial vat code (in upgrade()), and not passed around
    • upgrader should be dropped when the upgrade process is complete
    • it might be a good idea for liveslots to deactivate/revoke upgrader once the upgrade() result promise is resolved
    • any imports that were registered by the predecessor but not accessed by the successor will be dropped
    • any refcounts that were incremented by the registry process will be decremented again when upgrade() finishes, so GC happens as if they were retained by a magical cross-lifetime reference that starts with the upgradeRegistry and ends when upgrade() is done
  • exported virtual objects are retained by the downstream importing vats, as well as any export-side virtualized data, and upgradeRegistry items
  • there is no facility to subsume individual exported virtual objects: you can only subsume the entire Kind by providing a new behavior constructor
  • when upgrade completes, any registered exports that are not subsumed are in a bad state: downstream vats may still have a reference (and may send messages), but the vat has no target associated with them. If/when we can unilaterally revoke objects (revoke a Remotable #2070), this would be a good time to trigger that. Otherwise, liveslots should just attach a dummy error-everything object to all the remaining registered exports
    • non-registered exports should get the same treatment, however it's not obvious how liveslots can learn what they are
    • perhaps dispatch.upgrade should have a conversation with the kernel that walks the clist, or somehow negotiates to see which clist entries are still meaningful, and which should be deleted somehow
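The registry/upgrader pairing above can be mocked in a few lines. This is illustrative only (not liveslots); in particular, loadData() is my own assumption for how the successor reads back storeData entries, which the sketch leaves unspecified:

```javascript
function makeUpgradeMachinery() {
  const registeredExports = new Map(); // name -> { current: behavior }
  const storedData = new Map();        // key  -> flat copyable data

  // Write-only facet held by the predecessor vat.
  const upgradeRegistry = {
    registerExport(name, object) {
      registeredExports.set(name, { current: object });
    },
    storeData(key, value) {
      storedData.set(key, JSON.parse(JSON.stringify(value))); // force pass-by-copy
    },
  };

  // Facet handed to the successor's upgrade(), then dropped when done.
  const upgrader = {
    subsumeExport(name, newObject) {
      const identity = registeredExports.get(name);
      if (!identity) throw new Error(`no registered export named ${name}`);
      identity.current = newObject; // old identity, new behavior
    },
    loadData(key) {
      return storedData.get(key);
    },
  };

  // How a downstream message to a registered export would be routed.
  const deliverTo = (name, method, ...args) =>
    registeredExports.get(name).current[method](...args);

  return { upgradeRegistry, upgrader, deliverTo };
}
```

Downstream vats keep sending to the same identity throughout; only the behavior behind it changes at the moment of subsumeExport.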

@warner
Member Author

warner commented Sep 29, 2021

@FUDCo and I walked through some more ideas:

  • vats which are sufficiently prepared can be upgraded with the API above, where a new code bundle is installed, the old worker is terminated, and the new worker launches from the saved virtual-object data (and the registry)
    • developers examine the old code to learn the names with which important objects and virtual-object kind constructors were registered
    • developers write new code which creates Remotables/behaviors to satisfy the obligations of the previously-exported objects
    • the governing body approves the upgrade, sending the new code bundle into the kernel API to perform the switchover
    • the transcript is deleted, the code bundle is replaced, and the upgrade message is delivered
  • for vats which are insufficiently prepared, the first step is a vat upgrade: the retroactive time-travelling manchurian candidate sleeper agent protocol #1691 -based migration step:
    • developers examine the old code and figure out which exported Remotables need to be registered, or (more likely) what state is in RAM rather than in virtual objects
    • developers write the sleeper-agent migration code bundle
      • this bundle is obligated to behave just like the old code: it must have the same syscall trace, although metering can diverge, and we need a way to allow GC to diverge
      • the migration bundle should accumulate the same state in RAM as the old code, but with enough extra tracing to keep track of what state needs to be registered/stored, e.g. keeping a Set of exported objects that did not previously need to be enumerable
      • the migration bundle should have a way to respond to a special "activation message", perhaps through a Promise exposed on vatPowers.upgradeManager, which indicates when it is safe to diverge
        • this would be triggered by a dispatch.something that is called by the kernel upgrade API, and never appears during normal operation
      • upon receipt of this message, the vat should write all its accumulated state to virtual objects, and/or register the important Remotables
      • after processing this message, either we declare that the vat is allowed to self-terminate, or we require it to continue to work (saving new state in the same virtual objects)
    • developers write the new code, which knows how to proceed from the DB/virtual-object/registered state written by the migration bundle
    • the governing body approves the migration, sending the migration bundle into the kernel API
      • the kernel loads a new worker with the migration bundle, and begins to replay the transcript
      • this may take a while, and can proceed in parallel with normal execution, since replay does not interact with the kernel DB, only the transcript
      • once migration replay is complete, it is safe (i.e. won't stall) to submit the activation message
      • the migration vat responds to the activation message with DB writes, moving all important state into the DB
    • after migration is complete, the vat is just as prepared for upgrade as the first case
      • so the governing body approves the actual upgrade, sending the upgrade bundle into the kernel API
      • this stops the old worker and launches a new one, which boots from the persistent/DB/virtual-object state
  • the chances that a vat is "sufficiently prepared" depend upon the complexity of its data and the adequacy of our virtual-object APIs: e.g. if virtual collections: range queries, sort options, indices #2004 is implemented and good enough to store an order book, then an order-book-using contract might use it and might have all its important data in the DB instead of RAM, which both reduces its memory footprint and makes upgrade a one-step process instead of a two-step process

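The sleeper-agent migration step can be sketched as follows. All names are assumptions; activate() stands in for the proposed promise on vatPowers.upgradeManager, resolved by a special dispatch that never appears during normal operation:

```javascript
// A migration bundle accumulates the same state in RAM as the old code,
// with extra tracing, and only writes it out when told it is safe to diverge.
function makeMigrationVat(storeData) {
  const tracked = new Map(); // state the old code kept only in RAM

  return {
    // During replay this must stay syscall-identical to the old code:
    // tracking is RAM-only and performs no syscalls.
    remember(key, value) {
      tracked.set(key, value);
    },
    // Upon the activation signal, flush accumulated state to the vatStore.
    activate() {
      for (const [key, value] of tracked) {
        storeData(key, value); // now safe: writes land in durable storage
      }
      return tracked.size;
    },
  };
}
```

The key invariant is that nothing observable diverges before activation, so the transcript comparison during replay still passes.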
Some other ideas:

We could give a secondary vatstore DB facility to the migration vat worker, so it could write out the migrated state as it goes, instead of deferring all the writes for the activation message. These could be syscalls, but they are not compared against the syscalls in the transcript (since the original/old code didn't make them). We could probably just filter them out of the transcript before comparison. They probably can't have any return values.

We might want two separate migration events: one which begins replay, and a second "cutover" event which replaces the old vat worker with the migration worker. These could be spaced out over several weeks, to give validators a chance to let the migration worker catch up. The begin-replay event might not need to be part of consensus; it could be an auxiliary message sent into SwingSet to prepare for a cutover. The cutover instruction is part of consensus, and could include a hash of the expected migration vat state (basically a hash of the secondary DB writes). Operationally, we'd announce a planned upgrade, validator operations would submit the first event, we'd wait a few weeks or something for all kernels to prepare the replacement worker, then the governing body would submit the cutover event.

Once all the important state is in the DB, we could perform a dummy upgrade (no code changes) on a regular schedule, perhaps once a month, which wouldn't change any behavior but would truncate the transcript. If we did this across all vats at the same time, a hopeful new validator (who has no state yet) could catch up efficiently if they launch just after the upgrade finishes. They can copy the DB state from an existing validator (assuming we get that hashed properly), and then they can launch new workers and won't have any transcripts to replay. This would not require us to rely upon deterministic/consensus heap snapshots.

@FUDCo
Contributor

FUDCo commented Oct 5, 2021

Thoughts on upgrade

This document is to capture the state of my thinking on upgrade. It is not yet a design, but a place to work out the ideas that will ultimately lead to a design. I expect that this will evolve into an actual design document as our understanding gels.

(Note: in a lot of our conversations we have used the term "upgrade". Upon reflection, I think the word "upgrade" implies a value judgement that is irrelevant to the problem at hand. From my perspective, the key issue is how to enable the code to be changed, rather than whether the change itself is an "upgrade". Improvement is often the motivation for changing code, but it's the change itself that introduces the technical challenges. This suggests that "update" might be a better term to use, and for a while I switched to using the term "update" throughout this document. However, in discussion @warner and I concluded that "upgrade" is the term that's been used in most conversations on this topic so far, and it's the word used in the various issues that have been filed on this topic, so for now "upgrade" it is.)

Upgrade strategies

We have identified three flavors of upgrade strategies that each present different tradeoffs with respect to flexibility, difficulty, API complexity, and scheduling of engineering effort. The principal distinction between them is the amount of upfront future-proofing work that must be done inside the deployed vat code. These are not so much upgrade paradigms competing to be The API, but differing approaches that will be appropriate in different circumstances depending on the constraints of the upgrade problem at hand in any particular case.

Designed-in anticipated upgrades - "Builtin"

To the extent that the creators of a body of code (in particular, a contract or other service that runs in/as a vat) anticipate the need for specific changes in the future and thus build in mechanisms to support them directly, we can regard upgrade as a purely user-space problem. This is probably sufficient for simple things like parameter changes (e.g., tweak an interest rate setting) that can be designed into an application's own API and effectuated without any actual modifications to the code itself. While it is plausible that we could add some library support to make this kind of thing easier, I'm not sure at this point what such support might look like. However, I don't think there are any fundamental problems we need to solve right now for this case.

Since any particular use of the Builtin strategy is self-contained within whatever application makes use of it, we will not consider it further here. This is not to downplay its importance -- which I expect to be significant -- but rather reflects that since it is (by definition) entirely inside the application domain it has no specific implementation impact on the design of Swingset.

The time travelling Manchurian Candidate sleeper agent protocol - "Replay"

To the extent that a body of code is written without any forethought at all regarding upgrade, we would instead have to resort to a more brute force approach based on wholesale code replacement. Our current best idea of how to do this is the thing that Brian refers to as "the time-travelling Manchurian Candidate sleeper agent protocol", where we substitute the code that defines a vat prior to t0, then execute a vat replay from the very beginning of time in a "do everything exactly the same as before" mode until reaching a predetermined switchover point, whereupon the new code, now having complete access to any hidden internal state that might have been invisible from the transcript but which got recomputed during the replay, can begin to express new behavior or present a changed API to its clients. In principle, this approach should be sufficient for essentially any imaginable upgrade, but is likely to prove tricky to orchestrate due to the need to never diverge from the recorded transcript during the replay execution. This could be made slightly less tricky by relaxing the deterministic replay rules slightly. In particular, we could:

  • Disable metering during replay (or perhaps, only during replay done specifically for upgrade) - This would free the upgrader from worrying about subtly different computational costs associated with the altered code, allowing more flexibility in computing new state.

  • Add new syscalls that access the vatstore but which are not matched against the transcript nor added to the crank hash - This would enable the new vat code to capture previously hidden state into persistent form and to add new or enlarged state without increasing memory pressure.
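The second relaxation can be sketched as a second family of vatstore syscalls that bypass transcript matching and the crank hash. A minimal illustration, where all names (`makeSyscallHandler`, `silentVatstoreSet`) are hypothetical stand-ins rather than real SwingSet APIs:

```javascript
// Illustrative-only sketch: a "silent" vatstore write path that persists
// data without being recorded in (or checked against) the transcript.
// The names here are hypothetical, not an actual SwingSet API.
function makeSyscallHandler(store, transcript) {
  return {
    vatstoreSet(key, value) {
      // normal path: recorded, and matched against the transcript on replay
      transcript.push({ type: 'vatstoreSet', key, value });
      store.set(key, value);
    },
    silentVatstoreSet(key, value) {
      // silent path: captures new upgrade-only state without perturbing
      // the recorded event stream
      store.set(key, value);
    },
  };
}
```

The point of the split is that the replaying vat can call the silent form freely without risking an anachrophobia fault, since nothing it does on that path is compared to history.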

This approach leaves open the problem of how to get rid of the old vat code once it is no longer required, since that would itself be an additional code upgrade. Obviously you'd like to be able to do this without leading to an infinite regress. One strategy might be to have each new upgrade remove the code that was obsoleted by the previous upgrade, but this feels very unsatisfactory to me for lots of reasons both practical and aesthetic.

Another consideration is that in a production setting, it might be faster, and thus desirable, if we could pause a Swingset while we stop and replay a single vat, leaving the other vat processes undisturbed. If an upgrade needs to regenerate the internal state of a vat (or perhaps a subset of vats), it should not be necessary to also replay all the other vats that are not being upgraded. Note that the current replay mechanism already replays individual vats sequentially, rather than interleaving their execution in the way it was necessarily interleaved when they executed originally. This suggests that implementing single-vat replay might be reasonably straightforward, but as yet the kernel does not have any actual mechanism to do this. (We have previously floated the notion of arranging for vats in separate processes to replay in parallel, as a way to speed up restart. It seems plausible to me that the work needed for selective single-vat replay and the work needed for parallel replay may share some common elements.)

Startup from explicit persistent state only - "Cold Start"

Intermediate between "all has been foreseen" and "no change was ever contemplated" is the kind of approach we expect to be followed for most upgrades. To the extent that an application can capture in the vatstore all the information needed to reestablish a completely functional working state, a vat process can simply be stopped, have the code that implements it replaced, and then restarted without replay (indeed, we already do restart without replay for transcriptless vats such as the comms vat). Instead of replaying from t0, it would rebuild its working state directly from the persistent store, possibly performing any required data migration or schema upgrades as part of this (whether to execute such data changes in a batch at upgrade time or incrementally as part of the future execution of the vat process is an important practical question, and one that could have significant impact on the upgrade API design, but I don't believe it's a question fundamental to this upgrade strategy per se).
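A minimal sketch of what such a restart-with-migration might look like. Everything here is illustrative: the `schemaVersion` key, the `interestRate*` keys, and the shape of the vatstore power are assumptions made up for this example, not a real API.

```javascript
// Hedged sketch of a Cold Start restart: on boot, the (new) code checks a
// schema-version record in the vatstore to decide whether it is starting
// fresh or migrating a predecessor's data. All names are illustrative.
const CURRENT_SCHEMA = 2;

function makeVatstore() {
  // stand-in for the real vatstore vat power
  const data = new Map();
  return {
    get: key => data.get(key),
    set: (key, value) => data.set(key, value),
  };
}

function buildRootObject(vatstore) {
  const stored = Number(vatstore.get('schemaVersion') || 0);
  if (stored === 0) {
    // first boot ever: initialize fresh state
    vatstore.set('interestRateBP', '250');
  } else if (stored < CURRENT_SCHEMA) {
    // restart after code replacement: migrate the old persistent data
    // (pretend v1 stored a percentage and v2 stores basis points)
    const pct = Number(vatstore.get('interestRatePct'));
    vatstore.set('interestRateBP', String(pct * 100));
  }
  vatstore.set('schemaVersion', String(CURRENT_SCHEMA));
  return { getRateBP: () => Number(vatstore.get('interestRateBP')) };
}
```

The absence of the version record doubles as the "upgrade has yet to happen" indicator discussed below, so no separate signaling mechanism is needed for the Cold Start case.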

Since the Cold Start strategy requires that all information necessary to resume operation be captured in the persistent store, it follows that suitably validated copies of the persistent store could also be used to initialize new validator instances. This seems better than demanding that anyone who wants to spin up a new validator be willing to assume the cost of completely re-executing the entire history of the entire chain from the very beginning. We've been worrying about this problem for some time, completely outside the context of the upgrade problem. While XS process snapshots provide a way to restart a vat without requiring replay, they don't lend themselves to being shared in a trustless way with others. In contrast, the contents of the persistent store can be so shared, since its history of data modifications -- and thus its state at any given time -- is part of the chain's consensus state. This suggests that as part of normal operation the chain should periodically (perhaps monthly) checkpoint the persistent store and place a hash of this into a block. New validators could then begin operation only having to replay any activity that had happened since the last of these periodic checkpoints -- or more likely, simply choose to begin operation at the time such a checkpoint is made.

Choosing and combining strategies

A question one might reasonably ask is: given that the Cold Start strategy not only supports upgrade more easily and directly than the Replay strategy, but might also end up being mandatory anyway to enable an open validator ecosystem, why spend time thinking about the Replay strategy at all? The answer is that it provides a recovery pathway in the plausible event of imperfect foresight. If it proves to be the case (presumably by mistake) that a vat actually had some hidden state whose loss might cause consensus breakage, a Replay upgrade might be our only way out of the problem. Note that if this happened it would probably be subtle and weird, since if we're normally stopping and restarting vats from cold storage with some frequency it seems likely that such a problem would manifest quickly (in particular, during testing before the code in question is even released). Indeed, one approach might be to delay investing any significant engineering effort into implementing the Replay strategy at all until such a time as we find ourselves in a situation where it is needed. Especially if we engineer all of our basic services and contracts around Cold Start upgrades, it is not entirely crazy to speculate that the need for Replay upgrades will never actually happen. The worry, though, is that if we do find ourselves in such a state of need, it might very well be in a situation of extreme time pressure to rectify some kind of urgent, catastrophic operational problem. Consequently we need to think very carefully about how best to invest our development resources here.

One thing that does seem clear is that in the event of a Replay upgrade, one of the things that the upgrade should try to accomplish is to leave the vat in a state where further upgrades can be effected using the Cold Start strategy. In particular, this answers the question raised above as to how a Replay upgrade gets rid of the old code that has been superseded: it is then followed by a Cold Start upgrade that does this.

API

There are two aspects of the upgrade API that can largely be considered independently. These can be roughly labelled the internal API and the external API.

The internal API is used by code within a vat to actually upgrade itself and its data. It is concerned with how the vat accesses persistent storage, how it learns what mode it is executing in, how it determines what it is supposed to do, and so on. It is principally about the means for actually interrogating and manipulating a vat's memory state and data store to effectuate any needed changes.

The external API is used to manage an act of upgrade: when it happens, how it gets initiated, and what actual changes are permitted (e.g., which code bundle gets substituted for whatever the vat was running previously). It is principally about governance and access control.

I anticipate that the actual usage of the internal API will vary idiosyncratically from one upgrade to another depending on the nature and complexity of the changes the upgrade entails, whereas the external API will be used in fairly stable arrangements within the implementations of the various hosting frameworks in which vats are run, largely independent of the details of any particular upgrade.

Internal (transmogrification) API

Both the Replay and Cold Start strategies presume that, from the vat's point of view, the code implementing the vat has been replaced prior to execution with the upgraded code. That new code can effect any data changes needed in addition to implementing any new or changed vat behavior. Consequently, the internal API is not concerned with getting the replacement code installed but rather with what that code is able to do once it's running. Given that constraint, the internal API needs to enable the code to do three things:

  1. Determine what situation it is beginning execution in, i.e., should it start performing data upgrades, or have those already happened and should it instead just start behaving according to whatever upgraded semantics it is supposed to have?

  2. Interrogate persistent data that had been previously stored prior to its beginning execution.

  3. Replace stored data with suitably modified versions of the same data.

In the Cold Start scenario, requirement #1 above can be accomplished by interrogating the persistent state. Upgrade code can record its status in the vatstore for later reference. Absence of such a record can be taken as an indicator that upgrade has yet to happen. In the Replay scenario, things are a bit more subtle, because the code is pretty much by definition in a situation that had not been prepared for it. However, even then inspection of the persistent state is likely sufficient because the code can presume that it is starting from the beginning; indeed, as mentioned above, I expect that a key goal of most Replay upgrade code will be to transform the vat state into one that can henceforth be upgraded via the Cold Start strategy. Consequently, the means to satisfy requirement #1 can be folded into the means to satisfy requirements #2 and #3.

Stored data can take two forms: (a) data explicitly read from or written to the vatstore using the vat power provided for this purpose, indexed by keys managed by the vat code itself, and (b) virtual objects, which in normal operation are managed implicitly by the VOM, stored using keys that are purposely hidden from the vat code. Case (a) does not require special treatment for upgrade, since everything is under the vat code's direct control, so essentially all of the API design challenge concerns case (b).

Persistent collections, once we have them, could introduce further complications, but given that collections are still a work in progress I'm not sure it makes sense to invest a lot of work in them here. A couple of observations: to the extent that they are explicit collections of explicit data, they would fall under case (a) above. To the extent that they contain references to virtual objects, they would indirectly fall under case (b), but not in a way that I think introduces any additional wrinkles into the design. It is entirely conceivable that there are semantic weirdnesses that I've missed which will make the story messier, but for now I'm going to set this question aside.

Given that case (a) is satisfied by the existing data access API, all the further design work here need only concern case (b), namely how to enable some kind of explicit interaction with persistent data that had hitherto been accessed only implicitly.

Each virtual object has an associated (in-memory) kind object that provides implementations for its behavior and its instance initialization logic. The execution of the initialization logic in turn defines the virtual object's shape, i.e., how it is to be serialized and deserialized to and from persistent storage. Each kind is assigned an internal kind ID when it is created. This kind ID is subsequently used as part of the vrefs of its instances, so that when a virtual object is read from disk, the vref can be used to locate the kind definition to deserialize the object and to associate the in-memory representative that is thus created with the virtual object's behavior.

A kind is described by an instanceKitMaker that is passed to the makeKind function. The canonical instanceKitMaker is shaped like:

  function makeFooInstance(state) {
    return {
      init(...args) {
        state.whatever = value;
        state.whateverElse = anotherValue;
        // ...
      },
      self: Far('foo', {
        method1(...method1args) {
          // do stuff...
        },
        method2(...method2args) {
          // do other stuff...
        },
      }),
    };
  }

The init function defines a new virtual object instance's shape (i.e., the collection of properties that get serialized) by assigning properties to the closed over state parameter. The self object provides behavior which accesses that state in the same way. (The real story under the hood is actually more complicated for various reasons, but that's the basic model.) The global makeKind function takes one of these instanceKitMaker objects and returns a kind maker function whose parameters match those of the init function.

I propose adding two new global functions: makeKindUpgrader and kindInstances.

The makeKindUpgrader function has the signature makeKindUpgrader(kindMaker). It will operate in tandem with a new, optional instanceKitMaker function, upgrade, whose signature is upgrade(oldState, ...args). The relationship pattern here is parallel to that between makeKind and the init function. The parameter to makeKindUpgrader is a kind maker, indicating the virtual object kind whose upgrader is being made. It returns a kind upgrader function, whose signature is upgrade(virtualObject, ...args). When the kind upgrader function is called, the VOM reads the current state of the virtual object into a simple JavaScript object, whose properties contain the deserialized properties of the serialized instance, and passes it as the first argument to the corresponding kind's upgrade function, along with the other args, rather like the way the init function is called when a new instance is created. The upgrade function's job is to assign properties to the (at that point uninitialized) state object exactly the way the init function does. The overall flow of this operation is essentially the same as init, the key difference being that the virtual object whose state is defined in this way retains the vref of its former self rather than having a new vref generated for it. All references to the old object now refer to the new object (or rather, they refer to the same object but the object's persistent state has been transmogrified). So to the above pseudo-example you'd add something like:

      ...
      upgrade(oldState, ...args) {
        state.whatever = oldState.whatever;
        state.whateverElse = computeSomething(oldState.whateverElse);
        state.wholeNewWhatever = someEntirelyNewValue;
        // ...
      },
      ...

The kindInstances function has the signature kindInstances(kindMaker). Like makeKindUpgrader it takes the kind maker function as the designator of the kind upon which it will operate. It returns an iterator over all of the virtual objects of the indicated kind.
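Together, the two proposed functions support a batch upgrade: iterate over every instance of a kind and apply the upgrader to each. Since neither makeKindUpgrader nor kindInstances exists yet, both are mocked below over a plain array purely to show the intended call pattern; the vrefs and state shapes are made up.

```javascript
// Mock VOM data: two instances of a hypothetical "foo" kind.
const fooInstances = [
  { vref: 'o+1/1', state: { balance: 5 } },
  { vref: 'o+1/2', state: { balance: 7 } },
];
const makeFooKind = () => {}; // stand-in kind maker used only as a designator

// mock of the proposed kindInstances(kindMaker) iterator
const kindInstances = _kindMaker => fooInstances;

// mock of the proposed makeKindUpgrader(kindMaker): the returned function
// rebuilds an instance's state from its old state, keeping its vref
const makeKindUpgrader = _kindMaker => (vobj, ...args) => {
  const oldState = vobj.state;
  // what the kind's upgrade(oldState, ...args) body would do:
  vobj.state = { balance: oldState.balance, currency: args[0] };
};

// batch upgrade: every instance gets the new shape in one pass
const upgradeFoo = makeKindUpgrader(makeFooKind);
for (const vobj of kindInstances(makeFooKind)) {
  upgradeFoo(vobj, 'IST');
}
```

Whether a batch pass like this is affordable depends on the instance count, which motivates the lazy alternative discussed next.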

If all of the instances of a virtual object are upgraded in a batch, then it will be sufficient for the self object of the upgraded implementation to simply provide the upgraded behavior for the virtual object kind. If upgrades are performed incrementally or selectively, then it will be the responsibility of the behavior code itself to tell the old (unupgraded) and new (upgraded) instances apart and behave accordingly. In particular, if it is possible to tell them apart by inspection (e.g., by providing a version number or by testing for the presence or absence of particular properties), then doing lazy upgrades should be feasible. It is thus a requirement that it be possible to invoke a virtual object's upgrader from within the same object's behavior.
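The lazy variant can be sketched as behavior code that inspects each instance on first touch and upgrades it in place. The property names (`version`, `balancePct`, `balanceBP`) and the `vobj` shape are illustrative assumptions, not a real liveslots API:

```javascript
// Minimal sketch of lazy (on-touch) upgrade: old-shape instances lack a
// version property; the behavior upgrades them the first time they are
// used. All names here are made up for illustration.
function upgradeState(oldState) {
  // pretend v1 stored a percentage and v2 stores basis points
  return { version: 2, balanceBP: oldState.balancePct * 100 };
}

function makeBehavior(vobj) {
  return {
    getBalanceBP() {
      if (vobj.state.version === undefined) {
        vobj.state = upgradeState(vobj.state); // lazy in-place upgrade
      }
      return vobj.state.balanceBP;
    },
  };
}
```

This is the pattern that requires the upgrader to be invocable from within the object's own behavior, as noted above.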

I believe these two functions will be sufficient to enable the ColdStart upgrade case. However, the Replay case has an additional notable complication: the need to avoid any visible divergence, during replay, from the event stream that was recorded in the vat transcript. I believe this can be accommodated by executing Replay upgrades with an additional vat power, silentVatstore, which presents the same data API as the vatstore power but does not log to the transcript. Although this would allow arbitrary intermediate persistent state to be captured explicitly during the replay, it would not support the virtual object instance upgrade machinery described above, since that depends on the VOM's implicit access to the vatstore. I don't think it's feasible to somehow augment the virtual object API to distinguish which implicit accesses are part of the replay and which are part of the upgrade. In addition, to maintain historical accuracy, the replay of the old behavior could potentially need continued access to the old state up to the end of the replay process. Consequently, I don't believe in situ upgrade of persistent state is feasible during Replay upgrades. Actual virtual object data upgrades would need to be deferred to the end of the replay period, at which point the upgrade API would become available for use. The key requirement here would be some way for the upgraded code to ascertain that it had reached the end of the replay. One approach would be a special delivery made at the end which acts as the signal that this had happened. I think also that the silentVatstore power should not be available beyond the end of the replay, as after that time it could be a source of consensus-breaking non-determinism. (Conceivably, its loss of function at the end of replay might somehow be usable as a pathway for signaling that replay is over, but my sense is this would probably be a bad idea.)

External (control) API

Performing an upgrade on a vat involves replacement of the vat's code, which means that at the very least the vat needs to be shut down and restarted. The logic and workflow for doing this differs between static and dynamic vats, since dynamic vats can (by definition) be managed by other vats whereas static vats can only be managed by the Swingset kernel and its associated controller object (and, of course, indirectly by the host in possession of the controller object).

For a static vat, the configuration (which specifies the bundle or source file that the vat code is to be loaded from) is a parameter to initializeSwingset. Restarting a static vat is basically the same thing as restarting the swingset itself. There is currently little provision for managing static vats individually. Consequently, updating a static vat consists of: shut down the swingset, modify the configuration to point at the new vat code, then restart the swingset. From the perspective of chain consensus, this is an out-of-band operation, though of course to actually upgrade swingsets running as part of a chain the validators will have to coordinate the shutdown-replace-restart operation to all perform the same replacement at the same time. This is some kind of governance activity, but I'm not sure what it means in practical, operational terms since it does involve shutting things down. In principle, some kind of automation could be added to orchestrate the switchover given some kind of meta-configuration descriptor, which descriptor in turn could be the subject of a governance action of some kind.

<as far as I've gotten written down>

@warner
Member Author

warner commented Dec 22, 2021

ZCF and Contracts, Baggage, Zygotes

The upgrade design we've sketched out over the last few weeks looks at roughly three categories. The smallest does not involve the kernel at all: simple parameter changes (which scarcely qualify as "upgrade") and in-same-vat evaluation of new code (but I believe @erights and others aren't a fan of that, and would rather see all code-replacing upgrades use a larger category). The middle category replaces the entire userspace code bundle and gives it a chance to use a subset of the data (virtual objects/collections) prepared by its predecessor. The largest category involves first using the #1691 sleeper-agent protocol to retroactively prepare, then applying a middle-category upgrade. This writeup concentrates on the middle category.

While SwingSet will provide a way for any dynamic vat to be upgraded, the primary Agoric use case is specifically for contract vats. All such vats launch with the ZCF (Zoe Contract Facet) bundle, after which Zoe sends a message to ZCF with the contract bundle to be loaded. When we upgrade a contract, we're not generally upgrading ZCF: we're only upgrading the evaluated contract bundle. As a result, the "reload vat with different bundle" scheme described above won't actually help. Also, we need a design that is #2268 zygote-friendly.

Our plan for this is to have ZCF check the "baggage" (data from the predecessor) to determine the state of the contract bundle: has it been installed/evaluated, both installed and started, or neither. Then, during buildRootObject, ZCF repeats these operations, but with the new contract bundle. By performing both install and contract.start() during this setup phase, the contract will have a chance to rewire all virtual-object Kinds (#3062) before any other messages can arrive. Those other messages might cause a virtual object Representative to be instantiated (directly or in the course of executing other code), so we need everything to be ready to go by the end of the setup phase.

We'll need a mechanism for Zoe to tell the new ZCF instance about the different contract bundle to use. The kernel upgrade API should allow the caller (Zoe) to provide both the vat bundle (ZCF, unchanged) and the vatParameters for the new vat, just like it did for the original creation. Then Zoe should put the contract bundle in vatParameters where it will be available during the setup phase, instead of sending it as a message to the ZCF root object. This should become easier when #3269 bundlecaps are implemented (so vatParameters will hold a small reference instead of the whole bundle); however, vatParameters cannot currently hold refcounted caps (it is limited to plain JSON-serializable data), so we may need some enhancements.

Zygotes (#2268) allow us to amortize the cost of evaluating code bundles, by freezing a copy of the vat before it has differentiated too far, and using that copy as a template from which clones can be made and further differentiated. In particular, we can use the template vat's heap snapshot as a starting point, so the clones do not have to repeat the (expensive) code evaluation step. We expect to have a moderate number of contracts, but a much larger number of instances of those contracts, so if the evaluation of the contract code is non-trivial, it's probably a win to start from a post-evaluation heap snapshot.

@erights suggested that the ZCF bundle is likely to be small compared to the contract bundles, and making a template/zygote out of the "post-ZCF but pre-contract" state wouldn't be worthwhile. I'm not sure I agree, but we'll need to measure the costs to know for real.

To support zygotes, the first time around, contract installs will be delivered with a separate message, as are contract instantiations. This provides multiple stages of differentiation in the vat's life, any of which might be used as a template:

  • 1: ZCF has been loaded, but nothing related to a contract
  • 2: a particular contract has been evaluated ("installed"), but not started, so nothing specific to an instance
  • 3: the contract has been started, now it is a distinct instance (with configuration parameters, an owner, divergent mutable state, etc)

We certainly expect to build a zygote template out of stage 2 (one per contract). Zoe will remember this template as part of the "contract install", and make a clone each time it is asked to instantiate the contract. We might also choose to record a single template from stage 1 (just ZCF, no particular contract yet), to accelerate the install() process, or we might not bother.

So the contract bundle will appear in an install() message the first time around. But during upgrade (which cannot involve a new install() message, because we need upgrade to complete before any other message arrives and tries to interact with an upgraded receiver object), we need to deliver the new contract bundle through vatParameters instead.

ZCF must be prepared to be executed in either the initial (empty) vat, or to find itself in a pre-existing vat (with baggage). Likewise, contract code must be equally prepared to be the new version instead of the original. ZCF must extract part of the "baggage" and share it with the contract's start() function, so the contract can tell if it is version1 and responsible for creating its world, or version2+ and responsible for upgrading it. The "baggage" will hold the "kind handles" we'll add to #3062 which allow version2 to attach new behavior to the existing Kinds. Liveslots will require that the newly-upgraded vat rewire all existing Kinds by the end of the setup phase, which is why ZCF in the version2 vat must evaluate the contract bundle and invoke its start() method so early.
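The baggage-flag logic ZCF needs can be sketched compactly. The baggage keys (zcfStarted / installed / started) follow this write-up; the `hooks` object is a wholly hypothetical stand-in for the real ZCF machinery (bundle evaluation, Kind rewiring, invoking the contract):

```javascript
// Hedged sketch of ZCF's setup-phase decision tree, driven by baggage
// flags left by the predecessor. The hooks are placeholders, not real
// ZCF internals.
function buildZcfRoot(baggage, vatParameters, hooks) {
  if (!baggage.has('zcfStarted')) {
    hooks.firstTimeInit(); // makeKinds; kind handles go into the baggage
    baggage.set('zcfStarted', true);
  } else {
    hooks.rewireKinds(); // reattach new behavior to predecessor's Kinds
  }
  if (baggage.has('installed')) {
    // upgrade path: the replacement contract bundle arrives in
    // vatParameters rather than via a later install() message
    hooks.evaluateContract(vatParameters.contractBundle);
    if (baggage.has('started')) {
      hooks.startContract(); // contract sees its own "started before" flag
    }
  }
  return {
    install(bundle) {
      hooks.evaluateContract(bundle);
      baggage.set('installed', true);
    },
    start() {
      hooks.startContract();
      baggage.set('started', true);
    },
  };
}
```

Note how the same code serves both generations: in the version1 vat the install/start work arrives as later messages, while in the version2 vat it all happens inside buildRootObject, before any other message can reach the vat.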

So the first version of the contract vat will go like this:

  • Zoe creates the ZCF zygote:
    • vat1 = vatAdmin~.createDynamicVat(zcfBundle)
      • the ZCF bundle is evaluated, which might do some makeKind calls at the top level (module initialization)
      • ZCF's buildRootObject is invoked, which (somehow) gets a baggage argument
      • ZCF checks baggage.zcfStarted, which is missing, so ZCF does first-time initialization
        • this may involve some makeKinds, and ZCF must store the Kind handles in the baggage, for future generations
        • ZCF sets the baggage.zcfStarted flag
      • ZCF looks for baggage.installed and baggage.started. Since this is the first time around, both are missing, and no additional work needs to be done.
    • Zoe turns vat1 into a template/zygote, before it becomes specialized for any particular contract, maybe zcfZygote = vat1.handle~.zygotify()
  • Zoe creates the contractXversion1 zygote:
    • vat2 = zcfZygote~.fork()
    • vat2.root~.install(contractXversion1bundle)
      • ZCF reacts to install by 1: evaluating the contract bundle, and 2: setting the installed flag in the baggage
        • the contract bundle module initialization code might do a makeKind
    • contractXzygote = vat2.handle~.zygotify()
  • Zoe creates an instance of contract X
    • vat3 = contractXzygote~.fork()
    • vat3.root~.start(instanceParameters)
      • ZCF reacts to start by 1: invoking the contract's start() method (with various ZCF instance-specific things) and 2: setting the started flag in the baggage
      • start() is given a subset of the baggage. The contract should look in this subset for a "I've been started before" flag. When it doesn't find it, the contract does version1 setup stuff.
      • the contract's start() code will do some makeKinds. All Kind handles should be stored in its baggage subset
  • users interact with vat3 (aka contract X version 1 instance 1)
    • this creates some virtual objects, some of which are exported and then imported into other vats, which become obligations that any subsequent version must be able to honor
    • any state needed for a future version must be Durable, and reachable from a virtual object or the baggage

Then, when Zoe and some governance mechanism decides that this instance should be upgraded to version 2:

  • (optional/uncertain) Zoe tells the contract "prepare to be upgraded" and waits for an ack
  • Zoe does vat3.handle~.upgrade(zcfBundle, { vatParameters: { contractBundle: contractXversion2bundle } })
  • kernel shuts down any online vat worker for vat3
  • kernel deletes the vat3 transcript and heap snapshot, and the recorded vatParameters, but leaves all other data intact
  • kernel stores the newly-provided zcfBundle and new vatParameters
  • kernel/vat-warehouse brings a vat3 worker online
  • since there is no heap snapshot, the worker starts with lockdown(), makeLiveSlots(), setBundle (which evaluates the new/not-new zcfBundle), and buildRootObject
    • ZCF's buildRootObject sees baggage.zcfStarted is true, so it rewires any Kinds created by its predecessor
    • ZCF sees baggage.installed is true, so it looks in vatParameters for the new (version 2) contract bundle, and evaluates it
      • contract bundle top-level module-initialization runs
    • ZCF sees baggage.started flag is true
      • ZCF invokes contract.start() and gives it the same subset of the baggage as its predecessor did
        • contract looks in baggage for its contract-specific "have we started before" flag, sees it set, knows it is upgrading
        • contract pulls Kind handles from baggage and rewires them to new behavior
  • all Kind rewiring finishes by end of buildRootObject. Liveslots confirms this, flunks the upgrade if any were left dangling.
  • vatid remains the same (vat3), all previously-exported objects remain valid
    • (probably/uncertain): exports which were "virtual" but not "durable" are disavowed: the kernel object table marks them as broken (with no owner vat), they are removed from the vat3 clist. Needs support to enumerate all virtual objects. Identity is retained, but unavailable to new vat3 code.
  • vat3 is now ready for new messages
  • someone sends a message to a vat3 virtual object, new Representative is created, uses new Kind behavior code

@warner
Member Author

warner commented Dec 22, 2021

Failed Upgrades Should Leave Old Version In Place

As mentioned in the meeting this afternoon, one really desirable property would be for a failed upgrade to leave the vat in its previous configuration. I think we can pull this off by putting all of the upgrade steps into their own crank, which (thanks to the crankBuffer) can be committed or rolled back as a unit. Just as we currently have a create-vat run-queue item type, we should make upgrade-vat events live on the run-queue. They must halt the current worker, but once that's done, all the subsequent state changes should be revertible things like kvstore writes.

I believe the heap snapshot ID is recorded in the KV store, so as long as the lifetime management of snapStore works (i.e. defer snapshot deletion until after a commit), we should be able to delete the snapshot ID from the kvStore inside a crank, and then crank.abort can revert the deletion without deleting the actual snapshot.

The one part that might be funny is erasing the transcript, because transcripts live in the streamStore, and are indexed by vatID. The upgrade-vat event wants to erase the old transcript and build one delivery of the new transcript, then either commit or revert. I think we build the entry in RAM first, but I don't know if we write the delivery as soon as the crank is finished (before deciding whether to commit or revert), or only after we know we'll be committing. And I don't think the streamStore goes through the crank buffer, so I don't know that we can commit/revert it in the same way we can do with the kvStore.

@FUDCo
Copy link
Contributor

FUDCo commented Dec 22, 2021

streamStore commits are effected by tracking the index of the end of the stream in the KV store.

@warner
Copy link
Member Author

warner commented Dec 22, 2021

streamStore commits are effected by tracking the index of the end of the stream in the KV store.

If the end-of-stream index is set (in the crank buffer) to 0, then we write a new transcript entry, then we set end-of-stream to the end of that new entry, then we revert the crank buffer: is the start of the original transcript now clobbered? I think the transactionality of the streamStore relies upon it being append-only, and "erasing" the transcript violates that assumption.

If so, then we must either improve streamStore to tolerate this, or make sure we don't write the new entry until after we know we're going to commit for real. Maybe we incorporate a crankBuffer-like thing into streamStore that holds the new entry in RAM until commit.
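The "crankBuffer-like thing" suggested above might look like this sketch (not the real streamStore API): new entries are held in RAM and only flushed to the append-only log on commit, so an abort can never clobber existing transcript entries.

```javascript
// Hypothetical sketch of a stream store that buffers new entries in
// RAM until commit. The committed log stays append-only: abort()
// discards only the pending entries from the current crank.
class BufferedStream {
  constructor() {
    this.committed = []; // durable, append-only log
    this.pending = [];   // entries written during the current crank
  }
  append(entry) {
    this.pending.push(entry);
  }
  commit() {
    this.committed.push(...this.pending);
    this.pending = [];
  }
  abort() {
    this.pending = []; // the committed log is untouched
  }
  entries() {
    return [...this.committed];
  }
}
```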

@warner warner added this to the Mainnet: Phase 1 - RUN Protocol milestone Jan 19, 2022
@FUDCo FUDCo self-assigned this Jan 21, 2022
@Tartuffo Tartuffo added the MN-1 label Jan 21, 2022
@warner warner self-assigned this Jan 26, 2022
@Tartuffo Tartuffo removed the MN-1 label Feb 7, 2022
@Tartuffo Tartuffo removed this from the Mainnet: Phase 1 - RUN Protocol milestone Feb 8, 2022
warner added a commit that referenced this issue Feb 18, 2022
This is a first pass at the API you'd use to tell the kernel to upgrade a
dynamic vat. None of this is implemented yet.

refs #1848
warner added a commit that referenced this issue Mar 31, 2022
This allows liveslots to abandon a previously-exported object. The
kernel marks the object as orphaned (just as if the exporting vat was
terminated), and deletes the exporter's c-list entry. All importing
vats continue to have the same access as before, and the refcounts are
unchanged.

Liveslots will use this during `stopVat()` to revoke all the
non-durable objects that it had exported, since these objects won't
survive the upgrade. The vat version being stopped may still have a
Remotable or a virtual form of the export, so userspace must not be
allowed to execute after this syscall is used, otherwise it might try
to mention the export again, which would allocate a new mismatched
kref, causing confusion and storage leaks.

Our naming scheme would normally call this `syscall.dropExports`
rather than `syscall.abandonExports`, but I figured this is
sufficiently unusual that it deserved a more emphatic name. Vat
exports are an obligation, and this syscall allows a vat to shirk that
obligation.

closes #4951
refs #1848
warner added a commit that referenced this issue Mar 31, 2022
This iterates through all previously-defined durable Kinds and asserts
that they have been reconnected by the time buildRootObject()
completes.

It still needs better error delivery path: we want the upgrade to fail
and get rolled back, but currently `startVat` doesn't have a good way
to signal the error.

refs #1848
mergify bot pushed a commit that referenced this issue Mar 31, 2022
We create a durable Kind, and reattach behavior to it in v2. The
handle must travel through baggage, demonstrating that baggage works.

I'm still looking for the right way to use these VatData functions
from within swingset tests: other packages should import
@agoric/vat-data, but that might be circular from here

refs #1848
@warner
Copy link
Member Author

warner commented Apr 2, 2022

I have two plans for deleting/dropping/abandoning everything. I'm working on implementing the first, but I wanted to write up the second and see if we can switch to it by MN-1 because it has some benefits.

Plan 1: stopVat()

In this approach, we rely upon being able to talk to the vat one last time before the upgrade. We send a dispatch.stopVat() to it, and the vat does internal sweeps to figure out what is getting deleted. The kernel doesn't need to delete anything itself: the vat makes all the decisions and uses syscalls to tell the kernel what goes away.

Within stopVat, liveslots does:

  • Walk pendingPromises and syscall.resolve each with a rejection.

  • Walk exportedRemotables to find the vrefs of all exported Remotables. These are all "precious" and therefore not durable, so these will all be abandoned. Preserve the root object (o+0) because the new version will provide a replacement. For each one, we forge a dispatch.dropExport and dispatch.retireExport, making liveslots think the kernel has given up on it. This deletes the vref and Remotable from exportedRemotables (which is boring, RAM is going away anyways), but also removes it from any weak-keyed virtual/durable collections. We must also do a syscall.abandonExport on these, making the kernel delete it from the c-list, so nobody will bother the vat with them in the future.

  • Ask the VOM for a list of non-durable (merely-virtual) KindIDs

  • Use that list to walk the export-status portion of the vatstore (vNN.vs.vom.es.${baseref}) to find all the virtual facets that are reachable by the kernel (ignoring the ones that are merely recognizable). For each vref, forge a dispatch.dropExport and dispatch.retireExport, and make a syscall.abandonExport, as above. This removes the export pillar from all non-durable merely-virtual objects, which may or may not cause them to be deleted (they might still be retained by the RAM or vdata pillars).

  • (It's probably a good idea to accumulate a big list of vrefs from both walks, then do a single dispatch.dropExports, a single dispatch.retireExports, and a single syscall.abandonExports, with the same big vref array in each. Or do batches of 100 or something.)

  • Walk slotToVal to enumerate every vref/baseref that liveslots is watching with a finalizer. Ignore the entries whose WeakRef is already dead: those must have finalizer callbacks waiting to run as soon as BOYD lets them. For the rest, pretend the finalizer has already run (add the vref to possiblyDeadSet and delete the vref from slotToVal) and unregister the object from the finalizer (to keep the real finalizer from running). This will make the next BOYD believe that userspace has simultaneously dropped everything that can possibly be dropped (Presences and Representatives). The actual objects will still be in memory, and will still be referencing each other, but we're not looking at userspace memory anymore. We're not allowing userspace to run anymore, so it can't create new references either.

  • Perform a bringOutYourDead. The pent-up finalizers will fire, adding to possiblyDeadSet the vrefs of anything that was organically GCed before we started, so it should now hold a vref for every Remotable/Presence/Representative that was in RAM. scanForDeadObjects will see the export pillar is missing for all virtual objects. Some number of imports may be dropped because of the Presence vrefs. Some number of virtual/durable objects may be deleted, which might drop more imports.

    • The remaining non-durable data will be due to cycles within the virtual objects/collection subgraph.
  • Ask the collectionManager to delete all merely-virtual collections, using the deleteCollection hook it provided to vrm.registerKind. We change collectionManager to silently ignore attempts to delete already-deleted collections, to avoid problems if/when a decref causes the collection to be deleted via the normal path. This will visit every entry of every merely-virtual collection, delete its DB entry, and decref everything it pointed to. This may cause vrefs of all kinds to be added to possiblyDeadSet.

  • Perform another bringOutYourDead. This may drop more virtual data, and possibly more durables and imports.

  • Use the VOM's list of non-durable KindIDs to enumerate every merely-virtual object in the DB (vNN.vs.vom.${vref}). For each one, parse the value enough to get a list of slots for each property. Ignore the slots that point to remotables and other virtual objects. Accumulate decref counts for the slots that point to imports and durable objects. Then delete the DB entry.

  • When all the virtual objects have been deleted, apply the decrefs to vNN.vs.vom.rc.${baseref}. This may cause baseref to be added to possiblyDeadSet.

  • Perform a final bringOutYourDead. At this point possiblyDeadSet should only contain vrefs of imports and durables, so the fact that we've deleted (corrupted) the state of virtuals shouldn't matter. Durables cannot reference virtuals, so a decref that deletes a durable should not be able to decref a virtual. This may drop more durable data and imports.
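The "batches of 100" idea from the abandon/retire step amounts to chunking the accumulated vref list before issuing each bulk syscall. A sketch (the helper name and batch size are illustrative, not existing code):

```javascript
// Sketch: split a list of vrefs into fixed-size batches, so that each
// dropExports/retireExports/abandonExports syscall receives a bounded
// array instead of one enormous one.
function batches(vrefs, size = 100) {
  const out = [];
  for (let i = 0; i < vrefs.length; i += size) {
    out.push(vrefs.slice(i, i + size));
  }
  return out;
}
```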

We could break the work up into phases at the calls to bringOutYourDead. The first phase would be to just abandonExport on the exportedRemotables and the exported non-durable objects. This would delete the c-list entries, but not the DB entries for those objects. The new version couldn't reach anything it wasn't supposed to, but we leak a lot of DB space (O(N) in the number+size of virtual objects), and we might leak some imports and durables that could otherwise have been deleted (referenced only by RAM or virtuals).

The second phase would also delete the virtual collections and all virtual objects, but wouldn't try to decref anything they pointed to. This would fix the DB space leak, but would still leak some number of imports and durables.

The third phase would do all three. Reference cycles within the durable subgraph could still leak durables and imports.

Assuming that people use virtuals/durables appropriately and don't keep a lot of references in RAM, the first phase costs O(N) in the number of exported virtuals, the second adds in O(N) in the total size of virtual collections, and the third adds O(N) in the number of virtual objects and the count of edges (really the size of the set of referenced objects) from VOs to imports and DOs.

Plan 2: all kernel-side

To allow the kernel to do this work without the help of the retiring vat, the kernel needs to know which vrefs and DB keys are durable and which are not. We can either couple the kernel and liveslots together (which feels like a bad idea), or we can change the key formats to embed the information that the kernel needs to know.

For the vat store, I'll propose enhancing the vatstore syscalls with an extra durable boolean flag. From the vat's point of view, it has two distinct stores, each with an independent keyspace. Internally the kernel maps vat-provided keys to vNN.vse.${key} (for "ephemeral") and vNN.vsd.${key} (for "durable"). This should be pretty easy, the main complication would be unit tests that look directly at the DB.
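The proposed two-keyspace mapping is small enough to sketch directly (`vatstoreKey` is a hypothetical helper name, not existing kernel code):

```javascript
// Sketch: map a vat-provided key plus a durable flag onto the two
// proposed internal keyspaces, vNN.vse.* (ephemeral) and
// vNN.vsd.* (durable). `vatID` would be e.g. 'v3'.
function vatstoreKey(vatID, key, durable) {
  const space = durable ? 'vsd' : 'vse';
  return `${vatID}.${space}.${key}`;
}
```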

For the c-list, the proposal is more radical. Currently parseVatSlots() defines how vrefs are shaped, and we have a core syntax of ${type}${sign}${id}${virtualstuff}, where type is "o" or "d" or "p", and "sign" is + (for vat-allocated IDs) or - (for kernel-allocated IDs).

I'm suggesting that we add * to the possible signs (where * is sort of like a super-plus.. I'd use ++ if it didn't share a prefix with +). parseVatSlots would report allocatedByVat: true for both + and *, and false for - as usual, but it would return a new durable: boolean that is only true for *. We'd change parseVatSlots to return +NN or *NN for the .id property, instead of just NN. (For symmetry we'd probably make it return -NN for imports). We'd need to check, but clients of parseVatSlots should not have been depending upon .id being an integer, in fact I can't think of a strong reason for clients to look at .id at all (in most cases they should be content with the pre-virtualstuff prefix). The exportID counter that liveslots uses to create .id is shared for both durable and ephemeral exports.

When the VOM creates vrefs for virtual/durable kinds, it uses durable: to make distinctive vrefs for the two. The kernel can then distinguish between durable and ephemeral exports by only looking at the vref.
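A parseVatSlots-style parser for the proposed syntax might look like this. This is a hypothetical sketch: the regex and the exact return shape (including the string-valued `.id`) illustrate the proposal above, not the real liveslots code:

```javascript
// Sketch of the proposed vref syntax: type is 'o'|'d'|'p', sign is
// '-' (kernel-allocated), '+' (vat-allocated ephemeral), or the new
// '*' (vat-allocated durable), followed by digits and an optional
// virtual-object suffix like '/II' or '/II:FF'.
function parseVatSlot(vref) {
  const m = /^([odp])([-+*])(\d+)(\/.*)?$/.exec(vref);
  if (!m) throw new Error(`invalid vref ${vref}`);
  const [, type, sign, digits, suffix] = m;
  return {
    type,
    allocatedByVat: sign === '+' || sign === '*',
    durable: sign === '*',
    id: `${sign}${digits}`, // e.g. '+12', '*12', '-12'
    virtual: suffix !== undefined,
  };
}
```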

Once we have those tools at our disposal, we don't need stopVat() (except for maybe a bringOutYourDead or something to flush the VOM LRU cache). When the kernel upgrades a vat, it does the following:

  • Find all c-list entries with a vNN.c.p prefix, filter on those decided by the vat, forge a syscall.resolve with a rejection (i.e. mark the promise resolved, enqueue notifies to subscribers, requeue any queued messages, and delete the c-list entries)
  • Abandon all c-list entries with a vNN.c.o+ prefix (except o+0). This must delete the matching vNN.c.koNN entry and delete the kernel object table's koNN.owner property, but doesn't notify anybody else. This will abandon all Remotables (except the root object) and all merely-virtual objects.
  • Delete all vNN.vse. DB keys.
  • Now do the mark+sweep, which still requires coordination/awareness between liveslots and the kernel:
    • Build a set of all vNN.c.o*-prefixed c-list entries. These o*NN/II or o*NN/II:FF vrefs are the durable exports, and form the roots of the mark phase. We don't need the facet IDs, just the o*NN/II baseref.
    • Look up the key for "baggage" and add it to the root set.
    • Do the usual transitive walk from the root set. For each entry, read the vNN.vsd.vom.${baseref} DB entry, parse the result enough to find the outbound vrefs, convert each into a baseref, add the baseref to the set (unless already visited)
    • For vrefs that point to a durable collection, walk all values of the collection (and all vref-based keys, for strong collections) and visit them too.
    • At the end of the process, we'll have a large set of marked baserefs, all of which are either imports or durable objects (and some of the DOs may be durable collections).
  • Now do the sweep:
    • walk all vNN.vsd.vom.${baseref} keys again, deleting any which are not in the marked set
    • walk the list of collections, and for any that are not in the set, delete all of their entries
    • walk the vNN.c.o- c-list imports, and for any that are not in the set, perform a syscall.dropImport and syscall.retireImport, which will delete the two entries from the c-list and push gcActions to decref kernel refcounts (possibly notifying upstream exporters)
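The mark+sweep above, reduced to a toy model (a sketch: `store` maps each baseref to its outbound durable/import baserefs, standing in for the parsed vNN.vsd.vom.* entries; collection traversal and c-list handling are omitted):

```javascript
// Sketch of the kernel-side mark+sweep: roots are the durable exports
// plus baggage; edges come from each durable object's stored slots;
// anything unmarked afterwards is swept (deleted). Imports appear as
// slot values but have no store entry of their own.
function markAndSweep(store, roots) {
  const marked = new Set();
  const queue = [...roots];
  while (queue.length > 0) {
    const baseref = queue.pop();
    if (marked.has(baseref)) continue;
    marked.add(baseref);
    for (const slot of store.get(baseref) || []) {
      queue.push(slot);
    }
  }
  const swept = [];
  for (const baseref of store.keys()) {
    if (!marked.has(baseref)) {
      store.delete(baseref);
      swept.push(baseref);
    }
  }
  return { marked, swept };
}
```

Note that reachable cycles are retained (they are marked before the sweep), while unreachable subgraphs are deleted even if they contain cycles: exactly the leak the third phase is meant to close.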

If we were to break this approach into similar phases, the first phase would only clear promises and the virtual exports. With a simple kvstore DB and the same assumptions as above, the cost would be O(N) in virtual exports. The resulting state would be safe, but would leak DB space, imports, and otherwise-unreferenced durables.

The second phase would omit the mark+sweep. The cost would be O(N) summed across virtual exports, the number of virtual objects, and the number of items in virtual collections. It would leak imports and otherwise-unreferenced durables, but would delete all the virtual data.

The third phase (complete approach) would add O(N) in the number of referenced durable objects, plus their edges, plus another O(N) across all durable objects (for the sweep). It would not leak anything, not even when there are cycles within the durable subgraph.

If/when we move to something like SQLite for the kernel DB, this might get more efficient. A single query could delete an entire range of keys (it's probably still O(N), but with a much smaller constant factor because the DB is optimized for it). Deleting the c-list reverse pointers can probably be done with a clever subquery. The mark phase could benefit from visited and marked columns in the DB, allowing us to fetch a batch of nodes to visit (SELECT WHERE visited = 0 LIMIT 20), to reduce the number of DB queries. Having liveslots use a proper schema (so the kernel sees the reference edges as separate rows) would probably let us throw more SQL at it and make things even faster. The sweep/delete phase could be a single operation.

Comparison

I think the costs are similar. The complexity is a lot lower if we only try for the first two phases now (i.e. we tolerate leaked imports/durables that result from reference cycles). The third phase costs O(N) in the number of virtual objects for the stopVat approach, vs O(N) in the number of durable objects for the kernel-side approach. The stopVat approach requires an in-RAM Set for every dereferenced import/durable. The kernel approach requires an in-RAM Set for every referenced import/durable (the marked table). OTOH, the stopVat approach will have a higher constant factor because all of its DB calls are syscalls, whereas the kernel has direct access to the DB.

The benefit of doing this work in the kernel, rather than in stopVat(), is that we avoid depending upon the vat still being viable. I'm worried about metering and memory usage. In the stopVat() approach, there are a number of ways we could break up the work into smaller pieces (call stopVat(limit) some number of times until it reports completion, only free/examine a certain number of items on each call). But if we didn't build enough limiters, we could wind up with a vat that can never be stopVat()ed because it always takes too long or uses up too much memory, and then we either can't upgrade, can upgrade but leak a lot of DB space, or must resort to some new cleverness.

But doing it on the kernel side requires more coordination between liveslots and the kernel. The ephemeral/durable vatstore and vref format removes a lot of the coordination, but performing a mark+sweep requires more. One idea I had for this was to bundle a subset of the liveslots code (just enough to understand the vatstore encoding formats and perform deletion) at the same time that we make the full bundle. For each vat, we're going to be stashing the bundleID of its liveslots (#4376), so it's not hard to also stash a "deletion helper" bundleID. During upgrade, we could importBundle this deletion helper into the kernel process, where we'd give it DB access and let it figure out what needed to be abandoned/deleted/etc. Sort of a pluggable cleanup function.

My big interest in doing it on the kernel side depends upon having something like SQLite, that's where things could really be sped up.

Next Steps

I have the stopVat approach mostly written, so I think I'm going to push through and finish it. But once it's working, assuming we have time, I'd like to explore the kernel-side approach.

@Tartuffo Tartuffo modified the milestones: Mainnet 1, RUN Protocol RC0 Apr 5, 2022
warner added a commit that referenced this issue Apr 9, 2022
This deletes most non-durable data during upgrade. stopVat() delegates
to a new function `releaseOldState()`, which makes an incomplete
effort to drop everything.

The portions which are complete are:

* find all locally-decided promises and rejects them
* find all exported Remotables and virtual objects, and abandons them
* simulate finalizers for all in-RAM Presences and Representatives
* use collectionManager to delete all virtual collections
* perform a bringOutYourDead to clean up resulting dead references

After that, `deleteVirtualObjectsWithoutDecref` walks the vatstore and
deletes the data from all virtual objects, without attempting to
decref the things they pointed to. This fails to release durables and
imports which were referenced by those virtual objects (e.g. cycles
that escaped the earlier purge).

Code is written, but not yet complete, to decref those objects
properly. A later update to this file will activate that (and update
the tests to confirm it works).

The new unit test constructs a large object graph and examines it
afterwards to make sure everything was deleted appropriately. The test
knows about the limitations of `deleteVirtualObjectsWithoutDecref`, as
well as bug #5053 which causes some other objects to be retained
incorrectly.

refs #1848
warner added a commit that referenced this issue Apr 9, 2022
This deletes most non-durable data during upgrade. stopVat() delegates
to a new function `releaseOldState()`, which makes an incomplete
effort to drop everything.

The portions which are complete are:

* find all locally-decided promises and rejects them
* find all exported Remotables and virtual objects, and abandons them
* simulate finalizers for all in-RAM Presences and Representatives
* use collectionManager to delete all virtual collections
* perform a bringOutYourDead to clean up resulting dead references

After that, `deleteVirtualObjectsWithoutDecref` walks the vatstore and
deletes the data from all virtual objects, without attempting to
decref the things they pointed to. This fails to release durables and
imports which were referenced by those virtual objects (e.g. cycles
that escaped the earlier purge).

Code is written, but not yet complete, to decref those objects
properly. A later update to this file will activate that (and update
the tests to confirm it works).

The new unit test constructs a large object graph and examines it
afterwards to make sure everything was deleted appropriately. The test
knows about the limitations of `deleteVirtualObjectsWithoutDecref`, as
well as bug #5053 which causes some other objects to be retained
incorrectly.

The collectionManager was changed to keep an in-RAM set of the vrefs
for all collections, both virtual and durable. We need the virtuals to
implement `deleteAllVirtualCollections` because there's no efficient
way to enumerate them from the vatstore entries, and the code is a lot
simpler if I just track all of them. We also need the Set to tolerate
duplicate deletion attempts: `deleteAllVirtualCollections` runs first,
but just afterwards a `bringOutYourDead` might notice a zero refcount
on a virtual collection and attempt to delete it a second time. We
cannot keep this Set in RAM: if we have a very large number of
collections, it violates our RAM budget, so we need to change our DB
structure to accommodate this need (#5058).

refs #1848
warner added a commit that referenced this issue Apr 9, 2022
This deletes most non-durable data during upgrade. stopVat() delegates
to a new function `releaseOldState()`, which makes an incomplete
effort to drop everything.

The portions which are complete are:

* find all locally-decided promises and rejects them
* find all exported Remotables and virtual objects, and abandons them
* simulate finalizers for all in-RAM Presences and Representatives
* use collectionManager to delete all virtual collections
* perform a bringOutYourDead to clean up resulting dead references

After that, `deleteVirtualObjectsWithoutDecref` walks the vatstore and
deletes the data from all virtual objects, without attempting to
decref the things they pointed to. This fails to release durables and
imports which were referenced by those virtual objects (e.g. cycles
that escaped the earlier purge).

Code is written, but not yet complete, to decref those objects
properly. A later update to this file will activate that (and update
the tests to confirm it works).

The new unit test constructs a large object graph and examines it
afterwards to make sure everything was deleted appropriately. The test
knows about the limitations of `deleteVirtualObjectsWithoutDecref`, as
well as bug #5053 which causes some other objects to be retained
incorrectly.

The collectionManager was changed to keep an in-RAM set of the vrefs
for all collections, both virtual and durable. We need the virtuals to
implement `deleteAllVirtualCollections` because there's no efficient
way to enumerate them from the vatstore entries, and the code is a lot
simpler if I just track all of them. We also need the Set to tolerate
duplicate deletion attempts: `deleteAllVirtualCollections` runs first,
but just afterwards a `bringOutYourDead` might notice a zero refcount
on a virtual collection and attempt to delete it a second time. We
cannot keep this Set in RAM: if we have a very large number of
collections, it violates our RAM budget, so we need to change our DB
structure to accomodate this need (#5058).

refs #1848
warner added a commit that referenced this issue Apr 12, 2022
This deletes most non-durable data during upgrade. stopVat() delegates
to a new function `releaseOldState()`, which makes an incomplete
effort to drop everything.

The portions which are complete are:

* find all locally-decided promises and rejects them
* find all exported Remotables and virtual objects, and abandons them
* simulate finalizers for all in-RAM Presences and Representatives
* use collectionManager to delete all virtual collections
* perform a bringOutYourDead to clean up resulting dead references

After that, `deleteVirtualObjectsWithoutDecref` walks the vatstore and
deletes the data from all virtual objects, without attempting to
decref the things they pointed to. This fails to release durables and
imports which were referenced by those virtual objects (e.g. cycles
that escaped the earlier purge).

Code is written, but not yet complete, to decref those objects
properly. A later update to this file will activate that (and update
the tests to confirm it works).

The new unit test constructs a large object graph and examines it
afterwards to make sure everything was deleted appropriately. The test
knows about the limitations of `deleteVirtualObjectsWithoutDecref`, as
well as bug #5053 which causes some other objects to be retained
incorrectly.

The collectionManager was changed to keep an in-RAM set of the vrefs
for all collections, both virtual and durable. We need the virtuals to
implement `deleteAllVirtualCollections` because there's no efficient
way to enumerate them from the vatstore entries, and the code is a lot
simpler if I just track all of them. We also need the Set to tolerate
duplicate deletion attempts: `deleteAllVirtualCollections` runs first,
but just afterwards a `bringOutYourDead` might notice a zero refcount
on a virtual collection and attempt to delete it a second time. We
cannot keep this Set in RAM: if we have a very large number of
collections, it violates our RAM budget, so we need to change our DB
structure to accomodate this need (#5058).

refs #1848
warner added a commit that referenced this issue Apr 12, 2022
warner added a commit that referenced this issue Apr 12, 2022
@FUDCo
Copy link
Contributor

FUDCo commented Apr 26, 2022

@warner Although the upgrade design for contracts specifically is still a work in progress, from the kernel perspective can we regard this one as done?

@FUDCo FUDCo removed their assignment Apr 29, 2022
@warner
Copy link
Member Author

warner commented May 11, 2022

adminFacet~.upgrade(bundleCap, vatParameters) now works well enough to build on top of. So I'm going to close this.

There are several remaining tasks that are important for MN-1, which I'll break out into new tickets:

  • failure of the new buildRootObject should unwind the vat to its original version, as if the upgrade() request were never made, and the upgrade() result promise should be rejected (#5344: "failed vat upgrades should be rewound")
  • the upgrade() signature should change to upgrade(bundleCap, { vatParameters }), to match the options bag we give to vatAdminService~.createVat() (#5345: "vat upgrade should put vatParameters in an option bag")
  • stopVat() should abandon non-durable exports and reject pending promises, but at this time it should not attempt to delete any other data (non-durable virtual objects), nor examine refcounts on those objects to GC durables or imports. This is a compromise to minimize the chances that stopVat() will take too long (causing the upgrade overall to take too long). We believe we can do a proper deletion/GC from the stored data in a later version, perhaps spread out over multiple bringOutYourDead deliveries. (#5342: "stopVat should only reject promises and abandon non-durables")
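
The unwind behavior requested in the first item above can be sketched with
a toy mock (a hypothetical shape, not the kernel's implementation): if the
new version's buildRootObject throws, the vat is restored to its prior
version and the upgrade result promise rejects.

```javascript
// Toy mock of the desired unwind semantics (#5344); purely
// illustrative, not kernel code.
function makeMockVat(buildRootObject) {
  let current = { buildRootObject, root: buildRootObject() };
  return {
    getRoot: () => current.root,
    async upgrade(newBuildRootObject) {
      const saved = current;
      try {
        // run the new version's buildRootObject; a failure must not
        // leave the vat half-upgraded
        current = { buildRootObject: newBuildRootObject, root: newBuildRootObject() };
        return 'ok';
      } catch (err) {
        current = saved; // unwind to the original version
        throw err; // the upgrade() result promise rejects
      }
    },
  };
}
```

Because the async body runs synchronously up to its first await, a caller
can observe immediately after a failed upgrade() call that the old root is
still in place.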

@warner
Copy link
Member Author

warner commented May 11, 2022

Now that we've got new tickets for the remaining MN-1 work, I'll close this one.

@warner warner closed this as completed May 11, 2022

6 participants