Commit graph

3 commits

epriestley
4cf1270ecd In Harbormaster, make sure artifacts are destroyed even if a build is aborted
Summary:
Ref T9252. Currently, Harbormaster and Drydock work like this in some cases:

  # Queue a lease for activation.
  # Then, a little later, save the lease PHID somewhere.
  # When the target/resource is destroyed, destroy the lease.

However, something can happen between (1) and (2). In Drydock this window is very short and the "something" would have to be a lightning strike or similar, but in Harbormaster we wait until the resource activates to do (2), so the window can be many minutes long. In particular, a user can use "Abort Build" during those many minutes.

If they do, the target is destroyed but it doesn't yet have a record of the artifact, so the artifact isn't cleaned up.

Make these things work like this instead:

  # Create a new lease and pre-generate a PHID for it.
  # Save that PHID as something that needs to be cleaned up.
  # Queue the lease for activation.
  # When the target/resource is destroyed, destroy the lease if it exists.

This makes sure there's no step in the process where we might lose track of a lease/resource.
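
A minimal sketch of the new ordering (the method and helper names here are illustrative assumptions, not necessarily the exact Harbormaster/Drydock API):

```
<?php

// Sketch only: names are illustrative assumptions, not the exact API.

// (1) Create the lease and pre-generate its PHID before queueing anything.
$lease = id(new DrydockLease())
  ->setResourceType('working-copy');
$lease->setPHID($lease->generatePHID());

// (2) Record the PHID first, so the destruction path can always find the
// lease, even if the build is aborted before the lease ever activates.
save_lease_artifact($build_target, $lease->getPHID()); // hypothetical helper

// (3) Only now queue the lease for activation.
$lease->queueForActivation();

// (4) When the target/resource is destroyed, release the lease if it exists.
```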

Also, clean up and standardize some other stuff I hit.

Test Plan:
  - Stopped daemons.
  - Restarted a build in Harbormaster.
  - Stepped through the build one stage at a time using `bin/worker execute ...`.
  - After the lease was queued, but before it activated, aborted the build.
  - Processed the Harbormaster side of things only.
  - Saw the lease get destroyed properly.

Reviewers: chad, hach-que

Reviewed By: hach-que

Maniphest Tasks: T9252

Differential Revision: https://secure.phabricator.com/D14234
2015-10-05 05:58:53 -07:00
epriestley
f1119ffcf5 Support working copies and separate allocate + activate steps for resources/leases in Drydock
Summary:
Ref T9253. For resources and leases that need to do something which takes a lot of time or requires waiting, allow them to allocate/acquire first and then activate later.

When we allocate a resource or acquire a lease, the blueprint can either activate it immediately (if all the work can happen quickly/inline) or activate it later. If the blueprint activates it later, we queue a worker to handle activating it.

Rebuild the "working copy" blueprint to work with this model: it allocates/acquires first, then activates in a separate step once it is able to acquire a host.
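
Roughly, the split looks like this (a hedged sketch; the class and method names are illustrative, not the exact blueprint API):

```
<?php

// Sketch only: an illustrative blueprint showing the two activation paths.
// The method names here are assumptions, not the exact Drydock API.

final class ExampleBlueprintImplementation {

  public function allocateResource($blueprint) {
    $resource = $this->newResourceTemplate($blueprint, 'Example Resource');

    if ($this->canActivateInline($resource)) {
      // Fast path: all the work is quick, so activate immediately.
      return $resource->activateResource();
    }

    // Slow path: hand back a pending resource; a queued worker calls
    // activateResource() later, once the slow work (for a working copy,
    // acquiring a host and checking out the repository) has finished.
    return $resource->allocateResource();
  }

}
```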

Test Plan: With some power of imagination, brought up a bunch of working copies with `bin/drydock lease --type working-copy ...`

Reviewers: hach-que, chad

Reviewed By: hach-que, chad

Maniphest Tasks: T9253

Differential Revision: https://secure.phabricator.com/D14127
2015-09-21 04:46:24 -07:00
epriestley
3ac99006bf Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:

  - Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
  - Optimistic "slot locks" might be a pretty good primitive to make that easy to implement and reason about in most cases.

The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:

```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```

When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
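
For intuition, the essence of an optimistic slot lock can be sketched as a unique-key insert (this is an assumed, simplified implementation, not the actual Drydock schema or code):

```
<?php

// Sketch only: assumes a table with a UNIQUE key on lock_name and a PDO
// connection configured with PDO::ERRMODE_EXCEPTION. Not the real schema.

function acquire_slot_locks(PDO $db, $owner_phid, array $lock_names) {
  $db->beginTransaction();
  try {
    $stmt = $db->prepare(
      'INSERT INTO slot_lock (lock_name, owner_phid) VALUES (?, ?)');
    foreach ($lock_names as $name) {
      // The UNIQUE key makes this atomic: if another process already holds
      // the slot, this INSERT throws instead of silently double-locking.
      $stmt->execute(array($name, $owner_phid));
    }
    $db->commit();
    return true;
  } catch (PDOException $ex) {
    $db->rollBack();
    return false; // Someone else holds at least one of the requested slots.
  }
}
```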

If you need to do limit-based locks (e.g., a maximum of 3 locks), you could acquire a lock like this:

```
mylock(whatever).slot(2)
```
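
One way a blueprint might pick among numbered slots to cap itself at three (a sketch; `load_held_slots()` is a hypothetical helper standing in for whatever slot-lock storage the blueprint would actually consult):

```
<?php

// Sketch only: helper names are hypothetical.

function choose_slot_lock($limit = 3) {
  $held = load_held_slots('mylock(whatever)');

  for ($i = 0; $i < $limit; $i++) {
    $name = sprintf('mylock(whatever).slot(%d)', $i);
    if (empty($held[$name])) {
      // Optimistically claim the first slot that looks free; if another
      // process races us to it, the acquisition fails and can be retried.
      return $name;
    }
  }

  return null; // All slots appear to be in use; don't allocate right now.
}
```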

Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.

This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.

(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)

Test Plan:
To create a race where the same binding is allocated as a resource twice:

  - Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
  - (Comment out slot lock acquisition if you have this patch.)
  - Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.

This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.

To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.

Reviewers: hach-que, chad

Reviewed By: hach-que, chad

Differential Revision: https://secure.phabricator.com/D14118
2015-09-21 04:45:25 -07:00