Drydock Rough Cut
Summary:
Rough cut of Drydock. This is very basic and doesn't do much of use yet (it
//does// allocate EC2 machines as host resources and expose interfaces to them),
but I think the overall structure is more or less reasonable.
== Interfaces
Vision: Applications interact with Drydock resources through DrydockInterfaces,
like **command**, **filesystem** and **httpd** interfaces. Each interface allows
applications to perform some kind of operation on the resource, like executing
commands, reading/writing files, or configuring a web server. Interfaces have a
concrete, specific API:
  // Filesystem Interface
  $fs = $lease->getInterface('filesystem'); // Constants, some day?
  $fs->writeFile('index.html', 'hello world!');

  // Command Interface
  $cmd = $lease->getInterface('command');
  echo $cmd->execx('uptime');

  // HTTPD Interface
  $httpd = $lease->getInterface('httpd');
  $httpd->restart();
Interfaces are mostly just stock, although installs might add new interfaces if
they expose different ways to interact with resources (for instance, a resource
might want to expose a new 'MongoDB' interface or whatever).
Currently: We have part of a command interface.
== Leases
Vision: Leases keep track of which resources are in use, and what they're being
used for. They allow us to know when we need to allocate more resources (too
many sandcastles on the existing hosts, e.g.) and when we can release resources
(because they are no longer being used). They also give applications something
to hold while resources are being allocated.
  // EXAMPLE: How this should work some day.
  $allocator = new DrydockAllocator();
  $allocator->setResourceType('sandcastle');
  $allocator->setAttributes(
    array(
      'diffID' => $diff->getID(),
    ));
  $lease = $allocator->allocate();
  $diff->setSandcastleLeaseID($lease->getID());

  // ...

  if ($lease->getStatus() == DrydockLeaseStatus::STATUS_ACTIVE) {
    $sandcastle_link = $lease->getInterface('httpd')->getURI('/');
  } else {
    $sandcastle_link = 'Still building your sandcastle...';
  }
  echo "Sandcastle for this diff: ".$sandcastle_link;

  // EXAMPLE: How this actually works now.
  $allocator = new DrydockAllocator();
  $allocator->setResourceType('host');

  // NOTE: Allocation is currently synchronous but will be task-driven soon.
  $lease = $allocator->allocate();
Leases are completely stock; installs will not define new lease types.
Currently: Leases exist and work but are very very basic.
== Resources
Vision: Resources represent some actual thing we've put somewhere, whether it's
a host, a block of storage, a webroot, or whatever else. Applications interact
with resources by acquiring leases to them, and then getting interfaces
through these leases. The lease acquisition process has a side effect of
allocating new resources if a lease can't be acquired on existing resources
(e.g., the application wants storage but all storage resources are full) and
things are configured to autoscale.
Resources may themselves acquire leases in order to allocate. For instance, a
storage resource might first acquire a lease to a host resource. A 'test
scaffold' resource might lease a storage resource and a mysql resource.
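As a hypothetical sketch (reusing only the allocator API shown above; the
'storage' and 'mysql' resource types are illustrative and not implemented), a
'test scaffold' blueprint might do something like this while allocating:

  // Inside a hypothetical 'test scaffold' blueprint's allocation step.
  $storage_allocator = new DrydockAllocator();
  $storage_allocator->setResourceType('storage');
  $storage_lease = $storage_allocator->allocate();

  $mysql_allocator = new DrydockAllocator();
  $mysql_allocator->setResourceType('mysql');
  $mysql_lease = $mysql_allocator->allocate();

  // The scaffold resource would then record both lease IDs (e.g., as
  // attributes) so it can release them when it is destroyed.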
Not all resources are auto-allocated: the entry-level version of Drydock is that
you manually allocate a couple of boxes and configure them through the web console.
Then, e.g., 'storage' / 'webroot' resources allocate on top of them, but the
host pool itself does not autoscale.
Resources are completely stock; they are abstract shells representing any
arbitrary thing.
Currently: Resources exist ('host' only) but are very basic.
== Blueprints
Vision: Blueprints contain instructions for building interfaces to, (possibly)
allocating, updating, managing, and destroying a specific type of resource in a
specific location. One way to think of them is that they are scripts for
creating and deleting resources. For example, the LocalHost, RemoteHost and
EC2Host blueprints can all manage 'host' resources.
Eventually, we will support more types of resources (storage, webroot,
sandcastle, test scaffold, phacility deployment) and more providers for resource
types, some of which will be in the Phabricator mainline and some of which will
be custom.
Blueprints are very custom and specific to application types, so installs will
define new blueprints if they are making significant use of Drydock.
Currently: They exist but have few capabilities. The stock blueprints do nearly
nothing useful. There is a technically functional blueprint for host allocation
in EC2.
== Allocator
This is just the actual code to execute the lease acquisition process.
Test Plan: Ran "drydock_control.php" script, it allocated a machine in EC2,
acquired a lease on it, interfaced with it, and then released the lease. Ran it
again, got a fresh lease on the existing resource.
Reviewers: btrahan, jungejason
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D1454
<?php

final class DrydockLease extends DrydockDAO
  implements PhabricatorPolicyInterface {

  protected $resourceID;
  protected $resourceType;
  protected $until;
  protected $ownerPHID;
  protected $attributes = array();
  protected $status = DrydockLeaseStatus::STATUS_PENDING;

  private $resource = self::ATTACHABLE;
  private $releaseOnDestruction;

Implement a rough AlmanacService blueprint in Drydock
Summary:
Ref T9253. Broadly, this realigns Allocator behavior to be more consistent and straightforward and amenable to intended future changes.
This attempts to make language more consistent: resources are "allocated" and leases are "acquired".
This prepares for (but does not implement) optimistic "slot locking", as discussed in D10304. Although I suspect some blueprints will need to perform other locking eventually, this does feel like a good fit for most of the locking blueprints need to do.
In particular, I've made the blueprint operations on `$resource` and `$lease` objects more purposeful: they need to invoke an activator on the appropriate object to be implemented correctly. Before they invoke this activator method, they configure the object. In a future diff, this configuration will include specifying slot locks that the lease or resource must acquire. So the API will be something like:
  $lease
    ->setActivateWhenAcquired(true)
    ->needSlotLock('x')
    ->needSlotLock('y')
    ->acquireOnResource($resource);
In the common case where slot locks are a good fit, I think this should make correct blueprint implementation very straightforward.
This prepares for (but does not implement) resources and leases which need significant setup steps. I've basically carved out two modes:
- The "activate immediately" mode, as here, immediately opens the resource or activates the lease. This is appropriate if little or no setup is required. I expect many leases to operate in this mode, although I expect many resources will operate in the other mode.
- The "allocate now, activate later" mode, which is not fully implemented yet. This will queue setup workers when the allocator exits. Overall, this will work very similarly to Harbormaster.
- This new structure makes it acceptable for blueprints to sleep as long as they want during resource allocation and lease acquisition, so long as they are not waiting on anything which needs to be completed by the queue. Putting a `sleep(15 * 60)` in your EC2Blueprint to wait for EC2 to bring a machine up will perform worse than using delayed activation, but won't deadlock the queue or block any locks.
Overall, this flow is more similar to Harbormaster's flow. Having consistency between Harbormaster's model and Drydock's model is good, and I think Harbormaster's model is also simply much better than Drydock's (what exists today in Drydock was implemented a long time ago, and we had more support and infrastructure by the time Harbormaster was implemented, as well as a more clearly defined problem).
The particular strength of Harbormaster is that objects always (or almost always, at least) have a single, clearly defined writer. Ensuring objects have only one writer prevents races and makes reasoning about everything easier.
Drydock does not currently have a clearly defined single writer, but this moves us in that direction. We'll probably need more primitives eventually to flesh this out, like Harbormaster's command queue for messaging objects which you can't write to.
This blueprint was originally implemented in D13843. This makes a few changes to the blueprint itself:
- A bunch of code from that (e.g., interfaces) doesn't exist yet.
- I let the blueprint have multiple services. This simplifies the code a little and seems like it costs us nothing.
This also removes `bin/drydock create-resource`, which no longer makes sense to expose. It won't get locking, leasing, etc., correct, and cannot be made correct.
NOTE: This technically works but doesn't do anything useful yet.
Test Plan: Used `bin/drydock lease --type host` to acquire leases against these blueprints.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Subscribers: Mnkras
Maniphest Tasks: T9253
Differential Revision: https://secure.phabricator.com/D14117
  private $isAcquired = false;
  private $isActivated = false;
  private $activateWhenAcquired = false;

Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:
- Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
- Optimistic "slot locks" might be a pretty good primitive to make that easy to implement and reason about in most cases.
The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:
```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```
When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
If you need to do limit-based locks (e.g., maximum of 3 locks) you could acquire a lock like this:
```
mylock(whatever).slot(2)
```
Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.
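For example, a minimal sketch of a "limit of three" lock using the `needSlotLock()` and `acquireOnResource()` calls from these changes (the lock name and `$resource_phid` are illustrative placeholders):

```
// Hypothetical: allow at most three concurrent leases on this resource by
// competing for one of three numbered slots. If acquisition fails because the
// chosen slot is already held, the allocator can retry with another slot.
$slot_index = mt_rand(0, 2);
$lease
  ->needSlotLock("myresource({$resource_phid}).slot({$slot_index})")
  ->acquireOnResource($resource);
```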
This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.
(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)
Test Plan:
To create a race where the same binding is allocated as a resource twice:
- Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
- (Comment out slot lock acquisition if you have this patch.)
- Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.
This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.
To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Differential Revision: https://secure.phabricator.com/D14118
  private $slotLocks = array();

  /**
   * Flag this lease to be released when its destructor is called. This is
   * mostly useful if you have a script which acquires, uses, and then releases
   * a lease, as you don't need to explicitly handle exceptions to properly
   * release the lease.
   */
  public function releaseOnDestruction() {
    $this->releaseOnDestruction = true;
    return $this;
  }
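
  // Example usage (an illustrative sketch, not code from this change): a
  // one-off script might do something like:
  //
  //   $lease = $allocator->allocate();
  //   $lease->releaseOnDestruction();
  //   // ...use the lease...
  //
  // The lease is then released when $lease is destroyed, even if the script
  // exits via an exception.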

  public function __destruct() {
Add a command queue to Drydock to manage lease/resource release
Summary:
Ref T9252. Broadly, Drydock currently races on releasing objects from the "active" state. To reproduce this:
- Scatter some sleep()s pretty much anywhere in the release code.
- Release several times from web UI or CLI in quick succession.
Resources or leases will execute some release code twice or otherwise do inconsistent things.
(I didn't chase down a detailed reproduction scenario for this since inspection of the code makes it clear that there are no meaningful locks or mechanisms preventing this.)
Instead, add a Harbormaster-style command queue to resources and leases. When something wants to do a release, it adds a command to the queue and schedules a worker. The workers acquire a lock, then try to consume commands from the queue.
This guarantees that only one process is responsible for writes to active resource/leases.
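Concretely, the producer side just writes a command and schedules an update; this mirrors the `__destruct()` implementation later in this file (here `$actor`, `$drydock_phid`, and `$lease` are placeholders for whatever context the caller has):

  $command = DrydockCommand::initializeNewCommand($actor)
    ->setTargetPHID($lease->getPHID())
    ->setAuthorPHID($drydock_phid)
    ->setCommand(DrydockCommand::COMMAND_RELEASE)
    ->save();

  $lease->scheduleUpdate();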
This is the last major step to giving resources and leases a single writer during all states:
- Resource, Unsaved: AllocatorWorker
- Resource, Pending: ResourceWorker (Possible rename to "Allocated?")
- Resource, Open: This diff, ResourceUpdateWorker. (Likely rename to "Active").
- Resource, Closed/Broken: Future destruction worker. (Likely rename to "Released" / "Broken"; maybe remove "Broken").
- Resource, Destroyed: No writes.
- Lease, Unsaved: Whatever wants the lease.
- Lease, Pending: AllocatorWorker
- Lease, Acquired: LeaseWorker
- Lease, Active: This diff, LeaseUpdateWorker.
- Lease, Released/Broken: Future destruction worker (Maybe remove "Broken"?)
- Lease, Expired: No writes. (Likely rename to "Destroyed").
In most phases, we can already guarantee that there is a single writer without doing any extra work. This is more complicated in the "Active" case because the release buttons on the web UI, the release tools on the CLI, the lease requestor itself, the garbage collector, and any other release process cleaning up related objects may try to effect a release. All of these could race one another (and, in many cases, race other processes from other phases because all of these get to act immediately) as this code is currently written. Using a queue here lets us make sure there's only a single writer in this phase.
One thing which is notable is that whatever acquires a lease **can not write to it**! It is never the writer once it queues the lease for activation. It can not write to any resources, either. And, likewise, Blueprints can not write to resources while acquiring or releasing leases.
We may need to provide a mechanism so that blueprints and/or resource/lease holders get to attach some storage to resources/leases for bookkeeping. For example, a blueprint might need to keep some kind of cache on a resource to help it manage state. But I think we can cross that bridge when we come to it, and nothing else would need to write to this storage so it's technically straightforward to introduce such a mechanism if we need one.
Test Plan:
- Viewed buttons in web UI, checked enabled/disabled states.
- Clicked the buttons.
- Saw commands show up in the command queue.
- Saw some daemon stuff get scheduled.
- Ran CLI tools, saw commands get consumed and resources/leases release.
Reviewers: hach-que, chad
Reviewed By: chad
Maniphest Tasks: T9252
Differential Revision: https://secure.phabricator.com/D14143
    if (!$this->releaseOnDestruction) {
      return;
    }

    if (!$this->canRelease()) {
      return;
    }

    $actor = PhabricatorUser::getOmnipotentUser();
    $drydock_phid = id(new PhabricatorDrydockApplication())->getPHID();

    $command = DrydockCommand::initializeNewCommand($actor)
      ->setTargetPHID($this->getPHID())
      ->setAuthorPHID($drydock_phid)
      ->setCommand(DrydockCommand::COMMAND_RELEASE)
      ->save();

    $this->scheduleUpdate();
  }

  public function getLeaseName() {
    return pht('Lease %d', $this->getID());
  }

  protected function getConfiguration() {
    return array(
      self::CONFIG_AUX_PHID => true,
      self::CONFIG_SERIALIZATION => array(
        'attributes' => self::SERIALIZATION_JSON,
      ),
      self::CONFIG_COLUMN_SCHEMA => array(
        'status' => 'uint32',
        'until' => 'epoch?',
        'resourceType' => 'text128',
        'ownerPHID' => 'phid?',
        'resourceID' => 'id?',
      ),
      self::CONFIG_KEY_SCHEMA => array(
        'key_phid' => null,
        'phid' => array(
          'columns' => array('phid'),
          'unique' => true,
        ),
      ),
    ) + parent::getConfiguration();
  }

  public function setAttribute($key, $value) {
    $this->attributes[$key] = $value;
    return $this;
  }

  public function getAttribute($key, $default = null) {
    return idx($this->attributes, $key, $default);
  }

  public function generatePHID() {
    return PhabricatorPHID::generateNewPHID(DrydockLeasePHIDType::TYPECONST);
  }

  public function getInterface($type) {
    return $this->getResource()->getInterface($this, $type);
  }

  public function getResource() {
    return $this->assertAttached($this->resource);
  }

  public function attachResource(DrydockResource $resource = null) {
    $this->resource = $resource;
    return $this;
  }

  public function hasAttachedResource() {
    return ($this->resource !== null);
  }

  public function queueForActivation() {
    if ($this->getID()) {
      throw new Exception(
        pht('Only new leases may be queued for activation!'));
    }

    $this
      ->setStatus(DrydockLeaseStatus::STATUS_PENDING)
      ->save();

    // Queue a background task; the allocator worker acquires an existing
    // resource for this lease or allocates a new one.
    $task = PhabricatorWorker::scheduleTask(
      'DrydockAllocatorWorker',
      array(
        'leasePHID' => $this->getPHID(),
      ),
      array(
        'objectPHID' => $this->getPHID(),
      ));

    return $this;
  }
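For context, the expected calling pattern (a rough sketch; error handling
omitted) is to queue a new lease and then wait for it to become active before
touching the resource:
// Sketch: hand the lease to the allocator worker, then block until it is
// active (waitUntilActive() throws if the lease is released/expired/broken).
$lease->queueForActivation();
$lease->waitUntilActive();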
Implement a rough AlmanacService blueprint in Drydock
Summary:
Ref T9253. Broadly, this realigns Allocator behavior to be more consistent and straightforward and amenable to intended future changes.
This attempts to make language more consistent: resources are "allocated" and leases are "acquired".
This prepares for (but does not implement) optimistic "slot locking", as discussed in D10304. Although I suspect some blueprints will need to perform other locking eventually, this does feel like a good fit for most of the locking blueprints need to do.
In particular, I've made the blueprint operations on `$resource` and `$lease` objects more purposeful: they need to invoke an activator on the appropriate object to be implemented correctly. Before they invoke this activator method, they configure the object. In a future diff, this configuration will include specifying slot locks that the lease or resource must acquire. So the API will be something like:
$lease
->setActivateWhenAcquired(true)
->needSlotLock('x')
->needSlotLock('y')
->acquireOnResource($resource);
In the common case where slot locks are a good fit, I think this should make correct blueprint implementation very straightforward.
This prepares for (but does not implement) resources and leases which need significant setup steps. I've basically carved out two modes:
- The "activate immediately" mode, as here, immediately opens the resource or activates the lease. This is appropriate if little or no setup is required. I expect many leases to operate in this mode, although I expect many resources will operate in the other mode.
- The "allocate now, activate later" mode, which is not fully implemented yet. This will queue setup workers when the allocator exits. Overall, this will work very similarly to Harbormaster.
- This new structure makes it acceptable for blueprints to sleep as long as they want during resource allocation and lease acquisition, so long as they are not waiting on anything which needs to be completed by the queue. Putting a `sleep(15 * 60)` in your EC2Blueprint to wait for EC2 to bring a machine up will perform worse than using delayed activation, but won't deadlock the queue or block any locks.
Overall, this flow is more similar to Harbormaster's flow. Having consistency between Harbormaster's model and Drydock's model is good, and I think Harbormaster's model is also simply much better than Drydock's (what exists today in Drydock was implemented a long time ago, and we had more support and infrastructure by the time Harbormaster was implemented, as well as a more clearly defined problem).
The particular strength of Harbormaster is that objects always (or almost always, at least) have a single, clearly defined writer. Ensuring objects have only one writer prevents races and makes reasoning about everything easier.
Drydock does not currently have a clearly defined single writer, but this moves us in that direction. We'll probably need more primitives eventually to flesh this out, like Harbormaster's command queue for messaging objects which you can't write to.
This blueprint was originally implemented in D13843. This makes a few changes to the blueprint itself:
- A bunch of code from that (e.g., interfaces) doesn't exist yet.
- I let the blueprint have multiple services. This simplifies the code a little and seems like it costs us nothing.
This also removes `bin/drydock create-resource`, which no longer makes sense to expose. It won't get locking, leasing, etc., correct, and can not be made correct.
NOTE: This technically works but doesn't do anything useful yet.
Test Plan: Used `bin/drydock lease --type host` to acquire leases against these blueprints.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Subscribers: Mnkras
Maniphest Tasks: T9253
Differential Revision: https://secure.phabricator.com/D14117
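To make the two modes concrete, here is a rough sketch (not taken from the
actual blueprints) of how a blueprint's acquisition step might use this API,
assuming `$resource` is the resource the blueprint has chosen:
// Mode 1: "activate immediately" -- no setup needed, the lease goes ACTIVE now.
$lease
  ->setActivateWhenAcquired(true)
  ->acquireOnResource($resource);

// Mode 2: "allocate now, activate later" -- the lease is only marked ACQUIRED
// here; a setup worker is expected to activate it once setup completes.
$lease->acquireOnResource($resource);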
  public function isActive() {
    switch ($this->status) {
      case DrydockLeaseStatus::STATUS_ACQUIRED:
      case DrydockLeaseStatus::STATUS_ACTIVE:
        return true;
    }
    return false;
  }
  private function assertActive() {
    if (!$this->isActive()) {
      throw new Exception(
        pht(
          'Lease is not active! You can not interact with resources through '.
          'an inactive lease.'));
    }
  }
Undumb the Drydock resource allocator pipeline
Summary:
This was the major goal of D3859/D3855, and to a lesser degree D3854/D3852.
As Drydock is allocating a resource, it may need to allocate other resources first. For example, if it's allocating a working copy, it may need to allocate a host first.
Currently, we have the process basically queue up the allocation (insert a task into the queue) and sleep() until it finishes. This is problematic for a bunch of reasons, but the major one is that if allocation takes more resources (host, port, machine, DNS) than you have daemons, they could all end up sleeping and waiting for some other daemon to do their work. This is really stupid. Even if you only take up some of them, you're spending slots sleeping when you could be doing useful work.
To partially get around this and make the CLI experience less dumb, there's this goofy `synchronous` flag that gets passed around everywhere and pushes the workflow through a pile of special cases. Basically the `synchronous` flag causes us to do everything in-process. But this is dumb too because we'd rather do things in parallel if we can, and we have to have a lot of special case code to make it work at all.
Get rid of all of this. Instead of sleep()ing, try to work on the tasks that need to be worked on. If another daemon grabbed them already that's fine, but in the worst case we just gracefully degrade and do everything in process. So we get the best of both worlds: if we have parallelizable tasks and free daemons, things will execute in parallel. If we have nonparallelizable tasks or no free daemons, things will execute in process.
Test Plan: Ran `drydock_control.php --trace` and saw it perform cascading allocations without sleeping or special casing.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2015
Differential Revision: https://secure.phabricator.com/D3861
  public function waitUntilActive() {
    while (true) {
      $lease = $this->reload();
      if (!$lease) {
        throw new Exception(pht('Failed to reload lease.'));
      }

      $status = $lease->getStatus();

      switch ($status) {
        case DrydockLeaseStatus::STATUS_ACTIVE:
          return;
        case DrydockLeaseStatus::STATUS_RELEASED:
          throw new Exception(pht('Lease has already been released!'));
        case DrydockLeaseStatus::STATUS_EXPIRED:
          throw new Exception(pht('Lease has already expired!'));
        case DrydockLeaseStatus::STATUS_BROKEN:
          throw new Exception(pht('Lease has been broken!'));
        case DrydockLeaseStatus::STATUS_PENDING:
        case DrydockLeaseStatus::STATUS_ACQUIRED:
          break;
        default:
          throw new Exception(
            pht(
              'Lease has unknown status "%s".',
              $status));
      }

      sleep(1);
    }
  }
  public function setActivateWhenAcquired($activate) {
    $this->activateWhenAcquired = $activate;
    return $this;
  }
Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:
- Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
- Optimistic "slot locks" might be a pretty good primitive to make that easy to implement and reason about in most cases.
The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:
```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```
When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
If you need to do limit-based locks (e.g., maximum of 3 locks) you could acquire a lock like this:
```
mylock(whatever).slot(2)
```
Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.
This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.
(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)
Test Plan:
To create a race where the same binding is allocated as a resource twice:
- Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
- (Comment out slot lock acquisition if you have this patch.)
- Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.
This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.
To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Differential Revision: https://secure.phabricator.com/D14118
2015-09-21 13:45:25 +02:00
  public function needSlotLock($key) {
    $this->slotLocks[] = $key;
    return $this;
  }
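As a rough sketch of the limit-based pattern described above (the lock-name
format follows the example in the summary; the slot-picking strategy here is
just an assumption):
// Sketch: limit a resource to 3 concurrent leases by contending for one of
// three named slots; acquisition fails if the chosen slot is already held.
$slot = mt_rand(0, 2);
$lease
  ->needSlotLock(sprintf('%s.slot(%d)', $resource->getPHID(), $slot))
  ->acquireOnResource($resource);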
  public function acquireOnResource(DrydockResource $resource) {
    $expect_status = DrydockLeaseStatus::STATUS_PENDING;
    $actual_status = $this->getStatus();
    if ($actual_status != $expect_status) {
      throw new Exception(
        pht(
          'Trying to acquire a lease on a resource which is in the wrong '.
          'state: status must be "%s", actually "%s".',
          $expect_status,
          $actual_status));
    }

    if ($this->activateWhenAcquired) {
      $new_status = DrydockLeaseStatus::STATUS_ACTIVE;
    } else {
      $new_status = DrydockLeaseStatus::STATUS_ACQUIRED;
Implement a rough AlmanacService blueprint in Drydock
Summary:
Ref T9253. Broadly, this realigns Allocator behavior to be more consistent and straightforward and amenable to intended future changes.
This attempts to make language more consistent: resources are "allocated" and leases are "acquired".
This prepares for (but does not implement) optimistic "slot locking", as discussed in D10304. Although I suspect some blueprints will need to perform other locking eventually, this does feel like a good fit for most of the locking blueprints need to do.
In particular, I've made the blueprint operations on `$resource` and `$lease` objects more purposeful: they need to invoke an activator on the appropriate object to be implemented correctly. Before they invoke this activator method, they configure the object. In a future diff, this configuration will include specifying slot locks that the lease or resource must acquire. So the API will be something like:
$lease
->setActivateWhenAcquired(true)
->needSlotLock('x')
->needSlotLock('y')
->acquireOnResource($resource);
In the common case where slot locks are a good fit, I think this should make correct blueprint implementation very straightforward.
This prepares for (but does not implement) resources and leases which need significant setup steps. I've basically carved out two modes:
- The "activate immediately" mode, as here, immediately opens the resource or activates the lease. This is appropriate if little or no setup is required. I expect many leases to operate in this mode, although I expect many resources will operate in the other mode.
- The "allocate now, activate later" mode, which is not fully implemented yet. This will queue setup workers when the allocator exits. Overall, this will work very similarly to Harbormaster.
- This new structure makes it acceptable for blueprints to sleep as long as they want during resource allocation and lease acquisition, so long as they are not waiting on anything which needs to be completed by the queue. Putting a `sleep(15 * 60)` in your EC2Blueprint to wait for EC2 to bring a machine up will perform worse than using delayed activation, but won't deadlock the queue or block any locks.
Overall, this flow is more similar to Harbormaster's flow. Having consistency between Harbormaster's model and Drydock's model is good, and I think Harbormaster's model is also simply much better than Drydock's (what exists today in Drydock was implemented a long time ago, and we had more support and infrastructure by the time Harbormaster was implemented, as well as a more clearly defined problem).
The particular strength of Harbormaster is that objects always (or almost always, at least) have a single, clearly defined writer. Ensuring objects have only one writer prevents races and makes reasoning about everything easier.
Drydock does not currently have a clearly defined single writer, but this moves us in that direction. We'll probably need more primitives eventually to flesh this out, like Harbormaster's command queue for messaging objects which you can't write to.
This blueprint was originally implemented in D13843. This makes a few changes to the blueprint itself:
- A bunch of code from that (e.g., interfaces) doesn't exist yet.
- I let the blueprint have multiple services. This simplifies the code a little and seems like it costs us nothing.
This also removes `bin/drydock create-resource`, which no longer makes sense to expose. It won't get locking, leasing, etc., correct, and can not be made correct.
NOTE: This technically works but doesn't do anything useful yet.
Test Plan: Used `bin/drydock lease --type host` to acquire leases against these blueprints.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Subscribers: Mnkras
Maniphest Tasks: T9253
Differential Revision: https://secure.phabricator.com/D14117
2015-09-21 13:43:53 +02:00
    }

    if ($new_status == DrydockLeaseStatus::STATUS_ACTIVE) {
      if ($resource->getStatus() == DrydockResourceStatus::STATUS_PENDING) {
        throw new Exception(
          pht(
            'Trying to acquire an active lease on a pending resource. '.
            'You can not immediately activate leases on resources which '.
            'need time to start up.'));
      }
    }
Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:
- Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
- Optimistic "slot locks" might be a pretty good primitive to make that easy to implement and reason about in most cases.
The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:
```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```
When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
If you need to do limit-based locks (e.g., a maximum of 3 locks), you could acquire a lock like this:
```
mylock(whatever).slot(2)
```
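For example, a blueprint that wants to cap a resource at a fixed number of concurrent leases could contend for one of N named slots. This is only a sketch (the lock-name format and the random slot selection are illustrative choices, not something this change prescribes):
```
// Sketch: enforce "at most $limit leases per resource" by competing for
// one of $limit named slots. Collisions simply fail the acquisition,
// which can then be retried or routed to another resource.
$limit = 3;
$slot = mt_rand(0, $limit - 1);

$lease
  ->needSlotLock('lease('.$resource->getPHID().').slot('.$slot.')')
  ->acquireOnResource($resource);
```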
Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.
This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.
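Under the hood, I'd expect slot lock acquisition to amount to optimistic unique-key inserts: each needed lock becomes a row in a table with a unique key on the lock name, inserted inside the same transaction as the status update, so a collision throws and rolls the whole acquisition back. A simplified sketch (not the actual `DrydockSlotLock` implementation; table and column names are placeholders):
```
// Simplified sketch of optimistic slot locking; table and column names
// are placeholders, not the real schema.
foreach ($locks as $lock_name) {
  queryfx(
    $conn_w,
    'INSERT INTO %T (ownerPHID, lockKey) VALUES (%s, %s)',
    $lock_table,
    $owner_phid,
    $lock_name);
}
// If any INSERT hits the unique key on lockKey, it throws, the
// surrounding transaction rolls back, and the acquisition fails.
```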
(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)
Test Plan:
To create a race where the same binding is allocated as a resource twice:
- Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
- (Comment out slot lock acquisition if you have this patch.)
- Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.
This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.
To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Differential Revision: https://secure.phabricator.com/D14118
2015-09-21 13:45:25 +02:00
    $this->openTransaction();

      $this
        ->setResourceID($resource->getID())
        ->setStatus($new_status)
        ->save();

      // Acquire slot locks inside the same transaction as the status
      // write, so the status change and the locks succeed or fail as
      // a unit.
      DrydockSlotLock::acquireLocks($this->getPHID(), $this->slotLocks);
      $this->slotLocks = array();

    $this->saveTransaction();

    $this->isAcquired = true;
Add a command queue to Drydock to manage lease/resource release
Summary:
Ref T9252. Broadly, Drydock currently races on releasing objects from the "active" state. To reproduce this:
- Scatter some sleep()s pretty much anywhere in the release code.
- Release several times from web UI or CLI in quick succession.
Resources or leases will execute some release code twice or otherwise do inconsistent things.
(I didn't chase down a detailed reproduction scenario for this since inspection of the code makes it clear that there are no meaningful locks or mechanisms preventing this.)
Instead, add a Harbormaster-style command queue to resources and leases. When something wants to do a release, it adds a command to the queue and schedules a worker. The workers acquire a lock, then try to consume commands from the queue.
This guarantees that only one process is responsible for writes to active resource/leases.
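As a sketch of the intended flow (the `DrydockCommand` setters and the `COMMAND_RELEASE` constant below are modeled on the command query used later in this change, but the exact names are assumptions):
```
// A release request never writes to the lease itself. It records a
// command targeting the lease and schedules the update worker, which
// takes the lock and is the only writer while the lease is active.
id(new DrydockCommand())
  ->setAuthorPHID($viewer->getPHID())
  ->setTargetPHID($lease->getPHID())
  ->setCommand(DrydockCommand::COMMAND_RELEASE)
  ->save();

$lease->scheduleUpdate();
```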
This is the last major step to giving resources and leases a single writer during all states:
- Resource, Unsaved: AllocatorWorker
- Resource, Pending: ResourceWorker (Possible rename to "Allocated"?)
- Resource, Open: This diff, ResourceUpdateWorker. (Likely rename to "Active").
- Resource, Closed/Broken: Future destruction worker. (Likely rename to "Released" / "Broken"; maybe remove "Broken").
- Resource, Destroyed: No writes.
- Lease, Unsaved: Whatever wants the lease.
- Lease, Pending: AllocatorWorker
- Lease, Acquired: LeaseWorker
- Lease, Active: This diff, LeaseUpdateWorker.
- Lease, Released/Broken: Future destruction worker (Maybe remove "Broken"?)
- Lease, Expired: No writes. (Likely rename to "Destroyed").
In most phases, we can already guarantee that there is a single writer without doing any extra work. This is more complicated in the "Active" case because the release buttons on the web UI, the release tools on the CLI, the lease requestor itself, the garbage collector, and any other release process cleaning up related objects may try to effect a release. All of these could race one another (and, in many cases, race other processes from other phases because all of these get to act immediately) as this code is currently written. Using a queue here lets us make sure there's only a single writer in this phase.
One thing which is notable is that whatever acquires a lease **can not write to it**! It is never the writer once it queues the lease for activation. It can not write to any resources, either. And, likewise, Blueprints can not write to resources while acquiring or releasing leases.
We may need to provide a mechanism so that blueprints and/or resource/lease holders get to attach some storage to resources/leases for bookkeeping. For example, a blueprint might need to keep some kind of cache on a resource to help it manage state. But I think we can cross that bridge when we come to it, and nothing else would need to write to this storage, so it's technically straightforward to introduce such a mechanism if we need one.
Test Plan:
- Viewed buttons in web UI, checked enabled/disabled states.
- Clicked the buttons.
- Saw commands show up in the command queue.
- Saw some daemon stuff get scheduled.
- Ran CLI tools, saw commands get consumed and resources/leases release.
Reviewers: hach-que, chad
Reviewed By: chad
Maniphest Tasks: T9252
Differential Revision: https://secure.phabricator.com/D14143
2015-09-23 16:42:08 +02:00
    if ($new_status == DrydockLeaseStatus::STATUS_ACTIVE) {
      $this->didActivate();
    }

    return $this;
  }

  public function isAcquiredLease() {
    return $this->isAcquired;
  }

  public function activateOnResource(DrydockResource $resource) {
    $expect_status = DrydockLeaseStatus::STATUS_ACQUIRED;
    $actual_status = $this->getStatus();
    if ($actual_status != $expect_status) {
      throw new Exception(
        pht(
          'Trying to activate a lease which has the wrong status: status '.
          'must be "%s", actually "%s".',
          $expect_status,
          $actual_status));
    }

    if ($resource->getStatus() == DrydockResourceStatus::STATUS_PENDING) {
      // TODO: Be stricter about this?
      throw new Exception(
        pht(
          'Trying to activate a lease on a pending resource.'));
    }

    $this->openTransaction();

      $this
        ->setStatus(DrydockLeaseStatus::STATUS_ACTIVE)
        ->save();

      DrydockSlotLock::acquireLocks($this->getPHID(), $this->slotLocks);
      $this->slotLocks = array();

    $this->saveTransaction();

    $this->isActivated = true;

    $this->didActivate();

    return $this;
  }

  public function isActivatedLease() {
    return $this->isActivated;
  }

  public function canRelease() {
    if (!$this->getID()) {
      return false;
    }

    switch ($this->getStatus()) {
      case DrydockLeaseStatus::STATUS_RELEASED:
        return false;
      default:
        return true;
    }
  }

  public function scheduleUpdate($epoch = null) {
    PhabricatorWorker::scheduleTask(
      'DrydockLeaseUpdateWorker',
      array(
        'leasePHID' => $this->getPHID(),
        'isExpireTask' => ($epoch !== null),
      ),
      array(
        'objectPHID' => $this->getPHID(),
        'delayUntil' => $epoch,
      ));
  }

  private function didActivate() {
    $viewer = PhabricatorUser::getOmnipotentUser();
    $need_update = false;

    // If any unconsumed commands arrived while the lease was being
    // activated, schedule an update so they get processed.
    $commands = id(new DrydockCommandQuery())
      ->setViewer($viewer)
      ->withTargetPHIDs(array($this->getPHID()))
      ->withConsumed(false)
      ->execute();
    if ($commands) {
      $need_update = true;
    }

    if ($need_update) {
      $this->scheduleUpdate();
    }

    // If the lease has an expiration time, schedule an update for when
    // it expires.
    $expires = $this->getUntil();
    if ($expires) {
      $this->scheduleUpdate($expires);
    }
  }


/* -( PhabricatorPolicyInterface )----------------------------------------- */


  public function getCapabilities() {
    return array(
      PhabricatorPolicyCapability::CAN_VIEW,
      PhabricatorPolicyCapability::CAN_EDIT,
    );
  }

  public function getPolicy($capability) {
    if ($this->getResource()) {
      return $this->getResource()->getPolicy($capability);
    }

    // TODO: Implement reasonable policies.

    return PhabricatorPolicies::getMostOpenPolicy();
  }

  public function hasAutomaticCapability($capability, PhabricatorUser $viewer) {
    if ($this->getResource()) {
      return $this->getResource()->hasAutomaticCapability($capability, $viewer);
    }
    return false;
  }

  public function describeAutomaticCapability($capability) {
    return pht('Leases inherit policies from the resources they lease.');
  }
Drydock Rough Cut
Summary:
Rough cut of Drydock. This is very basic and doesn't do much of use yet (it
//does// allocate EC2 machines as host resources and expose interfaces to them),
but I think the overall structure is more or less reasonable.
== Interfaces
Vision: Applications interact with Drydock resources through DrydockInterfaces,
like **command**, **filesystem** and **httpd** interfaces. Each interface allows
applications to perform some kind of operation on the resource, like executing
commands, reading/writing files, or configuring a web server. Interfaces have a
concrete, specific API:
// Filesystem Interface
$fs = $lease->getInterface('filesystem'); // Constants, some day?
$fs->writeFile('index.html', 'hello world!');
// Command Interface
$cmd = $lease->getInterface('command');
echo $cmd->execx('uptime');
// HTTPD Interface
$httpd = $lease->getInterface('httpd');
$httpd->restart();
Interfaces are mostly just stock, although installs might add new interfaces if
they expose different ways to interact with resources (for instance, a resource
might want to expose a new 'MongoDB' interface or whatever).
Currently: We have like part of a command interface.
== Leases
Vision: Leases keep track of which resources are in use, and what they're being
used for. They allow us to know when we need to allocate more resources (too
many sandcastles on the existing hosts, e.g.) and when we can release resources
(because they are no longer being used). They also give applications something
to hold while resources are being allocated.
// EXAMPLE: How this should work some day.
$allocator = new DrydockAllocator();
$allocator->setResourceType('sandcastle');
$allocator->setAttributes(
array(
'diffID' => $diff->getID(),
));
$lease = $allocator->allocate();
$diff->setSandcastleLeaseID($lease->getID());
// ...
if ($lease->getStatus() == DrydockLeaseStatus::STATUS_ACTIVE) {
$sandcastle_link = $lease->getInterface('httpd')->getURI('/');
} else {
$sandcastle_link = 'Still building your sandcastle...';
}
echo "Sandcastle for this diff: ".$sandcastle_link;
// EXAMPLE: How this actually works now.
$allocator = new DrydockAllocator();
$allocator->setResourceType('host');
// NOTE: Allocation is currently synchronous but will be task-driven soon.
$lease = $allocator->allocate();
Leases are completely stock, installs will not define new lease types.
Currently: Leases exist and work but are very very basic.
== Resources
Vision: Resources represent some actual thing we've put somewhere, whether it's
a host, a block of storage, a webroot, or whatever else. Applications interact
through resources by acquiring leases to them, and then getting interfaces
through these leases. The lease acquisition process has a side effect of
allocating new resources if a lease can't be acquired on existing resources
(e.g., the application wants storage but all storage resources are full) and
things are configured to autoscale.
Resources may themselves acquire leases in order to allocate. For instance, a
storage resource might first acquire a lease to a host resource. A 'test
scaffold' resource might lease a storage resource and a mysql resource.
Not all resources are auto-allocate: the entry-level version of Drydock is that
you manually allocate a couple boxes and configure them through the web console.
Then, e.g., 'storage' / 'webroot' resources allocate on top of them, but the
host pool itself does not autoscale.
Resources are completely stock, they are abstract shells representing any
arbitrary thing.
Currently: Resource exist ('host' only) but are very very basic.
== Blueprints
Vision: Blueprints contain instructions for building interfaces to, (possibly)
allocating, updating, managing, and destroying a specific type of resource in a
specific location. One way to think of them is that they are scripts for
creating and deleting resources. For example, the LocalHost, RemoteHost and
EC2Host blueprints can all manage 'host' resources.
Eventually, we will support more types of resources (storage, webroot,
sandcastle, test scaffold, phacility deployment) and more providers for resource
types, some of which will be in the Phabricator mainline and some of which will
be custom.
Blueprints are very custom and specific to application types, so installs will
define new blueprints if they are making significant use of Drydock.
Currently: They exist but have few capabilities. The stock blueprints do nearly
nothing useful. There is a technically functional blueprint for host allocation
in EC2.
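As a purely hypothetical skeleton of what an install-defined blueprint might look like (the base-class method names below are placeholders; the actual DrydockBlueprint API is not shown in this document):
```
// Hypothetical sketch only: method names and signatures are placeholders,
// not the real DrydockBlueprint API.
final class ExampleDatabaseBlueprint extends DrydockBlueprint {

  public function getType() {
    // The kind of resource this blueprint knows how to manage.
    return 'database';
  }

  public function allocateResource() {
    // Provision the actual thing, e.g. lease a host resource, install
    // and start the database, then record it as a new resource.
  }

  public function getInterface(
    DrydockResource $resource,
    DrydockLease $lease,
    $type) {
    // Hand back a concrete interface so applications holding a lease
    // can talk to the database.
  }
}
```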
== Allocator
This is just the actual code to execute the lease acquisition process.
Test Plan: Ran "drydock_control.php" script, it allocated a machine in EC2,
acquired a lease on it, interfaced with it, and then released the lease. Ran it
again, got a fresh lease on the existing resource.
Reviewers: btrahan, jungejason
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D1454
2012-01-11 20:18:40 +01:00
}