Drydock Rough Cut
Summary:
Rough cut of Drydock. This is very basic and doesn't do much of use yet (it
//does// allocate EC2 machines as host resources and expose interfaces to them),
but I think the overall structure is more or less reasonable.
== Interfaces
Vision: Applications interact with Drydock resources through DrydockInterfaces,
like **command**, **filesystem** and **httpd** interfaces. Each interface allows
applications to perform some kind of operation on the resource, like executing
commands, reading/writing files, or configuring a web server. Interfaces have a
concrete, specific API:
  // Filesystem Interface
  $fs = $lease->getInterface('filesystem'); // Constants, some day?
  $fs->writeFile('index.html', 'hello world!');

  // Command Interface
  $cmd = $lease->getInterface('command');
  echo $cmd->execx('uptime');

  // HTTPD Interface
  $httpd = $lease->getInterface('httpd');
  $httpd->restart();
Interfaces are mostly just stock, although installs might add new interfaces if
they expose different ways to interact with resources (for instance, a resource
might want to expose a new 'MongoDB' interface or whatever).
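For example (a hedged sketch; the 'mongodb' interface and its
getConnectionURI() method are hypothetical and not part of this change), a
custom interface would be consumed the same way as the stock ones:

  // Hypothetical: a custom interface exposed by an install's blueprint.
  $mongo = $lease->getInterface('mongodb');
  echo $mongo->getConnectionURI();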
Currently: We have part of a command interface.
== Leases
Vision: Leases keep track of which resources are in use, and what they're being
used for. They allow us to know when we need to allocate more resources (too
many sandcastles on the existing hosts, e.g.) and when we can release resources
(because they are no longer being used). They also give applications something
to hold while resources are being allocated.
  // EXAMPLE: How this should work some day.
  $allocator = new DrydockAllocator();
  $allocator->setResourceType('sandcastle');
  $allocator->setAttributes(
    array(
      'diffID' => $diff->getID(),
    ));
  $lease = $allocator->allocate();
  $diff->setSandcastleLeaseID($lease->getID());

  // ...

  if ($lease->getStatus() == DrydockLeaseStatus::STATUS_ACTIVE) {
    $sandcastle_link = $lease->getInterface('httpd')->getURI('/');
  } else {
    $sandcastle_link = 'Still building your sandcastle...';
  }
  echo "Sandcastle for this diff: ".$sandcastle_link;

  // EXAMPLE: How this actually works now.
  $allocator = new DrydockAllocator();
  $allocator->setResourceType('host');

  // NOTE: Allocation is currently synchronous but will be task-driven soon.
  $lease = $allocator->allocate();
Leases are completely stock; installs will not define new lease types.
Currently: Leases exist and work, but are very basic.
== Resources
Vision: Resources represent some actual thing we've put somewhere, whether it's
a host, a block of storage, a webroot, or whatever else. Applications interact
with resources by acquiring leases to them, and then getting interfaces
through those leases. The lease acquisition process has a side effect of
allocating new resources if a lease can't be acquired on existing resources
(e.g., the application wants storage but all storage resources are full) and
things are configured to autoscale.
Resources may themselves acquire leases in order to allocate. For instance, a
storage resource might first acquire a lease to a host resource. A 'test
scaffold' resource might lease a storage resource and a mysql resource.
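As a hedged sketch of that nesting (the 'hostLeaseID' attribute name is
hypothetical), a 'storage' blueprint might do roughly the following while
building its resource:

  // Hypothetical: a 'storage' resource first leasing a host for itself.
  $allocator = new DrydockAllocator();
  $allocator->setResourceType('host');
  $host_lease = $allocator->allocate();

  // Remember which host lease backs the new storage resource.
  $resource->setAttribute('hostLeaseID', $host_lease->getID());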
Not all resources auto-allocate: the entry-level version of Drydock is that
you manually allocate a couple of boxes and configure them through the web
console. Then, e.g., 'storage' / 'webroot' resources allocate on top of them,
but the host pool itself does not autoscale.
Resources are completely stock; they are abstract shells representing any
arbitrary thing.
Currently: Resources exist ('host' only) but are very basic.
== Blueprints
Vision: Blueprints contain instructions for building interfaces to, (possibly)
allocating, updating, managing, and destroying a specific type of resource in a
specific location. One way to think of them is that they are scripts for
creating and deleting resources. For example, the LocalHost, RemoteHost and
EC2Host blueprints can all manage 'host' resources.
Eventually, we will support more types of resources (storage, webroot,
sandcastle, test scaffold, phacility deployment) and more providers for resource
types, some of which will be in the Phabricator mainline and some of which will
be custom.
Blueprints are very custom and specific to application types, so installs will
define new blueprints if they are making significant use of Drydock.
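To sketch the shape of that (hedged; the method names below are illustrative
assumptions, not the current blueprint API), an install-defined blueprint
might look something like this:

  // Hypothetical skeleton of an install-defined blueprint.
  final class CustomSandcastleBlueprint extends DrydockBlueprint {

    public function getType() {
      return 'sandcastle';
    }

    public function allocateResource() {
      // Lease a host, deploy the diff's working copy onto it, configure a
      // webroot, and return the new 'sandcastle' resource.
    }

  }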
Currently: They exist but have few capabilities. The stock blueprints do nearly
nothing useful. There is a technically functional blueprint for host allocation
in EC2.
== Allocator
This is just the actual code to execute the lease acquisition process.
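As a hedged outline (a paraphrase of the behavior described above, not the
real DrydockAllocator internals; the helper names are hypothetical):

  // Rough outline of lease acquisition; helper functions are hypothetical.
  $resources = load_open_resources_of_type($type);
  $resource = find_leasable_resource($resources);
  if (!$resource) {
    // Side effect: have a blueprint allocate a new resource of this type.
    $resource = allocate_new_resource_via_blueprint($type);
  }
  $lease = acquire_lease_on($resource);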
Test Plan: Ran the "drydock_control.php" script; it allocated a machine in
EC2, acquired a lease on it, interfaced with it, and then released the lease.
Ran it again and got a fresh lease on the existing resource.
Reviewers: btrahan, jungejason
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D1454
<?php

final class DrydockResource extends DrydockDAO
  implements PhabricatorPolicyInterface {

  protected $id;
  protected $phid;
  protected $blueprintPHID;
  protected $status;
  protected $type;
  protected $name;
  protected $attributes = array();
  protected $capabilities = array();
  protected $ownerPHID;

  private $blueprint = self::ATTACHABLE;

Implement a rough AlmanacService blueprint in Drydock
Summary:
Ref T9253. Broadly, this realigns Allocator behavior to be more consistent and straightforward and amenable to intended future changes.
This attempts to make language more consistent: resources are "allocated" and leases are "acquired".
This prepares for (but does not implement) optimistic "slot locking", as discussed in D10304. Although I suspect some blueprints will need to perform other locking eventually, this does feel like a good fit for most of the locking blueprints need to do.
In particular, I've made the blueprint operations on `$resource` and `$lease` objects more purposeful: they need to invoke an activator on the appropriate object to be implemented correctly. Before they invoke this activator method, they configure the object. In a future diff, this configuration will include specifying slot locks that the lease or resource must acquire. So the API will be something like:
  $lease
    ->setActivateWhenAcquired(true)
    ->needSlotLock('x')
    ->needSlotLock('y')
    ->acquireOnResource($resource);
In the common case where slot locks are a good fit, I think this should make correct blueprint implementation very straightforward.
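As a hedged aside, the resource side should look similar
(setActivateWhenAllocated(), needSlotLock() and allocateResource() exist on
DrydockResource below, but this exact call sequence is an assumption):

  // Assumed resource-side analogue of the lease API above.
  $resource
    ->setActivateWhenAllocated(true)
    ->needSlotLock('z')
    ->allocateResource();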
This prepares for (but does not implement) resources and leases which need significant setup steps. I've basically carved out two modes:
- The "activate immediately" mode, as here, immediately opens the resource or activates the lease. This is appropriate if little or no setup is required. I expect many leases to operate in this mode, although I expect many resources will operate in the other mode.
- The "allocate now, activate later" mode, which is not fully implemented yet. This will queue setup workers when the allocator exits. Overall, this will work very similarly to Harbormaster.
- This new structure makes it acceptable for blueprints to sleep as long as they want during resource allocation and lease acquisition, so long as they are not waiting on anything which needs to be completed by the queue. Putting a `sleep(15 * 60)` in your EC2Blueprint to wait for EC2 to bring a machine up will perform worse than using delayed activation, but won't deadlock the queue or block any locks.
Overall, this flow is more similar to Harbormaster's flow. Having consistency between Harbormaster's model and Drydock's model is good, and I think Harbormaster's model is also simply much better than Drydock's (what exists today in Drydock was implemented a long time ago, and we had more support and infrastructure by the time Harbormaster was implemented, as well as a more clearly defined problem).
The particular strength of Harbormaster is that objects always (or almost always, at least) have a single, clearly defined writer. Ensuring objects have only one writer prevents races and makes reasoning about everything easier.
Drydock does not currently have a clearly defined single writer, but this moves us in that direction. We'll probably need more primitives eventually to flesh this out, like Harbormaster's command queue for messaging objects which you can't write to.
This blueprint was originally implemented in D13843. This makes a few changes to the blueprint itself:
- A bunch of code from that (e.g., interfaces) doesn't exist yet.
- I let the blueprint have multiple services. This simplifies the code a little and seems like it costs us nothing.
This also removes `bin/drydock create-resource`, which no longer makes sense to expose. It won't get locking, leasing, etc., correct, and cannot be made correct.
NOTE: This technically works but doesn't do anything useful yet.
Test Plan: Used `bin/drydock lease --type host` to acquire leases against these blueprints.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Subscribers: Mnkras
Maniphest Tasks: T9253
Differential Revision: https://secure.phabricator.com/D14117

  private $isAllocated = false;
  private $isActivated = false;
  private $activateWhenAllocated = false;

Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:
- Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
- Optimistic "slot locks" might be a pretty good primitive to make that easy to implement and reason about in most cases.
The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:
```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```
When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
If you need to do limit-based locks (e.g., maximum of 3 locks) you could acquire a lock like this:
```
mylock(whatever).slot(2)
```
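For instance (a hedged sketch, not code from this change), a blueprint that wants at most three leases on a resource might pick one numbered slot and require it; if that slot is already held, the acquisition fails and can be retried on another slot:
```
// Hypothetical: limit a resource to 3 concurrent leases via numbered slots.
$limit = 3;
$slot = mt_rand(0, $limit - 1);
$lease->needSlotLock($resource->getPHID().'.slot('.$slot.')');
```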
Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.
This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.
(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)
Test Plan:
To create a race where the same binding is allocated as a resource twice:
- Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
- (Comment out slot lock acquisition if you have this patch.)
- Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.
This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.
To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Differential Revision: https://secure.phabricator.com/D14118

  private $slotLocks = array();

  protected function getConfiguration() {
    return array(
      self::CONFIG_AUX_PHID => true,
      self::CONFIG_SERIALIZATION => array(
        'attributes' => self::SERIALIZATION_JSON,
        'capabilities' => self::SERIALIZATION_JSON,
      ),
      self::CONFIG_COLUMN_SCHEMA => array(
        'name' => 'text255',
        'ownerPHID' => 'phid?',
        'status' => 'uint32',
        'type' => 'text64',
      ),
      self::CONFIG_KEY_SCHEMA => array(
        'key_phid' => null,
        'phid' => array(
          'columns' => array('phid'),
          'unique' => true,
        ),
      ),
    ) + parent::getConfiguration();
  }

  public function generatePHID() {
    return PhabricatorPHID::generateNewPHID(
      DrydockResourcePHIDType::TYPECONST);
  }

  public function getAttribute($key, $default = null) {
    return idx($this->attributes, $key, $default);
  }

  public function getAttributesForTypeSpec(array $attribute_names) {
    return array_select_keys($this->attributes, $attribute_names);
  }

  public function setAttribute($key, $value) {
    $this->attributes[$key] = $value;
    return $this;
  }

  public function getCapability($key, $default = null) {
    return idx($this->capabilities, $key, $default);
  }

  public function getInterface(DrydockLease $lease, $type) {
    return $this->getBlueprint()->getInterface($this, $lease, $type);
  }

  public function getBlueprint() {
    return $this->assertAttached($this->blueprint);
  }

  public function attachBlueprint(DrydockBlueprint $blueprint) {
    $this->blueprint = $blueprint;
    return $this;
  }

  public function setActivateWhenAllocated($activate) {
    $this->activateWhenAllocated = $activate;
    return $this;
  }

Implement optimistic "slot locks" in Drydock
Summary:
See discussion in D10304. There's a lot of context there, but the general idea is:
- Blueprints should manage locks in a granular way during the actual allocation/acquisition phase.
- Optimistic "slot locks" might a pretty good primitive to make that easy to implement and reason about in most cases.
The way these locks work is that you just pick some name for the lock (like the PHID of a resource) and say that it needs to be acquired for the allocation/acquisition to work:
```
...
->needSlotLock("mylock(PHID-XYZQ-...)")
...
```
When you fire off the acquisition or allocation, it fails unless it can acquire the slot with that name. This is really simple (no explicit lock management) and a pretty good fit for most of the locking that blueprints and leases need to do.
If you need to do limit-based locks (e.g., a maximum of 3 locks), you could acquire a lock like this:
```
mylock(whatever).slot(2)
```
Blueprints generally only contend with themselves, so it's normally OK for them to pick whatever strategy works best for them in naming locks.
This may not work as well if you have a huge number of slots (e.g., 100TB you want to give out in 1MB chunks), or other complex needs for locks (like you have to synchronize access to some external resource), but slot locks don't need to be the only mechanism that blueprints use. If they run into a problem that slot locks aren't a good fit for, they can use something else instead. For now, slot locks seem like a good fit for the problems we currently face and most of the problems I anticipate facing.
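As a hedged example of the limit-based pattern, a blueprint that wants to allow at most N leases on a resource could pick one of N slot names and let the optimistic lock reject collisions. Only `needSlotLock()` and the `name.slot(N)` convention come from this change; everything else here is assumed:
```
// Sketch: allow at most $limit concurrent leases on this resource.
$limit = 3;
$slot = mt_rand(0, $limit - 1);
$lease->needSlotLock('lease('.$resource->getPHID().').slot('.$slot.')');
// If another lease already holds this slot, acquisition fails; the caller
// can retry with a different slot or allocate a new resource instead.
```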
(The release workflows have other race issues which I'm not addressing here. They work fine if nothing races, but aren't race-safe.)
Test Plan:
To create a race where the same binding is allocated as a resource twice:
- Add `sleep(10)` near the beginning of `allocateResource()`, after the free bindings are loaded but before resources are allocated.
- (Comment out slot lock acquisition if you have this patch.)
- Run `bin/drydock lease ...` in two windows, within 10 seconds of one another.
This will reliably double-allocate the binding because both blueprints see a view of the world where the binding is free.
To verify the lock works, un-comment it (or apply this patch) and run the same test again. Now, the lock fails in one process and only one resource is allocated.
Reviewers: hach-que, chad
Reviewed By: hach-que, chad
Differential Revision: https://secure.phabricator.com/D14118
2015-09-21 13:45:25 +02:00
  // Register a slot lock which must be acquired when this resource is
  // allocated or activated.
  public function needSlotLock($key) {
    $this->slotLocks[] = $key;
    return $this;
  }

  // Persist a new resource and acquire its slot locks. If
  // setActivateWhenAllocated(true) was called, the resource opens immediately;
  // otherwise it is saved in the pending state and activated later.
  public function allocateResource() {
    if ($this->getID()) {
      throw new Exception(
        pht(
          'Trying to allocate a resource which has already been persisted. '.
          'Only new resources may be allocated.'));
    }

    $expect_status = DrydockResourceStatus::STATUS_PENDING;
    $actual_status = $this->getStatus();
    if ($actual_status != $expect_status) {
      throw new Exception(
        pht(
          'Trying to allocate a resource from the wrong status. Status must '.
          'be "%s", actually "%s".',
          $expect_status,
          $actual_status));
    }

    if ($this->activateWhenAllocated) {
      $new_status = DrydockResourceStatus::STATUS_OPEN;
    } else {
      $new_status = DrydockResourceStatus::STATUS_PENDING;
    }

    $this->openTransaction();
      $this
        ->setStatus($new_status)
        ->save();

      // Slot locks are acquired inside the same transaction as the status
      // change.
      DrydockSlotLock::acquireLocks($this->getPHID(), $this->slotLocks);
      $this->slotLocks = array();
    $this->saveTransaction();

    $this->isAllocated = true;

    return $this;
  }

  public function isAllocatedResource() {
    return $this->isAllocated;
  }

  // Activate a pending (already persisted) resource: mark it open and acquire
  // its slot locks.
  public function activateResource() {
    if (!$this->getID()) {
      throw new Exception(
        pht(
          'Trying to activate a resource which has not yet been persisted.'));
    }

    $expect_status = DrydockResourceStatus::STATUS_PENDING;
    $actual_status = $this->getStatus();
    if ($actual_status != $expect_status) {
      throw new Exception(
        pht(
          'Trying to activate a resource from the wrong status. Status must '.
          'be "%s", actually "%s".',
          $expect_status,
          $actual_status));
    }

    $this->openTransaction();
      $this
        ->setStatus(DrydockResourceStatus::STATUS_OPEN)
        ->save();

      DrydockSlotLock::acquireLocks($this->getPHID(), $this->slotLocks);
      $this->slotLocks = array();
    $this->saveTransaction();

    $this->isActivated = true;

    return $this;
  }

  public function isActivatedResource() {
    return $this->isActivated;
  }

Add a command queue to Drydock to manage lease/resource release
Summary:
Ref T9252. Broadly, Drydock currently races on releasing objects from the "active" state. To reproduce this:
- Scatter some sleep()s pretty much anywhere in the release code.
- Release several times from web UI or CLI in quick succession.
Resources or leases will execute some release code twice or otherwise do inconsistent things.
(I didn't chase down a detailed reproduction scenario for this since inspection of the code makes it clear that there are no meaningful locks or mechanisms preventing this.)
Instead, add a Harbormaster-style command queue to resources and leases. When something wants to do a release, it adds a command to the queue and schedules a worker. The workers acquire a lock, then try to consume commands from the queue.
This guarantees that only one process is responsible for writes to active resource/leases.
This is the last major step to giving resources and leases a single writer during all states:
- Resource, Unsaved: AllocatorWorker
- Resource, Pending: ResourceWorker (Possible rename to "Allocated?")
- Resource, Open: This diff, ResourceUpdateWorker. (Likely rename to "Active").
- Resource, Closed/Broken: Future destruction worker. (Likely rename to "Released" / "Broken"; maybe remove "Broken").
- Resource, Destroyed: No writes.
- Lease, Unsaved: Whatever wants the lease.
- Lease, Pending: AllocatorWorker
- Lease, Acquired: LeaseWorker
- Lease, Active: This diff, LeaseUpdateWorker.
- Lease, Released/Broken: Future destruction worker (Maybe remove "Broken"?)
- Lease, Expired: No writes. (Likely rename to "Destroyed").
In most phases, we can already guarantee that there is a single writer without doing any extra work. This is more complicated in the "Active" case because the release buttons on the web UI, the release tools on the CLI, the lease requestor itself, the garbage collector, and any other release process cleaning up related objects may try to effect a release. All of these could race one another (and, in many cases, race other processes from other phases because all of these get to act immediately) as this code is currently written. Using a queue here lets us make sure there's only a single writer in this phase.
One notable consequence is that whatever acquires a lease **cannot write to it**: it is never the writer once it queues the lease for activation. It cannot write to any resources, either. Likewise, blueprints cannot write to resources while acquiring or releasing leases.
We may need to provide a mechanism so that blueprints and/or resource/lease holders can attach some storage to resources/leases for bookkeeping. For example, a blueprint might need to keep some kind of cache on a resource to help it manage state. I think we can cross that bridge when we come to it; nothing else would need to write to this storage, so it's technically straightforward to introduce such a mechanism if we need one.
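The consumer side of this queue shows up in `didActivate()` later in this file. As a rough sketch of how a worker might drain the queue (the worker context and the "mark consumed" step are assumptions; only `DrydockCommandQuery` and its filters appear in this change):
```
// Sketch of a queue consumer.
$commands = id(new DrydockCommandQuery())
  ->setViewer(PhabricatorUser::getOmnipotentUser())
  ->withTargetPHIDs(array($resource->getPHID()))
  ->withConsumed(false)
  ->execute();

foreach ($commands as $command) {
  // Apply the command (for example, release the resource), then mark it
  // consumed so no other writer acts on it; that API is not shown here.
}
```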
Test Plan:
- Viewed buttons in web UI, checked enabled/disabled states.
- Clicked the buttons.
- Saw commands show up in the command queue.
- Saw some daemon stuff get scheduled.
- Ran CLI tools, saw commands get consumed and resources/leases release.
Reviewers: hach-que, chad
Reviewed By: chad
Maniphest Tasks: T9252
Differential Revision: https://secure.phabricator.com/D14143
2015-09-23 16:42:08 +02:00
  // A resource can be released unless it has already been closed or destroyed.
  public function canRelease() {
    switch ($this->getStatus()) {
      case DrydockResourceStatus::STATUS_CLOSED:
      case DrydockResourceStatus::STATUS_DESTROYED:
        return false;
      default:
        return true;
    }
  }

  // Queue a DrydockResourceUpdateWorker for this resource so that pending
  // commands (like "release") are processed by a single writer.
  public function scheduleUpdate() {
    PhabricatorWorker::scheduleTask(
      'DrydockResourceUpdateWorker',
      array(
        'resourcePHID' => $this->getPHID(),
      ),
      array(
        'objectPHID' => $this->getPHID(),
      ));
  }

  // Check whether any commands arrived while this resource was not yet
  // active; if so, schedule an update worker to process them.
  private function didActivate() {
    $viewer = PhabricatorUser::getOmnipotentUser();

    $need_update = false;

    $commands = id(new DrydockCommandQuery())
      ->setViewer($viewer)
      ->withTargetPHIDs(array($this->getPHID()))
      ->withConsumed(false)
      ->execute();
    if ($commands) {
      $need_update = true;
    }

    if ($need_update) {
      $this->scheduleUpdate();
    }
  }

/* -( PhabricatorPolicyInterface )----------------------------------------- */

  public function getCapabilities() {
    return array(
      PhabricatorPolicyCapability::CAN_VIEW,
      PhabricatorPolicyCapability::CAN_EDIT,
    );
  }

  public function getPolicy($capability) {
    switch ($capability) {
      case PhabricatorPolicyCapability::CAN_VIEW:
      case PhabricatorPolicyCapability::CAN_EDIT:
        // TODO: Implement reasonable policies.
        return PhabricatorPolicies::getMostOpenPolicy();
    }
  }

  public function hasAutomaticCapability($capability, PhabricatorUser $viewer) {
    return false;
  }

  public function describeAutomaticCapability($capability) {
    return null;
  }

}