Enable Mercurial reads and writes over SSH
Summary:
Ref T2230. This is substantially more complicated than Git, but mostly because Mercurial's protocol is like 50 ad-hoc extensions cobbled together. Because we must decode protocol frames in order to determine whether a request is a read or a write, 90% of this is implementing a stream parser for the protocol.
Mercurial's own parser is simpler, but relies on blocking reads. Since we don't even have methods for blocking reads right now, and keeping the whole thing non-blocking is conceptually better anyway, I made the parser non-blocking. It ends up being a lot of code. I made an effort to cover it reasonably well with unit tests, and to make sure we fail closed (i.e., reject requests) if there are any parts of the protocol I got wrong.
A lot of the complexity is sharable with the HTTP stuff, so it ends up being not-so-bad, just very hard to verify by inspection as clearly correct.
Test Plan:
- Ran `hg clone` over SSH.
- Ran `hg fetch` over SSH.
- Ran `hg push` over SSH, to a read-only repo (error) and a read-write repo (success).
Reviewers: btrahan, asherkin
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2230
Differential Revision: https://secure.phabricator.com/D7553
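As a rough, hypothetical sketch of the non-blocking parsing idea described above (this is not the actual `DiffusionMercurialWireClientSSHProtocolChannel` implementation; the class and method names below are made up, and the `<key> <len>\n<value>` argument handling is elided):

```
<?php

final class ExampleHgWireParser {

  private $buffer = '';

  // Feed in whatever bytes happen to be available; returns any complete
  // messages, which may be none.
  public function didReceiveBytes($bytes) {
    $this->buffer .= $bytes;

    $messages = array();
    while (($message = $this->tryParseMessage()) !== null) {
      $messages[] = $message;
    }
    return $messages;
  }

  private function tryParseMessage() {
    $eol = strpos($this->buffer, "\n");
    if ($eol === false) {
      // The "<command>\n" frame is incomplete: report "nothing yet" instead
      // of blocking until more bytes arrive.
      return null;
    }

    $raw = substr($this->buffer, 0, $eol + 1);
    $this->buffer = (string)substr($this->buffer, $eol + 1);

    return array(
      'command' => rtrim($raw, "\n"),
      'arguments' => array(),
      'raw' => $raw,
    );
  }

}
```

Feeding it the bytes `capabilities\n`, for instance, would yield a single message with `'command' => 'capabilities'`; `willWriteMessageCallback()` at the bottom of this file then decides, per decoded message, whether write access is required.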
<?php

final class DiffusionMercurialServeSSHWorkflow
  extends DiffusionMercurialSSHWorkflow {

  protected $didSeeWrite;

  protected function didConstruct() {
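    // These arguments mirror the command Mercurial clients run on the remote
    // host over SSH: `hg -R <repository> serve --stdio`.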
    $this->setName('hg');
    $this->setArguments(
      array(
        array(
          'name' => 'repository',
          'short' => 'R',
          'param' => 'repo',
        ),
        array(
          'name' => 'stdio',
        ),
        array(
          'name' => 'command',
          'wildcard' => true,
        ),
      ));
  }

Prepare SSH connections for proxying
Summary:
Ref T7034.
In a cluster environment, when a user connects with a VCS request over SSH (like `git pull`), the receiving server may need to proxy it to a server which can actually satisfy the request.
In order to proxy the request, we need to know which repository the user is interested in accessing.
Split the SSH workflow into two steps:
# First, identify the repository.
# Then, execute the operation.
In the future, this will allow us to put a possible "proxy the whole thing somewhere else" step in the middle, mirroring the behavior of Conduit.
This is trivially easy in `git` and `hg`. Both identify the repository on the command line.
This is fiendishly complex in `svn`, for the same reasons that hosting SVN was hard in the first place. Specifically:
- The client doesn't tell us what it's after.
- To get it to tell us, we have to send it a server capabilities string //first//.
- We can't just start an `svnserve` process and read the repository out after a little while, because we may need to proxy the request once we figure out the repository.
- We can't consume the client protocol frame that tells us what the client wants, because when we start the real server request it won't know what the client is after if it never receives that frame.
- On the other hand, we must consume the second copy of the server protocol frame that would be sent to the client, or they'll get two "HELLO" messages and not know what to do.
The approach here is straightforward, but the implementation is not trivial. Roughly:
- Start `svnserve`, read the "hello" frame from it.
- Kill `svnserve`.
- Send the "hello" to the client.
- Wait for the client to send us "I want repository X".
- Save the message it sent us in the "peekBuffer".
- Return "this is a request for repository X", so we can proxy it.
Then, to continue the request:
- Start the real `svnserve`.
- Read the "hello" frame from it and throw it away.
- Write the data in the "peekBuffer" to it, as though we'd just received it from the client.
- State of the world is normal again, so we can continue.
Also fixed some other issues:
- SVN could choke if `repository.default-local-path` contained extra slashes.
- PHP might emit some complaints when executing the commit hook; silence those.
Test Plan: Pushed and pulled repositories in SVN, Mercurial and Git.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T7034
Differential Revision: https://secure.phabricator.com/D11541
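As a rough, hypothetical illustration of the "peekBuffer" replay idea described above (this is not the actual Subversion workflow code; the class below is made up):

```
<?php

final class ExamplePeekableStream {

  private $peekBuffer = '';
  private $source;

  public function __construct($source) {
    // $source is any readable stream resource (for example, STDIN on the
    // SSH channel).
    $this->source = $source;
  }

  // Read bytes while identifying the repository, remembering them so they
  // can be replayed later.
  public function peek($length) {
    $bytes = (string)fread($this->source, $length);
    $this->peekBuffer .= $bytes;
    return $bytes;
  }

  // Once the real server process is running, normal reads drain the peek
  // buffer first, then fall through to the live stream, so the downstream
  // server sees an unbroken protocol stream.
  public function read($length) {
    if (strlen($this->peekBuffer)) {
      $result = substr($this->peekBuffer, 0, $length);
      $this->peekBuffer = (string)substr($this->peekBuffer, $length);
      return $result;
    }
    return (string)fread($this->source, $length);
  }

}
```

The actual workflow additionally discards the duplicate "hello" frame from the restarted `svnserve`, as described in the steps above.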

  protected function identifyRepository() {
    $args = $this->getArgs();
    $path = $args->getArg('repository');
    return $this->loadRepositoryWithPath($path);
  }

  protected function executeRepositoryOperations() {
    $repository = $this->getRepository();
    $args = $this->getArgs();

    if (!$args->getArg('stdio')) {
      throw new Exception(pht('Expected `%s`!', 'hg ... --stdio'));
    }

    if ($args->getArg('command') !== array('serve')) {
      throw new Exception(pht('Expected `%s`!', 'hg ... serve'));
    }

    if ($this->shouldProxy()) {
      $command = $this->getProxyCommand();
    } else {
      $command = csprintf(
        'hg -R %s serve --stdio',
        $repository->getLocalPath());
    }

Add "phd.user" with `sudo` hooks for SSH/HTTP writes
Summary:
Ref T2230. When fully set up, we have up to three users who all need to write into the repositories:
- The webserver needs to write for HTTP receives.
- The SSH user needs to write for SSH receives.
- The daemons need to write for "git fetch", "git clone", etc.
These three users don't need to be different, but in practice they are unlikely to all be the same user. If for no other reason, making them all the same user requires you to "git clone httpd@host.com", and installs are likely to prefer "git clone git@host.com".
Using three different users also allows better privilege separation. Particularly, the daemon user can be the //only// user with write access to the repositories. The webserver and SSH user can accomplish their writes through `sudo`, with a whitelisted set of commands. This means that even if you compromise the `ssh` user, you need to find a way to escalate from there to the daemon user in order to, e.g., write arbitrary stuff into the repository or bypass commit hooks.
This lays some of the groundwork for a highly-separated configuration where the SSH and HTTP users have the fewest privileges possible and use `sudo` to interact with repositories. Some future work which might make sense:
- Make `bin/phd` respect this (require it to start as the right user, or as root and then drop privileges, if this configuration is set).
- Execute all `git/hg/svn` commands via sudo?
Users aren't expected to configure this yet, so I haven't written any documentation.
Test Plan:
Added an SSH user ("dweller") and gave it sudo by adding this to `/etc/sudoers`:
dweller ALL=(epriestley) SETENV: NOPASSWD: /usr/bin/git-upload-pack, /usr/bin/git-receive-pack
Then I ran git pushes and pulls over SSH via "dweller@localhost". They successfully interacted with the repository on disk as the "epriestley" user.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2230
Differential Revision: https://secure.phabricator.com/D7589
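For the Mercurial workflow in this file, the whitelist would presumably need to cover the `hg` binary rather than the Git binaries, along the lines of (usernames and path purely illustrative):
  dweller ALL=(epriestley) SETENV: NOPASSWD: /usr/bin/hg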

    $command = PhabricatorDaemon::sudoCommandAsDaemonUser($command);
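    // $command now runs `hg ... serve --stdio` through `sudo` as the daemon
    // user (see the "phd.user" commit message above), so repository writes
    // happen with the daemon user's privileges.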

    $future = id(new ExecFuture('%C', $command))
      ->setEnv($this->getEnvironment());

    $io_channel = $this->getIOChannel();
    $protocol_channel = new DiffusionMercurialWireClientSSHProtocolChannel(
      $io_channel);

    $err = id($this->newPassthruCommand())
      ->setIOChannel($protocol_channel)
      ->setCommandChannelFromExecFuture($future)
      ->setWillWriteCallback(array($this, 'willWriteMessageCallback'))
      ->execute();

    // TODO: It's apparently technically possible to communicate errors to
    // Mercurial over SSH by writing a special "\n<error>\n-\n" string. However,
    // my attempt to implement that resulted in Mercurial closing the socket and
    // then hanging, without showing the error. This might be an issue on our
    // side (we need to close our half of the socket?), or maybe the code
    // for this in Mercurial doesn't actually work, or maybe something else
    // is afoot. At some point, we should look into doing this more cleanly.
    // For now, when we, e.g., reject writes for policy reasons, the user will
    // see "abort: unexpected response: empty string" after the diagnostically
    // useful, e.g., "remote: This repository is read-only over SSH." message.

    if (!$err && $this->didSeeWrite) {
      $repository->writeStatusMessage(
        PhabricatorRepositoryStatusMessage::TYPE_NEEDS_UPDATE,
        PhabricatorRepositoryStatusMessage::CODE_OKAY);
    }

    return $err;
  }

  public function willWriteMessageCallback(
    PhabricatorSSHPassthruCommand $command,
    $message) {
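    // $message is one decoded frame from the protocol channel above: an
    // array with 'command', 'arguments' and 'raw' keys.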

    $command = $message['command'];

    // Check if this is a readonly command.

    $is_readonly = false;
    if ($command == 'batch') {
      $cmds = idx($message['arguments'], 'cmds');
      if (DiffusionMercurialWireProtocol::isReadOnlyBatchCommand($cmds)) {
        $is_readonly = true;
      }
    } else if (DiffusionMercurialWireProtocol::isReadOnlyCommand($command)) {
      $is_readonly = true;
    }

    if (!$is_readonly) {
      $this->requireWriteAccess();
      $this->didSeeWrite = true;
    }

    // If we're good, return the raw message data.
    return $message['raw'];
  }

}