Summary: Depends on D19796. Simplify some timing code by using phutil_microseconds_since() instead of duplicate casting and arithmetic.
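A minimal before/after sketch of the pattern being removed (exact call sites vary; `do_something_slow()` is just a stand-in):
```
// Before: each call site duplicated the cast and arithmetic.
$start = microtime(true);
do_something_slow();
$duration_us = (int)(1000000 * (microtime(true) - $start));

// After: phutil_microseconds_since() does the same math in one call.
$start = microtime(true);
do_something_slow();
$duration_us = phutil_microseconds_since($start);
```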
Test Plan: Grepped for `1000000` to find these. Pulled, pushed, made a conduit call. This isn't exhaustive but it should be hard for these to break in a bad way since they're all just diagnostic.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D19797
Summary:
Depends on D19779. Ref T13216. The push logs currently record the "hostWait", which is roughly "locking + subprocess cost". We also record locking separately, so we can figure out "subprocess cost" alone by subtracting the lock costs.
However, the subprocess (normally `git receive-pack`) runs hooks, and we don't have an easy way to figure out how much time was spent doing actual `git` stuff vs spent doing commit hook processing. This would have been useful in diagnosing at least one recent issue.
Track at least a rough hook cost and record it in the push logs.
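Roughly, the idea is to wrap hook processing in the same sort of timer the push log already uses for "hostWait". A hedged sketch only; `setHookWait()` and `processCommitHooks()` are assumptions for illustration, not the literal patch:
```
// Sketch only: method names here are assumptions, not the real API.
$hook_start = microtime(true);

$this->processCommitHooks(); // stand-in for commit hook processing

// Record a rough hook cost alongside the existing hostWait/lock timings.
$event->setHookWait(phutil_microseconds_since($hook_start));
```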
Test Plan: Pushed to a repository, saw a reasonable hook cost appear in the database table.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13216
Differential Revision: https://secure.phabricator.com/D19780
Summary:
Depends on D19778. Ref T13216. See PHI943, PHI889, et al.
We currently have a push log and a pull log, but do not separately log intracluster synchronization events. We've encountered several specific cases where having this kind of log would be helpful:
- In PHI943, an install was accidentally aborting locks early. Having timing information in the sync log would let us identify this more quickly.
- In PHI889, an install hit an issue with `MaxStartups` configuration in `sshd`. A log would let us identify when this is an issue.
- In PHI889, I floated a "push the linux kernel + fetch timeout" theory. A sync log would let us see sync/fetch timeouts and confirm whether this is actually a problem in practice.
- A sync log will help us assess, develop, test, and monitor intracluster routing sync changes (likely those in T13211) in the future.
Some of these events are present in the pull log already, but only if they make it as far as running a `git upload-pack` subprocess (not the case with `MaxStartups` problems) -- and they can't record end-to-end timing.
No UI yet, I'll add that in a future change.
Test Plan:
- Forced all operations to synchronize by adding `|| true` to the version check.
- Pulled, got a sync log in the database.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13216
Differential Revision: https://secure.phabricator.com/D19779
Summary:
Ref T13216. When a repository is clustered, we run this cleanup code (to tell the repository to update, and log some timing information) on both nodes. Currently, we do slightly too much work, which is unnecessary and can be a bit confusing to human readers.
The double update message doesn't hurt anything, but there's no reason to write it twice.
Likewise, the second timing information update query doesn't do anything: there's no PushEvent object with the right identifier, so it just updates nothing. We don't need to run it, and it's confusing that we do.
Instead, only do these writes if we're actually the final node with the repository on it.
Test Plan: Added some logging, saw double writes/updates before the change and no doubles afterwards, with no other behavioral changes.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13216
Differential Revision: https://secure.phabricator.com/D19778
Summary:
See <https://hackerone.com/reports/435648>. We currently use 80-bit TOTP keys. The RFC suggests 128 as a minimum and recommends 160.
The math suggests that doing the hashing for an 80-bit key is hard (slightly beyond the reach of a highly motivated state actor, today) but there's no reason not to use 160 bits instead to put this completely out of reach.
See some additional discussion on the HackerOne report about enormous key sizes, number of required observations, etc.
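For scale: a base32 character carries 5 bits, so an 80-bit secret is 16 characters and a 160-bit secret is 32. A hedged sketch of generating a 160-bit secret (not the literal Phabricator code path; assumes PHP 7's `random_int()`):
```
// 32 characters x 5 bits per RFC 4648 base32 character = 160 bits.
$alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
$secret = '';
for ($ii = 0; $ii < 32; $ii++) {
  $secret .= $alphabet[random_int(0, strlen($alphabet) - 1)];
}
```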
Test Plan: Added a new 160-bit TOTP factor to Google Authenticator without issue.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D19792
Summary:
Ref T13216. We occasionally receive HackerOne reports concerned that you can select your username as a password. I suspect very few users actually do this and that this is mostly a compliance/checklist sort of issue, not a real security issue.
On this install, we have about 41,000 user accounts. Of these, 100 have their username as a password (account or VCS). A substantial subset of these are explicitly intentional ("demo", "bugmenot"), obvious test accounts (names containing "test", nonsensical strings of gibberish, things like "tryphab"), just strings of numbers, or clearly a "researcher" doing this on purpose (e.g., the name includes "pentest" or "xss" or similar).
So I'm not sure real users are actually very inclined to do this, and we can't really ever stop them from picking awful passwords anyway. But we //can// stop researchers from reporting that this is an issue.
Don't allow users to select passwords which contain words in a blocklist: their username, real name, email addresses, or the install's domain name. These words also aren't allowed to contain the password (that is, neither your password nor your username may be a substring of the other one). We also do a little normalization to try to split apart email addresses, domains, and real names, so I can't have "evan1234" as my password.
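A hedged sketch of the core check (the helper name is hypothetical, and the real blocklist assembly does more normalization than shown here):
```
function password_contains_blocked_word($password, array $blocklist) {
  $password = phutil_utf8_strtolower($password);
  foreach ($blocklist as $word) {
    $word = phutil_utf8_strtolower($word);
    if (!strlen($word)) {
      continue;
    }
    // Reject if either string contains the other, in either direction.
    if (strpos($password, $word) !== false ||
        strpos($word, $password) !== false) {
      return true;
    }
  }
  return false;
}
```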
Test Plan:
- Added unit tests and made them pass.
- Tried to set my password to a bunch of variations of my username / email / domain name / real name / etc, got rejected.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13216
Differential Revision: https://secure.phabricator.com/D19776
Summary:
Ref T13216. See PHI948. When you use the remarkup hint button to embed a meme with no text, you get `{meme src=X}`.
If the source is a GIF, we currently split the source apart into frame-by-frame images, process them, and stitch them back together. The end result is the same image we started with, but this process can be slow/expensive, and may timeout for sufficiently large GIFs.
Instead: when there's no text, just return the original image data.
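A hedged sketch of the short-circuit (variable and method names are illustrative):
```
if (!strlen($above_text) && !strlen($below_text)) {
  // With no text to composite, the output is byte-for-byte the same image,
  // so skip the per-frame GIF processing entirely.
  return $file->loadFileData();
}
```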
Test Plan:
- Used `{meme src=X}` with no text, got an image faster.
- Used `{meme src=X, above=...}` to add text, got an attempt to add text (which didn't get very far locally since I don't have GD configured).
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13216
Differential Revision: https://secure.phabricator.com/D19777
Summary:
Depends on D19773. See <https://hackerone.com/reports/434116>. You can currently vote for invalid options by submitting, e.g., `vote[]=12345`.
By doing this, you can see the responses, which is sort of theoretically a security problem? This is definitely a bug, regardless.
Instead, only allow users to vote for options which are actually part of the poll.
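A hedged sketch of the validation (identifiers are illustrative; the real controller differs in detail):
```
// Load the options which actually belong to this poll.
$options = id(new PhabricatorSlowvoteOption())->loadAllWhere(
  'pollID = %d',
  $poll->getID());
$valid_ids = mpull($options, 'getID', 'getID');

// Reject any submitted vote which is not a real option of this poll.
foreach ($request->getArr('vote') as $vote) {
  if (empty($valid_ids[$vote])) {
    throw new Exception(
      pht('Option %d is not an option in this poll.', $vote));
  }
}
```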
Test Plan:
- Tried to vote for invalid options by editing the form to `vote[]=12345` (got error).
- Tried to vote for invalid options by editing the radio buttons on a plurality poll into checkboxes, checking multiple boxes, and submitting (got error).
- Voted in approval and plurality polls the right way, from the main web UI and from the embed (`{V...}`) UI.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D19774
Summary:
See <https://hackerone.com/reports/434116>. Slowvote has a piece of Javascript that attempts to let you vote on `{V123}` polls inline.
It does not work: nothing ever triggers it (nothing renders a control with a `slowvote-option` sigil).
At least for now, just remove it. It has a completely separate pathway in the controller and both pathways are buggy, so this makes fixing them easier.
Test Plan: Voted in plurality and approval polls via Slowvote and the embedded widget.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D19773
Summary:
Ref T13210. Ref T11908. Add some basic test coverage for the new "%R" introduced in D19764, then convert LiskDAO to implement the "Database + Table Ref" interface.
To move forward, we need to convert all of these (where `%T` is not a table alias):
```counterexample
qsprintf($conn, '... %T ...', $thing->getTableName());
```
...to these:
```
qsprintf($conn, '... %R ...', $thing);
```
The new code is a little simpler (no `->getTableName()` call) which is sort of nice. But we also have a //lot// of `%T` so this is probably going to take a while.
(I'll hold this until after the release cut.)
Test Plan:
- Ran unit tests.
- Browsed around and edited some objects without issues. This change causes a reasonably large percentage of our total queries to use the new code since the LiskDAO builtin queries are some of the most commonly-constructed queries, although there are still ~700 callsites which need to be examined for possible conversion.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210, T11908
Differential Revision: https://secure.phabricator.com/D19765
Summary:
See <https://discourse.phabricator-community.org/t/diffusionrequest-regex-error/2057/>.
The intent of `[\d-,]` is "digits, hyphen, and comma" but `[x-y]` means "character range x-y".
Specify `[\d,-]` instead to disambiguate the hyphen as "literal hyphen", not a character range marker.
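For example (illustrative only), with the hyphen as the final character it can only match literally:
```
// "-" last in the class cannot form a range, so it is a literal hyphen;
// digits and commas behave exactly as before.
$ok = preg_match('/^[\d,-]+$/', '100-200,300'); // 1 (matches)
```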
Test Plan: I can't reproduce the original error as reported, but browsed around Diffusion for a bit.
Reviewers: amckinley, avivey
Reviewed By: avivey
Differential Revision: https://secure.phabricator.com/D19770
Summary:
Fixes T13208. See that task for details.
The `clone $query` line is safe if `$query` is a builtin query (like "open").
However, if it's a saved query we clone not only the query parameters but the ID, too. Then when we `save()` the query later, we overwrite the original query.
So this would happen in the database. First, you run a query and save it as the workboard default (query key "abc123"):
| 123 | abc123 | {"...xxx..."} |
Then we `clone` it and change the parameters, and `save()` it. But that causes an `UPDATE ... WHERE id = 123` and the table now looks like this:
| 123 | def456 | {"...yyy..."} |
What we want is to create a new query instead, with an `INSERT ...`:
| 123 | abc123 | {"...xxx..."} |
| 124 | def456 | {"...yyy..."} |
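A hedged sketch of the intent (the actual patch adds a `newCopy()` helper plus a `save()` guard, and the details differ):
```
$copy = clone $saved_query;
$copy->setID(null); // with no ID, save() issues an INSERT, not an UPDATE
$copy->setParameter('board', $board_phid); // hypothetical parameter change
$copy->save(); // new row; the original "abc123" row is untouched
```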
Test Plan:
- Followed reproduction steps from above.
- With just the new `save()` guard, hit the guard error.
- With the `newCopy()`, got a new copy of the query and "View as Query" remained functional without overwriting the original query row.
- Ran migration, saw an affected board get fixed.
Reviewers: amckinley, joshuaspence
Reviewed By: joshuaspence
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13208
Differential Revision: https://secure.phabricator.com/D19768
Summary:
Fixes T12145. Ref T13210. See PHI570. See PHI536.
Currently, when you give Drydock an Almanac host pool with more than one host, it never voluntarily builds a second host resource: there is no way to say "maximum X working copies per host" (only "maximum X global working copies") to make the first host overflow, and the allocator tries to pack resources as tightly as possible.
If you can force it to allocate the 2nd..Nth host, things will work reasonably well from there (it will spread working copies across the hosts randomly), but tricking it is very hard, especially before D19761.
To deal with this, give blueprints a new behavior around "supplemental allocations". The idea here is that a blueprint may decide that it would prefer to allocate a fresh new resource instead of allowing an otherwise valid acquisition to occur.
These supplemental allocations follow all the normal allocation rules (they can't exceed limits or actually replace existing resources), so they can only happen if there's free space in the resource pool. But a blueprint can elect for a supplemental allocation to provide a "grow the pool" hint.
The only useful policies here are probably "true" (immediately use all resources, like Almanac) or "false" (pack resources as efficiently as possible) but some other policies //might// be useful (perhaps "start growing the pool when we're getting a bit full even if we aren't at the limit yet, since our workload is bursty").
Then, give Almanac host resources a "true" policy (always allocate supplemental resources) so they use all hosts once a similar number of concurrent jobs arrive.
One aspect of this approach is that we only do supplemental resources if the normal allocation algorithm already decided that the best resource to acquire was part of the same blueprint. I started with an approach like "look at all the blueprints and see if any of them want to be greedy", but then a not-very-desirable blueprint would end up filling up its whole pool before we skipped the supplemental allocation part and ended up picking a different resource. That felt a bit silly and this feels a little cleaner and more focused.
Test Plan:
- Without changing the Almanac blueprint policy, allocated hosts. Got A, A, A, A, ... (second host never used).
- Changed the Almanac policy.
- Allocated hosts, got A, B, random mix of A and B.
- Destroyed B. Destroyed all leases on A. Allocated. Got A. This tests the "don't build a supplemental resource if there are no leases on the natural resource".
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: yelirekim, PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210, T12145
Differential Revision: https://secure.phabricator.com/D19762
Summary:
Ref T13210. Ref T12145. The "Almanac Host" blueprint currently hands out new leases on a given host even if the binding has been disabled.
Although there are some more complicated cases here (e.g., involving cleanup of the existing resource and existing leases), this one seems clear cut: if the binding has been disabled, we should stop handing out new leases on it.
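A hedged sketch of the filtering (`getBindings()` and `getIsDisabled()` are my reading of the Almanac API; the real blueprint code differs):
```
$usable = array();
foreach ($service->getBindings() as $binding) {
  if ($binding->getIsDisabled()) {
    // Disabled bindings should not receive new leases.
    continue;
  }
  $usable[] = $binding;
}
```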
Test Plan:
- Created a service with two hosts.
- Requested a lease, got host A.
- Requested more leases, always got host A (we never build a new host when we don't have to, and we currently never have to).
- Disabled the binding to host A.
- Requested a lease.
- Before patch: got host A.
- After patch: got host B.
- Also disabled the other binding to host B, requested a lease, got an indefinite wait for resources (which is expected and reasonable).
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: yelirekim, PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210, T12145
Differential Revision: https://secure.phabricator.com/D19761
Summary: Ref T13210. See PHI841. This mirrors D19509 for Differential.
Test Plan: Called `transaction.search` on a commit with a bunch of audit activity, got appropriate labels in the results.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19760
Summary:
Ref T13210. The comment action dropdown for audits has a heavy checkmark next to "Accept" and a heavy "X" next to "Raise Concern".
We previously removed similar marks in Differential in D19405 and that seems to have gone fine. For consistency, remove these too.
Test Plan: Viewed the comment action dropdown, no longer saw checkmark and X-mark.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19759
Summary:
Ref T13210. See PHI930. This translation is wrong: the parameter is a comma-separated list as a string, but the USEnglish translation provides alternatives. We can't select among alternatives based on a random string (it isn't a plurality value to let us select "chair" vs "chairs", and isn't a gender value to let us select "his profile" vs "her profile") so we get an error.
But the string itself is also misleading, since "bin/phd log --id A --id B --id C" will say "none of these are valid" if //any// of them are invalid.
Instead, just tell the user explicitly about the first problem.
Test Plan:
- Ran `bin/phd log --id` with good IDs (got logs) and bad IDs (got a sensible error).
- Ran `bin/phd log` with logs present (got logs) and with no logs at all (simulated; got an error).
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19755
Summary:
See T13212 for some context and discussion on this being revived.
See T11694 for original context.
Add a query constraint for lease owners and implement the conduit search method for Drydock leases.
Ref T11694. Fixes T13212.
Test Plan:
- Called the API method from conduit and browsed lease queries from the UI.
- Used the new "ownerPHIDs" constraint via API console.
{F5963044}
Reviewers: yelirekim, amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, epriestley
Maniphest Tasks: T11694, T13212
Differential Revision: https://secure.phabricator.com/D16594
Summary:
Depends on D19753. Ref T13210. This is a small optimization that saves us from waiting up to 15 seconds for a yield.
When there are no Working Copy resources and a new lease comes in, we'll allocate one and yield until it activates.
If activating it (SSH'ing and running `git clone`) takes less than 15 seconds, the resource will activate (say, at T+4) but the lease won't update again for a while (say, until T+15). This leaves us with a pointless wait (in this example, we're sitting around for 9 seconds when we could move forward).
To improve this a little bit, let resources wake up the lease update tasks that triggered allocation after they activate. In the best case, that task runs ~15 seconds sooner. In the worst case, the awaken is just a no-op.
With a more-full queue, this has a smaller effect (it's likely something else will run and be able to use the resource in those 9 seconds).
With already-activated resources, this has no effect (when resources are already activated, we can lease immediately).
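A hedged sketch of the wake-up (`PhabricatorWorker::awakenTaskIDs()` is the standard worker wake-up call; the attribute key used to remember the triggering task is an assumption):
```
// After the resource activates, poke the lease update task that triggered
// the allocation instead of letting it sleep out its 15-second yield.
$task_id = $resource->getAttribute('activation.awaken-task-id'); // assumed key
if ($task_id) {
  PhabricatorWorker::awakenTaskIDs(array($task_id));
}
```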
Test Plan:
- Cleaned up all working copy resources.
- Requested a new "A" working copy.
- Before patch: got a working copy after 17-18 seconds, most of which was spent yielded.
- After patch: got a working copy after 3-4 seconds.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19754
Summary:
Depends on D19752. Ref T13210. If resources take a long time to reclaim/destroy (normally, more than 15 seconds) a single new lease may update several times during the reclaim/destroy process and end up reclaiming multiple resources.
Instead: after a lease triggers a reclaim, prevent it from triggering another reclaim as long as the resource it is reclaiming hasn't finished its reclaim/destroy cycle. Basically, each lease only gets to destroy one resource at a time.
Test Plan:
- Added a `sleep(120)` to `destroyResource()` to simulate a long reclaim/destroy cycle.
- Allocated A, A, A working copies. Leased a B working copy.
- Before patch: saw "B" lease destroy all three "A" working copies after ~0, ~15, and ~30 seconds, then build a new "B" resource after ~120 seconds (when the first reclaim/destroy finished).
- After patch: saw "B" lease destroy one "A" working copy after ~0 seconds, then wait patiently until it finished up, then build a new "B" resource.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19753
Summary:
Depends on D19751. Ref T13210. When Drydock needs to reclaim an existing unused resource in order to build a new resource to satisfy a lease, the lease which triggered the reclaim currently gets thrown back into the pool with a 15-second yield.
If the queue is pretty empty and the reclaim is quick, this can mean that we spend up to 15 extra seconds just waiting for the lease update task to get another shot at building a resource (the resource reclaim may complete in a second or two, but nothing else happens until the yield ends).
Instead, when a lease triggers a reclaim, have the reclaim reawaken the lease task when it completes. In the best case, this saves us 15 seconds of waiting. In other cases (the task already completed some other way, the resource gets claimed before the lease gets to it), it's harmless.
Test Plan:
- Allocated A, A, A working copies with limit 3. Leased a B working copy.
- Before patch: allocation took ~32 seconds.
- After patch: allocation takes ~17 seconds (i.e., about 15 seconds less).
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19752
Summary:
Depends on D19750. See T13210. The `bin/drydock lease` command makes it easier to request ad-hoc leases, but currently takes lease attributes in the form `--attributes x=y,a=b`.
This was okay for all leases at the time, but doesn't really work for modern WorkingCopy resources since they take a `repositories.map` which has a dictionary as a value. You can't specify that with `repositories.map=...`.
Instead, point `--attributes` at a JSON file or use `--attributes -` to read from stdin.
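A hedged sketch of the new input handling (flag wiring omitted):
```
if ($attributes_file === '-') {
  $data = file_get_contents('php://stdin');
} else {
  $data = Filesystem::readFile($attributes_file);
}

// JSON can express nested values like "repositories.map", which the old
// "key=value,key=value" format could not.
$attributes = phutil_json_decode($data);
```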
Test Plan: Used `--attributes` with a file and stdin to allocate working copy leases with repositories.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Differential Revision: https://secure.phabricator.com/D19751
Summary:
Ref T13210. We currently "git reset --hard HEAD" during working copy leasing, mostly by convention/familiarity.
However, this command does not work in an empty repository, because there is no HEAD yet.
The command "git reset --hard" appears to have the same meaning and effect in all cases, except that it also works correctly in an empty repository.
The manual suggests that omitting HEAD should be the same as specifying HEAD, too:
> The <tree-ish>/<commit> defaults to HEAD in all forms.
Test Plan: Successfully leased a working copy for an empty repository using Drydock.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: yelirekim, PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19750
Summary:
Ref T13210. See PHI944. When parsing certain large diffs (the case in PHI944 is a 2.5-million-line JSON diff), we spend ~66% of runtime and ~80% of memory doing copy detection (the little yellow bar which shows up to give you a hint that code was moved around within a diff).
This is pretty much pointless and copy hints are almost certainly never useful on large changes. Instead, just bail if the change is larger than some arbitrary "probably too big for copy hints to ever be useful" threshold (here, 65535 lines).
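A hedged sketch of the guard (names and placement are illustrative; `detectCopiedCode()` is a stand-in for the expensive pass):
```
if ($total_line_count > 65535) {
  // Copy hints are never useful on changes this large; keep the diff as-is
  // and skip the expensive detection pass.
  return $changes;
}
return $this->detectCopiedCode($changes); // stand-in for the expensive pass
```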
Test Plan:
Roughly, ran this against a 2.5 million line JSON diff:
```
$changes = id(new ArcanistDiffParser())->parseDiff($raw_diff);
$diff = DifferentialDiff::newFromRawChanges($viewer, $changes);
```
Before the changes, it took 20s + 2.5GB RAM to parse. After the changes, it took 7s + 500MB RAM to parse.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19748
Summary:
Ref T13210. See PHI937. This function datasource isn't quite implemented correctly: it doesn't resolve `packages(project)` properly, since the logic only handles users.
This blames back to D14013, where it looks like `packages(..)` was added mostly as a general nice-to-have as part of a larger modernization change.
Test Plan: Ran a `packages(project)` query in Differential, got accurate results (previously: no results).
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19747
Summary:
Depends on D19734. Ref T13202. Ref T13109. Ref T10884. See PHI905. See PHI889. We currently rank cluster nodes in three cases:
# when performing a write, we can go to any node (D19734 should make our ranking good);
# when performing a read, we can go to any node (currently random, but T10884 discusses ideas to improve our ranking);
# when performing an internal synchronization before a read or a write, we must go to an up-to-date node.
Currently, case (3) is not-exactly-deterministic but not random, and we won't spread intracluster traffic across the cluster evenly if, say, half of it is up to date and half of it is still synchronizing. Today, for a given write, I believe all nodes will tend to synchronize from whichever node first received the write.
Instead, shuffle the list and synchronize from any up-to-date node.
(I think we could improve upon this only by knowing which nodes actually have load and selecting the least-loaded -- doable, but not trivial.)
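A hedged sketch of the idea only (the variable names and version bookkeeping are assumptions, not the actual cluster API):
```
$up_to_date = array();
foreach ($nodes as $node) {
  if ($versions[$node] == $max_version) {
    $up_to_date[] = $node;
  }
}

// Previously a stable ordering tended to pick the same source every time;
// shuffling spreads intracluster fetch traffic across all up-to-date nodes.
shuffle($up_to_date);
$sync_source = head($up_to_date);
```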
Test Plan: Poked at it locally, will deploy to `secure`. This is hard to measure/test terribly convincingly.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202, T13109, T10884
Differential Revision: https://secure.phabricator.com/D19735
Summary:
Ref T13109. Ref T13202. See PHI905. See PHI889. When we receive a write to a repository cluster, we currently send it to a random writable node.
Instead, we can prefer:
- the node currently holding the write lock; or
- any node which is already up to date.
These should simply be better nodes to take writes in all cases. The write lock is global for the repository, so there's no scaling benefit to spreading writes across different nodes, and these particular nodes will be able to accept the write more quickly.
Test Plan:
- This is observable by using `fprintf(STDERR, "%s\n", ...)` in the logic, then running `git push`. I'd like to pull this routing logic out of `PhabricatorRepository` at some point, probably into a dedicated `ClusterURIQuery` sort of class, but that is a larger change.
- Added some `fprintf(...)` stuff around which nodes were being selected.
- Added a `sleep(10)` after grabbing the write lock.
- In one window, pushed. Then pushed in a second window.
- Saw the second window select the lock holder as the write target based on it currently holding the lock.
- Without a concurrent push, saw pushes select up-to-date nodes based on their up-to-date-ness.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: joshuaspence, timhirsh
Maniphest Tasks: T13202, T13109
Differential Revision: https://secure.phabricator.com/D19734
Summary:
See PHI920. Ref T13210. Since the HTML is just:
```
<a>View Inline</a><span>filename.txt</span>
```
...double-clicking "filename.txt" in email selects "Inlinefilename.txt".
Add a space to stop this. At least in Safari, a space between the tags is not sufficient (perhaps because the parent is a `<div>`?). I couldn't find an authoritative-seeming source on what the rules for this actually are, and adding a space here fixes the issue without changing the visual rendering, so just put it here.
Test Plan:
- Made an inline.
- Used `bin/mail show-outbound --id ... --dump-html` to dump the HTML.
- Double-clicked the filename.
{F5929186}
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19742
Summary:
Depends on D19743. Ref T13210. Since this command can easily dump a bunch of binary data (or just a huge long blob of nonsense) to stdout, default to requiring "--output <file>".
Using `--output -` will print to stdout.
Test Plan: Ran with: no `--output`, `--output file`, `--output -`, `--output - --overwrite`. Got sensible results or errors in all cases.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19744
Summary: Ref T13210. See PHI806. This enables basic export of revisions into flat data formats. This isn't too fancy, but just covers the basics since the driving use case isn't especially concerned about getting all the fields and details.
Test Plan: Exported some revisions into JSON, got sensible output.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19743
Summary: Depends on D19738. Ref T13210. Currently, when you use "--overwrite", we just //append// the new content. Instead, actually overwrite the file.
Test Plan: Used `--overwrite`, saw an actual clean overwrite instead of an append.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19739
Summary:
Ref T13210. Minor usability improvements to "bin/bulk export":
- Allow `--class task` to work (previously, only `--class ManiphestTaskSearchEngine` worked).
- If you run `--query jXIlzQyOYHPU`, don't require `--class`, since the query identifies the class on its own.
- Allow users to call `--query A --query B --query C` and get a union of all results.
Test Plan:
- Ran `--class task`, `--query A --query B`, `--query X` (with no `--class`), got good results.
- Ran various flavors of bad combinations (queries from different engines, invalid engines, query and class differing, ambiguous/invalid `--class` name) and got sensible errors.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13210
Differential Revision: https://secure.phabricator.com/D19738
Summary:
Ref T13202. See PHI900. Fixes T12814. Pholio currently builds HTML comments in an older way that can dump a bunch of over-escaped HTML into mail bodies.
Update the logic to be more similar to the Differential rendering logic and stop over-escaping things.
The result isn't perfect, but is dramatically less broken.
Test Plan: {F5919860}
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202, T12814
Differential Revision: https://secure.phabricator.com/D19733
Summary:
Ref T13202. See PHI906. This is a reasonable capability which we support in some other applications already.
(The only real reason not to support this is that it creates some clutter in the UI, but I think we're generally in better shape now than we were in the past, and we could make this UI collapse/fold at some point.)
Test Plan: Ran queries with a minimum date, a maximum date, both, and neither. Saw appropriate results in all cases.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19732
Summary:
Ref T13202. See <https://discourse.phabricator-community.org/t/2fa-input-box-isnt-hinted-as-a-password-so-browsers-suggest-auto-fills/1959>.
If browsers are autofilling this field, I think that's bad browser behavior, but on balance things are probably better if we hint it with `autocomplete="off"`; this is a minor concession.
Test Plan:
- I couldn't immediately get any browser to try to autofill this field (perhaps I've disabled autofill, or just not enabled it aggressively?), but this change didn't break anything.
- After the change, answered a TOTP prompt normally.
- After the change, inspected page content and saw `autocomplete="off"` on the `<input />` node.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19722
Summary:
Ref T13202. See PHI903 and PHI894. When a bot leaves 100 inline comments on the same file and the revision has a 30-member recipient list, we currently highlight the file 3000 times when building mail.
Instead, engage the parse cache so we highlight it once and reuse the cache 2,999 times.
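A hedged sketch of the caching pattern (`highlight_file_for_mail()` and `render_inline_for_mail()` are stand-ins, not real Phabricator functions; the actual change engages the existing changeset parse cache):
```
$highlighted = array();
$body = array();
foreach ($recipients as $recipient) {
  foreach ($inlines as $inline) {
    $changeset_id = $inline->getChangesetID();
    if (!isset($highlighted[$changeset_id])) {
      // Pay the highlighting cost once per file, not once per
      // (inline x recipient) pair.
      $highlighted[$changeset_id] = highlight_file_for_mail($inline);
    }
    $body[] = render_inline_for_mail($inline, $highlighted[$changeset_id]);
  }
}
```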
Test Plan:
- Added debugging code to stop after mail generation and show cache hits/misses.
- Left a bunch of inlines in a file.
- Ran the worker with `bin/worker execute --id ...`.
- Before change: saw 4 comments x 2 recipients = 8 cache misses.
- After change: saw 8 cache hits (cache already filled from web UI rendering).
- Removed debugging code, ran worker to completion.
- Used `bin/mail show-outbound --id ... --dump-html` to inspect resulting mail, saw no functional differences.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19721
Summary: Ref D18268. This typo cancelled itself out, and I can't find any other callers.
Test Plan: arc unit
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D19718
Summary:
Now on the blame page, identities get `avatar.png` and there are little tooltips that show a few characters of the committer identity string.
Also add a default icon for repo identities.
Test Plan: Loaded some blame pages for files touched by users with and without repo identities attached.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D19587
Summary:
Ref T13202. See PHI889. If the lock log is enabled, we can try to offer more details about lock holders.
When we fail to acquire a lock:
- check for recent acquisitions and suggest that this is a bottleneck issue;
- if there are no recent acquisitions, check for the last acquisition and print details about it (what process, how long ago, whether or not we believe it was released).
Test Plan:
- Enabled the lock log.
- Changed the lock wait time to 1 second.
- Added a `sleep(10)` after grabbing the lock.
- In one window, ran a Conduit call or a `git fetch`.
- In another window, ran another operation.
- Got useful/sensible errors for both ssh and web lock holders, for example:
> PhutilProxyException: Failed to acquire read lock after waiting 1 second(s). You may be able to retry later. (This lock was most recently acquired by a process (pid=12609, host=orbital-3.local, sapi=apache2handler, controller=PhabricatorConduitAPIController, method=diffusion.rawdiffquery) 3 second(s) ago. There is no record of this lock being released.)
> PhutilProxyException: Failed to acquire read lock after waiting 1 second(s). You may be able to retry later. (This lock was most recently acquired by a process (pid=65251, host=orbital-3.local, sapi=cli, argv=/Users/epriestley/dev/core/lib/phabricator/bin/ssh-exec --phabricator-ssh-device local.phacility.net --phabricator-ssh-key 2) 2 second(s) ago. There is no record of this lock being released.)
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19702
Summary:
Ref T13202. See PHI889. Update the read and write locks to the modern parameterized version, which handles hashing/normalization and can store better logs.
This parameterized mode was added in D19173 and has been used successfully for some time, but not all locks have switched over to it yet.
Test Plan:
- Added an `fprintf(STDERR, $full_name)` to the lock code.
- Pulled a repository.
- Saw sensible lock name on stdout before "acquired read lock...".
- Additional changes in this patch series will vet this more completely.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19701
Summary:
Ref T13202. In D19660, I added comments to Phriction and tweaked some CSS.
One of these tweaks was getting rid of an extra border which was rendering under the comment area. However, I took off too much and ended up removing borders from other applications.
I think we don't actually need this `setNoBorder()` stuff after all -- a later change was sufficient to stop the actual border I was trying to get rid of from rendering. So this mostly just reverts part of D19660.
This rendering still isn't perfect, but I'm fine leaving that for another day for now.
Test Plan:
- Viewed comment areas in Phriction. Saw correct number of borders (1).
- Viewed comment areas in Maniphest. Saw correct number of borders (1).
- Grepped for extraneous removed classes, no hits.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19684
Summary:
Depends on D19682. Ref T13202. We currently fatal when trying to render a timeline if:
- an install is fresh, so there are no pages yet, and you look at "/w/"; or
- you're looking at a Phriction page which doesn't exist (yet) like "/w/aadsflknadsflnf/".
Rendering a timeline and comment area doesn't make sense in these cases, so don't render them.
Test Plan: Hit both cases described above, got "new/empty page" prompts instead of fatals.
Reviewers: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19683
Summary:
Ref T13202. See PHI881. These stories have bad rendering methods, but they didn't previously render into the timeline (since Phriction documents didn't have a timeline).
Update the rendering to work.
The rendered outcome isn't great (it isn't very clear or explicit about exactly what moved where), but I'll fix that in a followup. This is a net improvement since it doesn't fatal the page, at least.
Test Plan:
- Moved page "X" to "Y".
- Viewed the old page "X".
- Before patch: bad timeline story would fatal rendering.
- After patch: story renders, at least, just not great.
Reviewers: amckinley
Maniphest Tasks: T13202
Differential Revision: https://secure.phabricator.com/D19682
Summary:
Depends on D19672. Ref T13197. See PHI873. This writes a trivial log when we begin acting on a working copy and makes it look reasonable in the UI.
This is mostly just to prove that logging works properly.
Test Plan: {F5887697}
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13197
Differential Revision: https://secure.phabricator.com/D19673