Summary: See PHI1319. Ref T13291. Bump the remarkup cache version, since the old JIRA / Asana rules may exist in the partial cached representation of remarkup blocks from older versions.
Test Plan: Typed some comments with various formatting, saw remarkup work fine.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13291
Differential Revision: https://secure.phabricator.com/D20619
Summary:
Ref T13328. Currently, we read from `mysqldump` something like this:
```
until (done) {
  for (100 ms) {
    mysqldump > in-memory-buffer;
  }
  in-memory-buffer > disk;
}
```
This general structure isn't great. In this use case, where we're streaming a large amount of data from a source to a sink, we'd prefer to have a "select()"-like way to interact with futures, so our code is called after every read (or maybe once some small buffer fills up, if we want to do the writes in larger chunks).
We don't currently have this (`FutureIterator` can wake up every X milliseconds, or on future exit, but, today, can not wake for readable futures), so we may buffer an arbitrary amount of data into memory (however much data `mysqldump` can write in 100ms).
Reduce the update interval from 100ms to 10ms, and limit the buffer size to 32MB. This effectively imposes an artificial 3,200MB/sec limit on throughput, but hopefully that's fast enough that we'll have a "wake on readable" mechanism by the time it's a problem.
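A minimal sketch of the bounded-buffer loop this change moves toward, using plain PHP streams instead of the actual Future-based pipeline (the source command, chunk size, and output path are assumptions for illustration):
```
<?php
// Illustration only: stream a subprocess to disk without letting the
// in-memory buffer grow past a fixed cap. The real code reads from a
// future on a short wait instead of blocking fread() calls.
$source = popen('mysqldump --help', 'r'); // stand-in for the real dump command
$sink   = fopen('/tmp/dump.sql', 'w');

$buffer = '';
$limit  = 32 * 1024 * 1024; // 32MB cap from this change

while (!feof($source)) {
  $chunk = fread($source, 64 * 1024);
  if ($chunk === false || $chunk === '') {
    continue;
  }
  $buffer .= $chunk;
  if (strlen($buffer) >= $limit) {
    fwrite($sink, $buffer); // flush before the buffer gets any larger
    $buffer = '';
  }
}

fwrite($sink, $buffer); // flush whatever remains
pclose($source);
fclose($sink);
```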
Test Plan:
- Replaced `mysqldump` with `cat /dev/zero` as the source command, to get fast input.
- Ran `bin/storage dump` with `var_dump()` on the buffer size.
- Before change: saw arbitrarily large buffers (300MB+).
- After change: saw consistent maximum buffer size of 32MB.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13328
Differential Revision: https://secure.phabricator.com/D20617
Summary: Ref T13321. The daemons no longer write PID files, so we no longer need to pass any of this stuff to them.
Test Plan: Grepped for affected symbols.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13321
Differential Revision: https://secure.phabricator.com/D20608
Summary:
Fixes T13304. Shell pipes and redirects do not have robust behavior when errors occur. We provide "--compress" and "--output" flags as robust alternatives, but do not currently recommend their use.
- Recommend their use, since their error handling behavior is more robust in the face of issues like full disks.
- If "--compress" is provided but won't work because the "zlib" extension is missing, raise an explicit error. I believe this extension is very common and this error should be rare. If that turns out to be untrue, we could take another look at this.
- Also, verify some flag usage sooner so we can exit with an error faster if you mistype a "bin/storage dump" command.
Test Plan: Read documentation, hit affected error cases, did a dump and spot-checked that it came out sane looking.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13304
Differential Revision: https://secure.phabricator.com/D20572
Summary:
See downstream <https://phabricator.wikimedia.org/T902>. Currently, timezones are rendered with their raw internal names (like `America/Los_Angeles`), which include underscores.
Replacing underscores with spaces makes these names more human-readable (and is perhaps meaningfully better for things like screen readers, although this is pure speculation).
There's some vague argument against this, like "administrators may need to set a raw internal value in `phabricator.timezone` and this could mislead them", but we already give a pretty good error message if you do this and could improve hinting if necessary.
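The transformation itself is a display-time string replacement; a minimal sketch (the function name is illustrative, not the actual callsite):
```
<?php
// Display-time only: the stored configuration value keeps the raw
// internal name, e.g. "America/Los_Angeles".
function render_timezone_label($raw_name) {
  return str_replace('_', ' ', $raw_name);
}

echo render_timezone_label('America/Los_Angeles'); // America/Los Angeles
```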
Test Plan: Viewed timezone list in {nav Settings} and the timezone "reconcile" dialog, saw a more-readable "Los Angeles".
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20559
Summary:
Ref T13294. An install is interested in a way to easily answer audit-focused questions like "what edits were made to any Herald rule in Q1 2019?".
We can answer this kind of question with a more granular version of feed that focuses on being exhaustive rather than being human-readable.
This starts a rough version of it and deals with the two major tricky pieces: transactions are in a lot of different tables; and paging across them is not trivial.
To solve "lots of tables", we just query every table. There's a little bit of sleight-of-hand to get this working, but nothing too awful.
To solve "paging is hard", we order by "<dateCreated, phid>". The "phid" part of this order doesn't have much meaning, but it lets us put every transaction in a single, stable, global order and identify a place in that ordering given only one transaction PHID.
Test Plan: {F6463076}
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13294
Differential Revision: https://secure.phabricator.com/D20531
Summary:
See PHI985. I think we pretty much need to start applying language-specific rules, but we can apply at least one more relatively language-agnostic rule: don't match lines which are indented 3+ levels.
In C++, we may have symbols like this:
```
class X {
public:
  int m() { ... }
}
```
...but I believe no mainstream language puts symbol definitions 3+ levels deep.
Also clean up some of the tab handling very slightly.
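A sketch of the kind of indentation check this adds; it assumes tabs have already been expanded and that one indent level is two spaces, both of which are assumptions for illustration:
```
<?php
// Skip a line as a symbol-context candidate if it is indented 3+ levels.
function is_too_deeply_indented($line, $indent_width = 2) {
  $matches = null;
  preg_match('/^ */', $line, $matches);
  $depth = strlen($matches[0]) / $indent_width;
  return ($depth >= 3);
}

var_dump(is_too_deeply_indented('  int m() { ... }'));      // false: 1 level
var_dump(is_too_deeply_indented('      return inner();'));  // true: 3 levels
```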
Test Plan: Tests pass, looked at some C++ code and got slightly better (but still not great) matches.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20479
Summary:
Depends on D20380. Ref T8093. When prototypes are enabled, inject a (hopefully?) no-op proxy into the Git wire protocol.
This proxy decodes "git upload-pack" and allows the list of references to be rewritten, in a similar way to how we already proxy the Subversion protocol to rewrite URIs and proxy the Mercurial protocol to distinguish between read and write operations.
The piece we care about comes at the beginning, and looks like this:
```
<frame-length><ref-hash> <ref-name>\0<server-capabilities>\n
<frame-length><ref-hash> <ref-name>\n
<frame-length><ref-hash> <ref-name>\n
...
<0000>
```
We can add, remove, or modify this section to make it appear that the server has different refs than the refs that exist on disk.
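Each `<frame-length>` is a four-character hex length that counts the payload plus the four length characters themselves, and `0000` is the flush frame that terminates the list. A minimal sketch of reframing a rewritten ref line (this is the general pkt-line rule, not the proxy's actual code):
```
<?php
// Wrap a payload in a git pkt-line frame: four hex digits giving the total
// frame length (payload bytes + the 4 length characters), then the payload.
function pkt_line($payload) {
  return sprintf('%04x', strlen($payload) + 4).$payload;
}

$hash = str_repeat('a', 40); // hypothetical ref hash
echo pkt_line("{$hash} refs/heads/master\n");
echo "0000"; // flush frame: ends the ref advertisement
```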
Things I have tried:
- `git ls-remote`
- `git ls-remote` where the server hides some refs.
- `git fetch` where the fetch is a no-op.
Things I have not tried:
- `git fetch` where the fetch is not a no-op.
- Tricking things into doing protocol v2. Or: I tried this, but wasn't successful. In v2, additional "\0" tricks are used to hide data in the capabilities, I think?
- `git ls-remote` where we rewrite/hide the first ref in the list, and need to move the capabilities frame elsewhere.
- `git ls-remote` where the server has no refs at all, or we remove every ref.
So the "interesting" piece of this works, but it almost certainly needs some cleanup to survive interaction with the real world.
Test Plan: See above.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T8093
Differential Revision: https://secure.phabricator.com/D20381
Summary:
Ref T8093. Support dumping the protocol bytes to a side channel logfile, as a precursor to parsing the protocol and rewriting protocol frames to virtualize refs.
The protocol itself is mostly ASCII text so the raw protocol bytes are pretty comprehensible.
Test Plan:
{F6363221}
{F6363222}
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T8093
Differential Revision: https://secure.phabricator.com/D20380
Summary:
Depends on D20411. Ref T13272. Dashboards and panels have new indexes (Ferret and usage edges) that need a rebuild.
For large datasets like commits we have the "activity" flow in T11932, but these rebuilds shouldn't take more than a few minutes on any realistic install, so we should be able to just queue them up as migrations.
Let migrations insert a job to basically run `bin/search index --type SomeObjectType`, then do that for dashboards and panels.
(I'll do Herald rules in a followup too, but I want to tweak one indexing thing there.)
Test Plan: Ran the migration, ran `bin/phd debug task`, saw everything get indexed with no manual intervention.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13272
Differential Revision: https://secure.phabricator.com/D20412
Summary:
See PHI1182. Ref T13266. The recent fixes didn't quite cover the case where you have a query, but order by something other than relevance, and page forward.
Refine the tests around building/selecting these columns and paging values a little bit to be more specific about what they care about.
Test Plan:
Executed queries, then went to "Next Page", for:
- query text, non-relevance order.
- query text, relevance order.
- no query text, non-relevance order.
- no query text, relevance order.
Also, made an API call similar to the one in PHI1182.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13266
Differential Revision: https://secure.phabricator.com/D20354
Summary: See PHI1180. Currently, when we failover to a replica, we may not log the failure. Failovers are serious business and bad news, so emit a log even if we are able to connect to the replica.
Test Plan:
Configured a bogus master and a good replica:
```
$ ./bin/mail list-outbound
[2019-03-29 16:26:09] PHLOG: 'Retrying (attempt 1) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:26:19] PHLOG: 'Retrying (attempt 2) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:26:29] EXCEPTION: (PhutilProxyException) Failed to connect to master database ("local_config"), failing over into read-only mode. {>} (AphrontConnectionQueryException) Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out. at [<phutil>/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:362]
<...snip backtrace...>
3945 Voided email rP04f9e72cbd10: Don't subscribe bots implicitly when they act on objects, or when they are…
3946 Voided email rPdf53d72e794c: Allow "Move Tasks to Column..." to prompt for MFA
3947 Voided email rP492b03628f19: Fix a typo in Drydock "Land" operations
3948 Voided email rPb469a5134ddd: Allow "SMTP" and "Sendmail" mailers to have "Message-ID" behavior configured in…
3949 Voided email rPa6fd8f04792d: When performing complex edits, pause sub-editors before they publish to…
...
```
Configured a bogus master and a bogus replica:
```
$ ./bin/mail list-outbound
[2019-03-29 16:26:57] PHLOG: 'Retrying (attempt 1) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:27:07] PHLOG: 'Retrying (attempt 2) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:27:27] PHLOG: 'Retrying (attempt 1) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.3 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:27:37] PHLOG: 'Retrying (attempt 2) after connection failure ("AphrontConnectionQueryException", #2002): Attempt to connect to root@127.0.0.3 failed with error #2002: Operation timed out.' at [/Users/epriestley/dev/core/lib/libphutil/src/aphront/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:124]
[2019-03-29 16:27:47] EXCEPTION: (PhabricatorClusterStrandedException) Unable to establish a connection to any database host (while trying "local_config"). All masters and replicas are completely unreachable.
AphrontConnectionQueryException: Attempt to connect to root@127.0.0.2 failed with error #2002: Operation timed out. at [<phabricator>/src/infrastructure/storage/lisk/PhabricatorLiskDAO.php:177]
<...snip backtrace...>
```
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20351
Summary:
Ref T13091. The Ferret "rank" column is a function of the query text and looks something like `SELECT ..., 2 + 2 AS rank, ...`.
You can't apply conditions to this kind of dynamic column with a WHERE clause: you get a slightly unhelpful error like "column rank unknown in where clause". You must use HAVING:
```
mysql> SELECT 2 + 2 AS x WHERE x = 4;
ERROR 1054 (42S22): Unknown column 'x' in 'where clause'
mysql> SELECT 2 + 2 AS x HAVING x = 4;
+---+
| x |
+---+
| 4 |
+---+
1 row in set (0.00 sec)
```
Add a flag to paging column definitions to let them specify that they must be applied with HAVING, then apply the whole paging clause with HAVING if any column requires HAVING.
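A sketch of the selection rule (the column flag and structure here are illustrative, not the actual query class API):
```
<?php
// If any paging column must be applied with HAVING (like the dynamic
// "rank" alias), apply the entire paging clause with HAVING so every
// column in the clause can reference SELECT aliases.
function paging_clause_keyword(array $columns) {
  foreach ($columns as $column) {
    if (!empty($column['having'])) {
      return 'HAVING';
    }
  }
  return 'WHERE';
}

$columns = array(
  array('column' => 'rank', 'having' => true),
  array('column' => 'id'),
);
echo paging_clause_keyword($columns); // HAVING
```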
Test Plan:
- In Maniphest, ran a fulltext search matching more than 100 results, ordered by "Relevance", then clicked "Next Page".
- Before patch: query with `... WHERE rank > 123 OR ...` caused MySQL error because `rank` is not a WHERE-able column.
- After patch: query builds as `... HAVING rank > 123 OR ...`, pages properly, no MySQL error.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13091
Differential Revision: https://secure.phabricator.com/D20298
Summary:
See downstream <https://phabricator.wikimedia.org/T210482>.
On mobile, the task graph can take up most of the screen. Hide it on devices. Keep it on the standalone view if you're really dedicated and willing to rotate your phone or whatever to see the lines.
Test Plan: Dragged window real narrow, saw graph hide.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20313
Summary: Ref T13266. We never page these queries, and previously never reached the "nextPage()" method. The call order changed recently and this method is now reachable. For now, just no-op it rather than throwing.
Summary:
Ref T13091. In Differential, if you provide a query and "Sort by: Relevance", we build a query like this:
```
((SELECT revision.* FROM ... ORDER BY rank) UNION ALL (SELECT revision.* FROM ... ORDER BY rank)) ORDER BY rank
```
The internal "ORDER BY rank" is technically redundant (probably?), but doesn't hurt anything, and makes construction easier.
The problem is that the outer "ORDER BY rank" at the end, which attempts to order the results of the two parts of the UNION, can't actually order them, since `rank` wasn't selected.
(The column isn't actually "rank", which //is// selected -- it's the document modified/created subcolumns, which are not.)
To fix this, actually select the fulltext columns into the result set.
Test Plan:
- Ran a non-empty fulltext query in Differential with "Bucket: Required Action" selected so the UNION construction fired.
- Ran normal queries in Maniphest and global search.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13091
Differential Revision: https://secure.phabricator.com/D20297
Summary:
Ref T13091. If you "Order By: Relevance" but don't actually specify a query, we currently raise a bare exception.
This operation is sort of silly/pointless, but it seems like it's probably best to just return the results for the other constraints in the fallback order (usually, by ID). Alternatively, we could raise a non-bare exception here ("You need to provide a fulltext query to order by relevance.")
Test Plan: Queried tasks by relevance with no actual query text.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13091
Differential Revision: https://secure.phabricator.com/D20296
Summary:
Ref T13259. Currently, visiting a page that executes a query with an invalid cursor raises a bare exception that escapes to top level.
Catch this a little sooner and tailor the page a bit.
Test Plan: Visited `/maniphest/?after=335234234223`, saw a nicer exception page.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13259
Differential Revision: https://secure.phabricator.com/D20295
Summary:
Ref T13259. Currently, queries set a flag and return a partial result set when they overheat. This is mostly okay:
- It's very unusual for queries to overheat if they don't have a real viewer.
- Overheating is rare in general.
- In most cases where queries can overheat, the context is a SearchEngine UI, which handles this properly.
In T13259, we hit a case where a query with an omnipotent viewer can overheat: if you have more than 1,000 consecutive commits in the database with invalid `repositoryID` values, we'll overheat and bail out. This is pretty bad, since we don't process everything.
Change this behavior:
- Throw by default, so this stuff doesn't slip through the cracks.
- Handle the SearchEngine case explicitly ("it's okay to overheat, we'll handle it").
- Make `QueryIterator` disable overheating behavior: if we're iterating over all objects, we want to hit the whole table even if most of it is garbage.
There are some cases where this might cause new exception behavior that we don't necessarily want. For example, in Owners, each package shows "recent commits in this package". If you can't see the first 1,000 recent commits, you'd previously get a slow page with no results. Now you'll probably get an exception.
If these crop up, I think the best approach for now is to deal with them on a case-by-case basis and see how far we get. In the "Owners" case, it might be good to query by repositories you can see first, then query by commits in the package in those repositories. That should give us a better outcome than any generic behavior we could implement.
Test Plan:
- Added 100000 to all repositoryID values for commits on my local install.
- Before making changes, ran `bin/repository rebuild-identities --all --trace`. Saw the script process 1,000 rows and exit silently.
- Applied the first part ("throw by default") and ran `bin/repository rebuild-identities`. Saw the script process 1,000 rows, then raise an exception.
- Applied the second part ("disable for queryiterator") and ran the script again. Saw the script process all 15,000 rows without issues (although none are valid and none actually load).
- Viewed Diffusion, saw appropriate NUX / "overheated" UIs.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13259
Differential Revision: https://secure.phabricator.com/D20294
Summary:
Depends on D20292. Ref T13259. This converts the rest of the `getPagingValueMap()` callsites to operate on internal cursors instead.
These are pretty one-off for the most part, so I'll annotate them inline.
Test Plan:
- Grouped tasks by project, sorted by title, paged through them, saw consistent outcomes.
- Queried edges with "edge.search", paged through them using the "after" cursor.
- Poked around the other stuff without catching any brokenness.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13259
Differential Revision: https://secure.phabricator.com/D20293
Summary:
Depends on D20291. Ref T13259. Move all the simple cases (where paging depends only on the partial object and does not depend on keys) to a simple wrapper.
This leaves a smaller set of more complex cases where we care about external data or which keys were requested that I'll convert in followups.
Test Plan: Poked at things, but a lot of stuff is still broken until everything is converted.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13259
Differential Revision: https://secure.phabricator.com/D20292
Summary:
Ref T13259.
(NOTE) This is "infrastructure/guts" only and breaks some stuff in Query subclasses. I'll fix that stuff in a followup, it's just going to be a larger diff that's mostly mechanical.
When a user clicks "Next Page" on a tasks view and gets `?after=100`, we want to show them the next 100 //visible// tasks. It's possible that tasks 1-100 are visible, but tasks 101-788 are not, and the next visible task is 789.
We load task ID `100` first, to make sure they can actually see it: you aren't allowed to page based on objects you can't see. If we let you, you could use "order=title&after=100", plus creative retitling of tasks, to discover the title of task 100: create tasks named "A", "B", etc., and see which one is returned first "after" task 100. If it's "D", you know task 100 must start with "C".
Assume the user can see task 100. We run a query like `id > 100` to get the next 100 tasks.
However, it's possible that few (or none) of these tasks can be seen. If the next visible task is 789, none of the tasks in the next page of results will survive policy filtering.
So, for queries after the initial query, we need to be able to page based on tasks that the user can not see: we want to be able to issue `id > 100`, then `id > 200`, and so on, until we overheat or find a page of results (if 789-889 are visible, we'll make it there before overheating).
Currently, we do this in a not-so-great way:
- We pass the external cursor (`100`) directly to the subquery.
- We query for that object using `getPagingViewer()`, which is a piece of magic that returns the real viewer on the first page and the omnipotent viewer on the 2nd..nth page. This is very sketchy.
- The subquery builds paging values based on that object (`array('id' => 100)`).
- We turn the last result from the subquery back into an external cursor (`200`) and save it for the next time.
Note that the last step happens BEFORE policy (and other) filtering.
The problems with this are:
- The phantom-schrodinger's-omnipotent-viewer thing isn't explicitly bad, but it's sketchy and generally not good. It feels like it could easily lead to a mistake or bug eventually.
- We issue an extra query each time we page results, to convert the external cursor back into a map (`100`, `200`, `300`, etc).
- In T13259, there's a new problem: this only works if the object is filtered out for policy reasons and the omnipotent viewer can still see it. It doesn't work if the object is filtered for some other reason.
To expand on the third point: in T13259, we hit a case where 100+ consecutive objects are broken (they point to a nonexistent `repositoryID`). These objects get filtered unconditionally. It doesn't matter if the viewer is omnipotent or not.
In that case: we set the next external cursor from the raw results (e.g., `200`). Then we try to load it (using the omnipotent viewer) to turn it into a map of values for paging. This fails because the object isn't loadable, even as the omnipotent viewer.
---
To fix this stuff, the new approach steps back a little bit. Primarily, I'm separating "external cursors" from "internal cursors".
An "External Cursor" is a string that we can pass in `?after=X` URIs. It generally identifies an object which the user can see.
An "Internal Cursor" is a raw result from `loadPage()`, i.e. before policy filtering. Usually, (but not always) this is a `LiskDAO` object that doesn't have anything attached yet and hasn't been policy filtered.
We now do this, broadly:
- Convert the external cursor to an internal cursor.
- Execute the query using internal cursors.
- If necessary, convert the last visible result back into an external cursor at the very end.
This fixes all the problems:
- Sketchy Omnipotent Viewer: We no longer ever use an omnipotent viewer. (We pick cursors out of the result set earlier, instead.)
- Too Many Queries: We only issue one query at the beginning, when going from "external" to "internal". This query is generally unavoidable since we need to make sure the viewer can see the object and that it's a real / legitimate object. We no longer have to query an extra time for each page.
- Total Failure on Invalid Objects: we now page directly with objects out of `loadPage()`, before any filtering, so we can page over invisible or invalid objects without issues.
This change switches us over to internal/external cursors, and makes simple cases (ID-based ordering) work correctly. It doesn't work for complex cases yet since subclasses don't know how to get paging values out of an internal cursor yet. I'll update those in a followup.
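A much-simplified sketch of the new flow; every method name here is a descriptive placeholder rather than the real API:
```
<?php
// Placeholder names throughout; this only illustrates the shape of the
// external-cursor / internal-cursor flow described above.
function fetch_page($query, $external_cursor, $limit) {
  // One policy-checked lookup, with the real viewer, converts "?after=100"
  // into an internal cursor.
  $cursor = $query->newInternalCursorFromExternalCursor($external_cursor);

  $visible = array();
  while (count($visible) < $limit) {
    $page = $query->loadPageAfterInternalCursor($cursor, $limit);
    if (!$page) {
      break; // no more raw results
    }

    // Advance the cursor from the RAW results, before any filtering, so we
    // can page across invisible or invalid rows. (The real query also
    // tracks how many raw rows it has examined and may "overheat".)
    $cursor = $query->newInternalCursorFromResult(end($page));

    foreach ($query->applyPolicyFilter($page) as $result) {
      $visible[] = $result;
    }
  }

  $visible = array_slice($visible, 0, $limit);

  // Only a visible result ever becomes an external cursor again.
  $next = $visible
    ? $query->newExternalCursorFromResult(end($visible))
    : null;

  return array($visible, $next);
}
```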
Test Plan: For now, poked around a bit. Some stuff is broken, but normal ID-based lists load correctly and page properly. See next diff for a more detailed test plan.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13259
Differential Revision: https://secure.phabricator.com/D20291
Summary:
See PHI1063. See PHI1114. Ref T13253. Currently, you can't `bin/worker execute` an archived task and can't `bin/worker retry` a successful task.
Although it's good not to do these things by default (particularly, retrying a successful task will double its effects), there are plenty of cases where you want to re-run something for testing/development/debugging and don't care that the effect will repeat (you're in a dev environment, the effect doesn't matter, etc).
Test Plan: Ran `bin/worker execute/retry` against archived/successful tasks. Got prompted to add more flags, then got re-execution.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13253
Differential Revision: https://secure.phabricator.com/D20246
Summary:
See <https://discourse.phabricator-community.org/t/unhandled-exception-muting-completed-bulk-jobs/2449>. Bulk Jobs have an "edge" table but currently do not support edge transactions. Add support.
This stops "Mute Notifications" from fataling.
The action probably doesn't do what the reporting user expects (it stops edits to the job object from sending notifications; it does not stop the edits the job performs from sending notifications) but I think this change puts us in a better place no matter what, even if we eventually clarify or remove this behavior.
Test Plan: Clicked "Mute Notifications" on a bulk job, got an effect instead of a fatal.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20226
Summary:
See <https://discourse.phabricator-community.org/t/traceback-rendering-task-query-in-dashboard/2450/>.
It looks like this blames to D19126, which added some more complex constraint logic but overlooked "range" constraints, which are handled separately.
Test Plan:
- Added a custom "date" field to Maniphest with `"search": true`.
- Executed a range query against the field.
Then:
- Before: Warnings about undefined indexes in the log.
- After: No such warnings.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: jbrownEP
Differential Revision: https://secure.phabricator.com/D20225
Summary:
Ref T5401. Depends on D20201. Add timestamps to worker tasks to track task creation, and pass that through to archive tasks. This lets us measure the total time the task spent in the queue, not just the duration it was actually running.
Also displays this information in the daemon status console; see screenshot: {F6225726}
Test Plan:
Stopped daemons, ran `bin/search index --all --background` to create lots of tasks, restarted daemons, observed expected values for `dateCreated` and `epochArchived` in the archive worker table.
Also tested the changes to `unarchiveTask` by forcing a search task to permanently fail and then `bin/worker retry`ing it.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin, epriestley, PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T5401
Differential Revision: https://secure.phabricator.com/D20200
Summary:
Depends on D20196. See PHI985. When empty, the "moved/copied" gutter currently renders with the same background color as the rest of the line. This can be misleading because it makes code look more indented than it is, especially if you're unfamiliar with the tool:
{F6225179}
If we remove this misleading coloration, we get a white gap. This is more clear, but looks a little odd:
{F6225181}
Instead, give this gutter a subtle background fill in all cases, to make it more clear that it's a separate gutter region, not a part of the text diff:
{F6225183}
Test Plan: See screenshots. Copied text from a diff, added/removed inlines, etc.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20197
Summary:
Ref T13161. Ref T12822. See PHI870. Long ago, the web was simple. You could leave your doors unlocked, you knew all your neighbors, crime hadn't been invented yet, and `<th>3</th>` was a perfectly fine way to render a line number cell containing the number "3".
But times have changed!
- In PHI870, this isn't good for screenreaders. We can't do much about this, so switch to `<td>`.
- In D19349 / T13105 and elsewhere, this `::after { content: attr(data-n); }` approach seems like the least bad general-purpose approach for preventing line numbers from being copied. Although Differential needs even more magic beyond this in the two-up view, this is likely good enough for the one-up view, and is consistent with other views (paste, harbormaster logs, general source display) where this technique is sufficient on its own.
The chance this breaks //something// is pretty much 100%, but we've got a week to figure out what it breaks. I couldn't find any issues immediately.
Test Plan:
- Created, edited, deleted inlines in 1-up and 2-up views.
- Replied, keyboard-navigated, keyboard-replied, drag-selected, poked and prodded everything.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13161, T12822
Differential Revision: https://secure.phabricator.com/D20188
Summary:
Depends on D20185. Ref T13161. Fixes T6791.
See some discussion in T13161. I want to move to a world where:
- whitespace changes are always shown, so users writing YAML and Python are happy without adjusting settings;
- the visual impact of indentation-only whitespace changes is significantly reduced, so indentation changes are easy to read and users writing Javascript or other flavors of Javascript are happy.
D20181 needs a little more work, but generally tackles these visual changes and lets us always show whitespace changes, but show them in a very low-impact way when they're relatively unimportant.
However, a second aspect to this is how the diff is "aligned". If this file:
```
A
```
...is changed to this file:
```
X
A
Y
Z
```
...diff tools will generally produce this diff:
```
+ X
A
+ Y
+ Z
```
This is good, and easy to read, and what humans expect, and it will "align" in two-up like this:
```
1 X
1 A 2 A
3 Y
4 Z
```
However, if the new file looks like this instead:
```
X
A'
Y
Z
```
...we get a diff like this:
```
- A
+ X
+ A'
+ Y
+ Z
```
This one aligns like this:
```
1 A
1 X
2 A'
3 Y
4 Z
```
This is correct if `A` and `A'` are totally different lines. However, if `A'` is pretty much the same as `A` and it just had a whitespace change, human viewers would prefer this alignment:
```
1 X
1 A 2 A'
3 Y
4 Z
```
Note that `A` and `A'` are different, but we've aligned them on the same line. `diff`, `git diff`, etc., won't do this automatically, and a `.diff` doesn't have a way to say "these lines are more or less the same even though they're different", although some other visual diff tools will do this.
Although `diff` can't do this for us, we can do it ourselves, and already have the code to do it, because we already nearly did this in the changes removed in D20185: in "Ignore All" or "Ignore Most" mode, we pretty much did this already.
This mostly just restores a bit of the code from D20185, with some adjustments/simplifications. Here's how it works:
- Rebuild the text of the old and new files from the diff we got out of `arc`, `git diff`, etc.
- Normalize the files (for example, by removing whitespace from each line).
- Diff the normalized files to produce a second diff.
- Parse that diff.
- Take the "alignment" from the normalized diff (whitespace removed) and the actual text from the original diff (whitespace preserved) to build a new diff with the correct text, but also better diff alignment.
Originally, we normalized with `diff -bw`. I've replaced that with `preg_replace()` here mostly just so that we have more control over things. I believe the two behaviors are pretty much identical, but this way lets us see more of the pipeline and possibly add more behaviors in the future to improve diff quality (e.g., normalize case? normalize text encoding?).
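A sketch of just the normalization step; the exact replacement pattern is an assumption about what "removing whitespace from each line" means here:
```
<?php
// Strip whitespace from every line so that a second diff of the normalized
// files aligns lines which differ only in indentation or spacing.
function normalize_lines_for_alignment(array $lines) {
  $normalized = array();
  foreach ($lines as $line) {
    $normalized[] = preg_replace('/\s+/', '', $line);
  }
  return $normalized;
}

// "A" and "  A" normalize to the same string, so the alignment diff pairs
// them on one row, as in the example above.
var_dump(normalize_lines_for_alignment(array('X', '  A', 'Y', 'Z')));
```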
Test Plan:
{F6217133}
(Also, fix a unit test.)
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13161, T6791
Differential Revision: https://secure.phabricator.com/D20187
Summary:
Depends on D20181. Depends on D20182. Fixes T3498. Ref T13161. My claim, at least, is that D20181 can be tweaked to be good enough to throw away this "feature" completely.
I think this feature was sort of a mistake, where the ease of access to `diff -bw` shaped behavior a very long time ago and then the train just ran a long way down the tracks in the same direction.
Test Plan: Grepped for `whitespace`, deleted almost everything. Poked around the UI a bit. I'm expecting the whitespace changes to get some more iteration this week, so I'm not being hugely pedantic about testing this stuff exhaustively.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13161, T3498
Differential Revision: https://secure.phabricator.com/D20185
Summary:
Ref T13249. Ref T11738. See PHI985. Currently, we have a crude heuristic for guessing what line in a source file provides the best context.
We get it wrong in a lot of cases, sometimes selecting very silly lines like "{". Although we can't always pick the same line a human would pick, we //can// pile on heuristics until this is less frequently completely wrong and perhaps eventually get it to work fairly well most of the time.
Pull the logic for this into a separate standalone class and make it testable to prepare for adding heuristics.
Test Plan: Ran unit tests, browsed various files in the web UI and saw as-good-or-better context selection.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13249, T11738
Differential Revision: https://secure.phabricator.com/D20171
Summary:
Ref T13248. This will probably need quite a bit of refinement, but we can reasonably allow subtype definitions to adjust custom field behavior.
Some places where we use fields are global, and always need to show all the fields. For example, on `/maniphest/`, where you can search across all tasks, you need to be able to search across all fields that are present on any task.
Likewise, if you "export" a bunch of tasks into a spreadsheet, we need to have columns for every field.
However, when you're clearly in the scope of a particular task (like viewing or editing `T123`), there's no reason we can't hide fields based on the task subtype.
To start with, allow subtypes to override "disabled" and "name" for custom fields.
Test Plan:
- Defined several custom fields and several subtypes.
- Disabled/renamed some fields for some subtypes.
- Viewed/edited tasks of different subtypes, got desired field behavior.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13248
Differential Revision: https://secure.phabricator.com/D20161
Summary:
Ref T13253. Fixes T6615. See that task for discussion.
- Remove three keys which serve no real purpose: `dataID` doesn't do anything for us, and the two `leaseOwner` keys are unused.
- Rename `leaseOwner_2` to `key_owner`.
- Fix an issue where `dataID` was nullable in the active table and non-nullable in the archive table.
In practice, //all// tasks have data, so all tasks have a `dataID`: if they didn't, we'd already fatal when trying to move tasks to the archive table. Just clean this up for consistency, and remove the ancient codepath which imagined tasks with no data.
Test Plan:
- Ran `bin/storage upgrade`, inspected tables.
- Ran `bin/phd debug taskmaster`, worked through a bunch of tasks with no problems.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13253, T6615
Differential Revision: https://secure.phabricator.com/D20175
Summary:
Depends on D20175. Ref T12425. Ref T13253. Currently, importing commits can stall search index rebuilds, since index rebuilds use an older priority from before T11677 and weren't really updated for D16585.
In general, we'd like to complete all indexing tasks before continuing repository imports. A possible exception is if you rebuild an entire index with `bin/search index --rebuild-the-world`, but we could queue those at a separate lower priority if issues arise.
Test Plan: Ran some search indexing through the queue.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13253, T12425
Differential Revision: https://secure.phabricator.com/D20177
Summary:
Ref T6703. When we import external data from a third-party install to a Phacility instance, we must link instance accounts to central accounts: either existing central accounts, or newly created central accounts that we send invites for.
During this import, or when users register and claim those new accounts, we do a write from `admin.phacility.com` directly into the instance database to link the accounts.
This is pretty sketchy, and should almost certainly just be an internal API instead, particularly now that it's relatively stable.
However, it's what we use for now. The process has had some issues since the introduction of `%R` (combined database name and table reference in queries), and now needs to be updated for the new `providerConfigPHID` column in `ExternalAccount`.
The problem is that `%R` isn't doing the right thing. We have code like this:
```
$conn = new_connection_to_instance('turtle');
queryf($conn, 'INSERT INTO %R ...', $table);
```
However, the `$table` resolves `%R` using the currently-executing-process information, not anything specific to `$conn`, so it prints `admin_user.user_externalaccount` (the name of the table on `admin.phacility.com`, where the code is running).
We want it to print `turtle_user.user_externalaccount` instead: the name of the table on `turtle.phacility.com`, where we're actually writing.
To force this to happen, let callers override the namespace part of the database name.
Long term: I'd plan to rip this out and replace it with an API call. This "connect directly to the database" stuff is nice for iterating on (only `admin` needs hotfixes) but very very sketchy for maintaining.
Test Plan: See next diff.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T6703
Differential Revision: https://secure.phabricator.com/D20167
Summary: Ref T13250. See D20149. Mostly: clarify semantics. Partly: remove magic "null" behavior.
Test Plan: Poked around, but mostly just inspection since these are pretty much one-for-one.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: yelirekim
Maniphest Tasks: T13250
Differential Revision: https://secure.phabricator.com/D20154
Summary:
Ref T13250. Ref T13249. Some remarkup rules, including `{image ...}` and `{meme ...}`, may cache URIs as objects because the remarkup cache is `serialize()`-based.
URI objects with `query` cached as a key-value map are no longer valid and can raise `__toString()` fatals.
Bump the cache version to purge them out of the cache.
Test Plan: See PHI1074.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13250, T13249
Differential Revision: https://secure.phabricator.com/D20160
Summary: See D20136. This method is sort of inherently bad because it is destructive for some inputs (`x=1&x=2`) and has "PHP-flavored" behavior for other inputs (`x[]=1&x[]=2`). Move to explicit `...AsMap` and `...AsPairList` methods.
Test Plan: Bit of an adventure, see inlines in a minute.
Reviewers: amckinley
Reviewed By: amckinley
Differential Revision: https://secure.phabricator.com/D20141
Summary:
Depends on D20106. Ref T6703. Since I plan to change the `ExternalAccount` table, these migrations (which rely on `save()`) will stop working.
They could be rewritten to use raw queries, but I suspect few or no installs are affected. At least for now, just make them safe: if they would affect data, fatal and tell the user to perform a more gradual upgrade.
Also remove an `ALTER IGNORE TABLE` (this syntax was removed at some point) and fix a `%Q` when adjusting certain types of primary keys.
Test Plan: Ran `bin/storage upgrade --no-quickstart --force --namespace test1234` to get a complete migration since the beginning of time.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T6703
Differential Revision: https://secure.phabricator.com/D20107
Summary:
Ref T13244. See D20080. Rather than randomly jittering service calls, we can give each host a "metronome" that ticks every 60 seconds to get load to spread out after one cycle.
For example, web001 ticks (and makes a service call) when the second hand points at 0:17, web002 at 0:43, web003 at 0:04, etc.
For now I'm just planning to seed the metronomes randomly based on hostname, but we could conceivably give each host an assigned offset some day if we want perfectly smooth service call rates.
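A sketch of the hostname-seeded offset; the hash choice and the tick test are illustrative, not necessarily how the metronome class works:
```
<?php
// Derive a stable per-host offset in [0, 60) from the hostname, then
// "tick" whenever the wall clock crosses that second of each minute.
function metronome_offset($hostname, $period = 60) {
  return abs(crc32($hostname)) % $period;
}

$offset = metronome_offset(php_uname('n'));
$should_tick = ((time() % 60) === $offset);

printf("offset=%02d tick=%s\n", $offset, $should_tick ? 'yes' : 'no');
```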
Test Plan: Ran unit tests.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13244
Differential Revision: https://secure.phabricator.com/D20087
Summary:
See T13240. Ref T13242. When we're issuing a query that will raise policy exceptions (i.e., give the user a "You Shall Not Pass" dialog if they can not see objects it loads), don't do space filtering in MySQL: when objects are filtered out in MySQL, we can't distinguish between "bad/invalid ID/object" and "policy filter", so we can't raise a policy exception.
This leads to cases where viewing an object shows "You Shall Not Pass" if you can't see it for any non-Spaces reason, but "404" if the reason is Spaces.
There's no product reason for this, it's just that `spacePHID IN (...)` is important for non-policy-raising queries (like a list of tasks) to reduce how much application filtering we need to do.
Test Plan:
Before:
```
$ git pull
phabricator-ssh-exec: No repository "spellbook" exists!
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
After:
```
$ git pull
phabricator-ssh-exec: [You Shall Not Pass: Unknown Object (Repository)] This object is in a space you do not have permission to access.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13242
Differential Revision: https://secure.phabricator.com/D20042
Summary:
Ref T920. Ref T13235. This adds a `Future`, similar to `TwilioFuture`, for interacting with Amazon's SNS service.
Also updates the documentation.
Also makes the code consistent with the documentation by accepting a `media` argument.
Test Plan: Clicked the "send test message" button from the Settings UI.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Maniphest Tasks: T13235, T920
Differential Revision: https://secure.phabricator.com/D19982
Summary:
Depends on D19955. Ref T920. Ref T5969. Update Postmark to accept new Message objects. Also:
- Update the inbound whitelist.
- Add a little support for `media` configuration.
- Add a service call timeout (see T5969).
- Drop the needless word "Implementation" from the Adapter class tree. I could call these "Mailers" instead of "Adapters", but then we get "PhabricatorMailMailer" which feels questionable.
Test Plan: Used `bin/mail send-test` to send mail via Postmark with various options (multiple recipients, text vs html, attachments).
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T5969, T920
Differential Revision: https://secure.phabricator.com/D19956
Summary:
Ref T920. Over time, mail has become much more complex and I think considering "mail", "sms", "postcards", "whatsapp", etc., to be mostly-the-same is now a more promising avenue than building separate stacks for each one.
Throw away all the standalone SMS code, including the Twilio config options. I have a separate diff that adds Twilio as a mail adapter and functions correctly, but it needs some more work to bring upstream.
This permanently destroys the `sms` table, which no real reachable code ever wrote to. I'll call this out in the changelog.
Test Plan:
- Grepped for `SMS` and `Twilio`.
- Ran storage upgrade.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T920
Differential Revision: https://secure.phabricator.com/D19939
Summary:
Depends on D19906. Ref T13222. This isn't going to win any design awards, but make the "wait" and "answered" elements a little more clear.
Ideally, the icon parts could be animated Google Authenticator-style timers (but I think we'd need to draw them in a `<canvas />` unless there's some clever trick that I don't know) or maybe we could just have the background be like a "water level" that empties out. Not sure I'm going to actually write the JS for either of those, but the UI at least looks a little more intentional.
Test Plan:
{F6070914}
{F6070915}
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13222
Differential Revision: https://secure.phabricator.com/D19908
Summary:
Depends on D19903. Ref T13222. This was a Facebook-specific thing from D6202 that I believe no other install ever used, and I'm generally trying to move away from the old "event" system (the more modern modular/engine patterns generally replace it).
Just drop support for this. Since the constant is being removed, anything that's actually using it should break in an obvious way, and I'll note this in the changelog.
There's no explicit replacement but I don't think this hook is useful for anything except "being Facebook in 2013".
Test Plan:
- Grepped for `TYPE_AUTH_WILLLOGIN`.
- Logged in.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13222
Differential Revision: https://secure.phabricator.com/D19904
Summary:
Depends on D19919. Ref T11351. This method appeared in D8802 (note that "get...Object" was renamed to "get...Transaction" there, so this method was actually "new" even though a method of the same name had existed before).
The goal at the time was to let Harbormaster post build results to Diffs and have them end up on Revisions, but this eventually got a better implementation (see below) where the Harbormaster-specific code can just specify a "publishable object" where build results should go.
The new `get...Object` semantics ultimately broke some stuff, and the actual implementation in Differential was removed in D10911, so this method hasn't really served a purpose since December 2014. I think that broke the Harbormaster thing by accident and we just lived with it for a bit, then Harbormaster got some more work and D17139 introduced "publishable" objects which was a better approach. This was later refined by D19281.
So: the original problem (sending build results to the right place) has a good solution now, this method hasn't done anything for 4 years, and it was probably a bad idea in the first place since it's pretty weird/surprising/fragile.
Note that `Comment` objects still have an unrelated method with the same name. In that case, the method ties the `Comment` storage object to the related `Transaction` storage object.
Test Plan: Grepped for `getApplicationTransactionObject`, verified that all remaining callsites are related to `Comment` objects.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T11351
Differential Revision: https://secure.phabricator.com/D19920