Summary:
Ref T11613. In D16503/T11598 I refined the setup flow to improve messaging for early-stage setup issues, but failed to fully untangle things.
We sometimes still try to access a cache which uses configuration before we build configuration, which causes an error.
Instead, store "are we in flight / has setup ever worked?" in a separate cache which doesn't use the cache namespace. This stops us from trying to read config before building config.
Test Plan:
Hit bad extension error with a fake extension, got a proper setup help page:
{F1812803}
Solved the error, reloaded, broke things again, got a "friendly" page:
{F1812805}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11613
Differential Revision: https://secure.phabricator.com/D16542
Summary: Fixes T11607.
Test Plan:
- Made a comment using `{key ...}`.
- Used `bin/mail show-outbound --id X --dump-html > test.html` to review HTML:
{F1805304}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11607
Differential Revision: https://secure.phabricator.com/D16523
Summary:
Fixes T11583.
- When users run `bin/storage upgrade` for the first time on a new install, we currently give them a prompt which feels rough and which they can only reasonably ever answer "yes" to.
- We generally use cautionary language ("found issues with schema") in this workflow. Adjustments are now routine, so use more neutral and progress-oriented language ("found adjustments to apply").
Test Plan:
- Ran `bin/storage upgrade --namespace kappa123`, got an adjustment using neutral language without prompting.
- Dropped a key, ran `bin/storage upgrade`, got normal workflow (but with more neutral language).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11583
Differential Revision: https://secure.phabricator.com/D16509
Summary: Fixes T11593. We ask for a list of values when searching for custom "link" fields, but don't handle it correctly when actually constructing a query.
Test Plan:
Added this custom field:
```
{
"mycompany.target-version": {
"name": "Target Version",
"type": "link",
"search": true
}
}
```
Set a task to "beta". Let daemons index it. Queried for:
```
constraints: {
"custom.mycompany.target-version": [
"beta"
]
}
```
Got just one result back.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11593
Differential Revision: https://secure.phabricator.com/D16508
Summary: Ref T11589. Provide a way for scripts to say "just continue if database config fails", and use it in `bin/config` and `bin/storage`.
Test Plan:
- Broke database config.
- Ran `bin/config`, worked fine.
- Ran `bin/storage`, got helpful "set up the database" message.
- Ran `bin/repository`, got fatal.
- Ran normal site with valid/invalid config, got proper feedback.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16502
Summary:
Ref T11589. This runs:
- preflight checks (critical checks: PHP version stuff, extensions);
- configuration;
- normal checks.
The PHP checks are split into critical ("bad version") and noncritical ("sub-optimal config").
I tidied up the extension checks slightly; we realistically depend on `cURL` nowadays.
Test Plan:
- Faked a preflight failure.
- Hit preflight check.
- Got expected error screen.
- Loaded normal pages.
- Hit a normal setup check.
- Used DarkConsole "Startup" tab to verify that preflight checks take <1ms to run (we run them on every page without caching, at least for now, but they only do trivial checks like PHP versions).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16500
Summary: Caught one of these while reviewing docs, grepped for the other one.
Test Plan: `grep`, reading
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16498
If the namespace is something like "test_example", we currently fail to
renamespace the dump.
(Cowboy committing this since this is currently blocking a data export.)
Test Plan:
- Renamespaced a local dump, examined the output, saw 60 create / 60 use, reimported it.
- Will export in production.
Auditors: chad
Summary:
Fixes T11577. When we connect to a host and try to select a database which does not exist, we currently treat it as though the host wasn't reachable.
This isn't correct, and prevents storage from being initialized while already in cluster mode, since the "config" database won't exist yet the first time we connect.
Instead, distinguish between `AphrontSchemaQueryException` (thrown on connection if the requested database is not present) and other errors.
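Roughly, the distinction looks like this (a sketch only; the connection call is an illustrative stand-in, not the real API):
```
try {
  // Connecting also selects the requested database; if the host is up but
  // the database does not exist yet, this throws a schema exception.
  $conn = $this->establishConnection($database_name); // illustrative helper
} catch (AphrontSchemaQueryException $ex) {
  // Host reachable, database missing: expected for a brand new install, so
  // treat the host as healthy and let `bin/storage upgrade` initialize it.
} catch (Exception $ex) {
  // Anything else still means the host itself is unreachable.
  throw $ex;
}
```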
Test Plan:
- Put Phabricator into cluster database mode (`cluster.databases = ...`).
- Swapped `storage.default-namespace` to force initialization of a new install.
- Ran `bin/storage upgrade`.
- Before patch: Immediate fatal about unreachability.
- After patch: Database initialized.
- Also ran initialization steps in traditional single-host mode (`cluster.databases` empty, `mysql.host` configured).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11577
Differential Revision: https://secure.phabricator.com/D16489
Summary:
Ref T10867 for the original use case. This workflow gives administrators a way to stop the daemons for upgrades or maintenance, then bring them back up without failing builds that were running at the time.
On our organization's phab install, builds are running 24/7. The majority of these builds last for at least several minutes, and contain build steps which fail if interrupted and then resumed, as happens when turning daemons on and off.
Instead of allowing these build steps to resume execution as normal, this workflow will instruct active builds to restart their entire build process instead of just resuming whichever step they were on.
Test Plan:
Contrived a build plan which would fail if resumed partway through:
- lease a working copy
- command `touch restart_{build.id}`
- command `test -e restart_{build.id} && rm restart_{build.id} && sleep 60`
Followed the old procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/phd start`
- saw the builds fail
Followed the new procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/harbormaster restart --active`
- `./bin/phd start`
- saw the builds pass
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Maniphest Tasks: T10867
Differential Revision: https://secure.phabricator.com/D16485
Summary: Ref T11132. Significantly cleans up the Config app: new layout, icons, spacing, etc. Some minor todos remain around re-designing "issues", mobile support, and maybe another pass at actual Group pages.
Test Plan: Visit and test every page in the config app, set new items, resolve setup issues, etc.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16468
Summary: Ref T11132. This gets rid of the red bar for admins and instead shows a new menu item next to notifications/chat if there are unresolved configuration issues. Menu goes away if there are no issues. May move this into the bell icon later, but I think this might be the right place to start, especially for NUX and updates. Maybe limit the number of items?
Test Plan:
Tested with some, lots, and no config issues.
{F1790156}
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16461
Summary: Previously, the chatbot docs instructed users to get certificates for the conduit API and put the cert in a `conduit.cert` config key. In order to get the chatbot to work, I needed to instead get an API key and put it in the `conduit.token` config entry.
Test Plan: Doc fix. Tried the new documented way and it worked.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D16443
Summary: Fixes T11508. The config entry `remarkup.ignored-object-names` already contains a blacklist of object names that should be ignored in the web UI. This change makes that blacklist also apply to the chatbot. This makes it possible to have a chatbot ignore things like V1, V2, Q1 and any other phrases the user may not want to generate links to objects.
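As a sketch, the filtering idea looks roughly like this (how the bot fetches the configured pattern is glossed over, and the pattern value is just an example):
```
$pattern = '/^(Q|V)\d+$/'; // example blacklist matching "V1", "Q1", etc.
foreach ($object_names as $key => $name) {
  if (preg_match($pattern, $name)) {
    // Never linkify blacklisted names, matching the web UI behavior.
    unset($object_names[$key]);
  }
}
```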
Test Plan: Create objects (tasks, slowvotes, etc.) then mention the object names in chat (with the bot running). The bot should respond with helpful links to the given objects. Then add the object names to the blacklist through the config web UI. This apparently triggers the bot to restart itself. Then mention the object names in chat again. The bot should no longer respond with links because those object names have been added to the blacklist regex.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: epriestley
Maniphest Tasks: T11508
Differential Revision: https://secure.phabricator.com/D16442
Summary:
Ref T11522. This tries to reduce the cost of rewriting a repository by making handles smarter about rewritten commits.
When a handle references an unreachable commit, try to load a rewrite hint for the commit. If we find one, change the handle name to "OldHash > NewHash" to provide a strong hint that the commit was rewritten and that copy/pasting the old hash (say, to the CLI) won't work.
I think this notation isn't totally self-evident, but users can click it to see the big error message on the page, and it's at least obvious that something weird is going on, which I think is the important part.
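Very roughly, the handle adjustment looks something like this (the hint lookup and accessor are hypothetical stand-ins):
```
if (!$is_reachable) {
  $hint = $this->loadRewriteHintForCommit($old_hash); // hypothetical helper
  if ($hint) {
    $new_hash = $hint->getNewCommitIdentifier(); // hypothetical accessor
    // Render "OldHash > NewHash" so copy/pasting the old hash (say, to the
    // CLI) is clearly not going to work.
    $handle->setName(
      substr($old_hash, 0, 12).' > '.substr($new_hash, 0, 12));
  }
}
```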
Some possible future work:
- Not sure this ("Recycling Symbol") is the best symbol? Seems sort of reasonable but maybe there's a better one.
- Putting this information directly on the hovercard could help explain what this means.
Test Plan: {F1780719}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11522
Differential Revision: https://secure.phabricator.com/D16437
Summary: Switches over to new property UI boxes, splits core and apps into separate pages. Moves Versions into "All Settings". I think there are some docs I likely need to update here as well.
Test Plan: Click on each item in the sidebar, see new headers.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D16429
Summary:
Fixes T11490. Currently, this query can not use a key and the table size may be quite large.
Adjust the query so it can use a key for both selection and ordering, and add that key.
Test Plan: Ran `EXPLAIN` on the old query in production, then added the key and ran `EXPLAIN` on the new query. Saw key in use, and "rows" examined drop from 29,273 to 15.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11490
Differential Revision: https://secure.phabricator.com/D16423
Summary:
Ref T6996. Depends on D16407. This does the same stuff as D16407, but for `bin/storage renamespace`. In particular:
- Support writing directly to a file (so we can get good errors on failure).
- Support in-process compression.
Also add support for reading out of a `storage dump` subprocess, so we don't have to do a dump-to-disk + renamespace + compress dance and can just stream out of MySQL directly to a compressed file on disk.
This is used in the second stage of instance exports (see T7148).
It would be nice to share more code with `bin/storage dump`, and possibly to just make this a flag for it, although we still do need to do the file-based version when importing (vs exporting). I figured that was better left for another time.
Test Plan:
Ran `bin/storage renamespace --live --output x --from A --to B --compress --overwrite` and similar commands.
Verified that a compressed, renamespaced dump came out of the other end.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6996
Differential Revision: https://secure.phabricator.com/D16410
Summary:
Ref T6996. If you do this kind of thing in the shell, you don't get a good error by default if the `dump` command fails:
```
$ bin/storage dump | gzip > output.sql.gz
```
This can be worked around with some elaborate bash tricks, but they're really clunky and unintuitive.
We also need to do this in several places (while writing backups; while performing exports), and I don't want to copy clunky bash tricks all over the codebase.
Instead, provide `--output` and `--compress` flags which just do this processing inside `bin/storage dump`. It will fail appropriately if any of the underlying operations fail. This also makes the write a little safer (refuses to overwrite) and the code more reusable.
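Roughly the idea for the in-process compression path (a sketch only; it assumes the dump arrives as chunks from a subprocess and simplifies the error handling):
```
$file = gzopen($output_path, 'wb1');
if ($file === false) {
  throw new Exception(pht('Failed to open "%s" for writing.', $output_path));
}
foreach ($dump_chunks as $chunk) {
  if (gzwrite($file, $chunk) === false) {
    gzclose($file);
    unlink($output_path); // don't leave a partial dump behind
    throw new Exception(pht('Failed to write dump data to disk.'));
  }
}
gzclose($file);
```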
Test Plan:
- Did three dumps, with no flags, `--output`, and `--output --compress`.
- Verified all three took similar amounts of time and were identical except for "Date Exported" timestamps in comments (except that the compressed one was compressed).
- Used `gunzip` to examine the compressed one, verified it was really compressed.
- Faked a write error, saw proper command behavior (clean up file + exit with error).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6996
Differential Revision: https://secure.phabricator.com/D16407
Summary:
The cluster synchronization code runs either actively (before returning a response to `git clone`, for example) or passively (routinely, as the daemons update repositories).
The active sync runs as the web user (if running `git clone http://...`) or the VCS user (if running `git clone ssh://...`). But the passive sync runs as the daemon user.
All of these sync processes need to run actual commands as the daemon user (`git fetch ...`).
For the active ones, we must `sudo`.
For the passive ones, we're already the right user. We run the same code, and end up trying to sudo to ourselves, which `sudo` isn't happy about by default.
Depending on how `sudo` is configured and which users things are running as, this might work anyway, but it's silly; if it doesn't work, it requires you to go make non-obvious, weird config changes that are unintuitive and somewhat nonsensical. This is probably worse on balance than adding a bit of complexity to the code.
Instead, test which user we're running as. If it's already the right user, don't sudo.
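Roughly, the new check (a sketch; the config lookup and command construction are illustrative):
```
$daemon_user = PhabricatorEnv::getEnvConfig('phd.user');
$current_user = idx(posix_getpwuid(posix_geteuid()), 'name');

if ($current_user == $daemon_user) {
  // Passive sync: we already are the daemon user, so run the command
  // directly.
  $command = csprintf('git fetch %s', $remote);
} else {
  // Active sync (web or VCS user): escalate with sudo, as before.
  $command = csprintf(
    'sudo -E -n -u %s -- git fetch %s',
    $daemon_user,
    $remote);
}
```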
Test Plan:
- Ran `bin/repository update --trace` as daemon user, saw no more `sudo`.
- Ran a `git clone` to make sure that didn't break.
Reviewers: chad, avivey
Reviewed By: avivey
Differential Revision: https://secure.phabricator.com/D16391
Summary:
Ref T11458. Depends on D16388. Currently, we're very aggressive about closing connections in the taskmaster daemons.
This can end up taking up a lot of resources. In particular, because the outgoing port for outbound connections normally can not be reused for 60 seconds after a connection closes, we may exhaust outbound ports on the host if there's a big queue full of stuff that's being processed very quickly.
At a minimum, we are //always// holding open a `worker` connection, which we always need again right away. So even in the best case we end up opening/closing this about once per second, and each daemon takes up ~60 outbound ports when it should take up ~1.
So, make two adjustments (sketched below):
- First, only close connections which we haven't issued a query on in the last 60 seconds. This should prevent us from closing connections that we'll need again immediately in most cases. In the worst case, we shouldn't be eating up any extra ports under default TCP behavior.
- Second, explicitly close connections. We were relying on implicit/GC behavior (maybe as a holdover from very long ago, before we got connection wrappers in place?), which probably did about the same thing but isn't as predictable and can't be profiled or instrumented.
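A rough sketch of the idle-close policy (the connection API names here are illustrative):
```
$idle_limit = 60;
$now = time();
foreach ($connections as $key => $connection) {
  if (($now - $connection->getLastActiveEpoch()) < $idle_limit) {
    // Used within the last 60 seconds (like the `worker` connection); we
    // will very likely need it again immediately, so keep it open.
    continue;
  }
  // Explicitly close instead of relying on GC, so the behavior is
  // predictable and can be profiled or instrumented.
  $connection->close();
  unset($connections[$key]);
}
```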
Test Plan:
This is somewhat difficult to test completely convincingly in isolation since the problem behavior depends on production scales and the workload, and to some degree on configuration.
I tested that this stuff basically works by adding logging to connect/close and running the daemons, verifying that they churned connections a lot before this change (e.g., ~1/s even at no load) and churn rarely afterward (e.g., almost never at no load).
I ran some workload through them to make sure I didn't completely break anything.
The best real test is just seeing how production responds. Current inbound/outbound connections on `secure001` are 1,200:
```
secure001 $ netstat -t | grep :mysql | wc -l
1164
```
Current outbound from `repo001` are 18,600:
```
repo001 $ netstat -t | grep :mysql | wc -l
18663
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11458
Differential Revision: https://secure.phabricator.com/D16389
Summary:
Fixes T11446. We can raise the misleading error:
> No valid databases are configured!
...when a valid master is configured but unreachable.
Instead, more carefully raise either "nothing is configured" or "nothing is reachable".
Test Plan: Configured only a master, artificially severed it, got "nothing is reachable" instead of "nothing is configured".
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11446
Differential Revision: https://secure.phabricator.com/D16386
Summary:
Fixes T11453. Currently, commit message summaries are limited to 80 bytes. This may only be 20-40 characters for CJK languages or languages with Cyrillic script.
Increase storage size to 255, then truncate to the shorter of 255 bytes or 80 glyphs. This preserves the same behavior for Latin languages, but is less tight for Russian, etc.
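A minimal sketch of the intended truncation, assuming libphutil's `PhutilUTF8StringTruncator` (how the column length gets looked up is glossed over):
```
$byte_limit = 255; // whatever the storage column actually holds
$summary = id(new PhutilUTF8StringTruncator())
  ->setMaximumBytes($byte_limit)
  ->setMaximumGlyphs(80)
  ->truncateString($first_line);
```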
Some minor additional changes:
- Provide a way to ask "how much data fits in this column?" so we don't have to duplicate column lengths across summary checks or UI errors like "title too long".
- Remove the `text80` datatype, since no other columns use it and we have no use cases (or likely use cases) for it.
Test Plan:
- Made a commit with a Cyrillic title, saw reasonable summarization in UI:
{F1757522}
- Added and ran unit tests.
- Grepped for removed `SUMMARY_MAX_LENGTH` constant.
- Grepped for removed `text80` data type.
Reviewers: avivey, chad
Reviewed By: avivey
Subscribers: avivey
Maniphest Tasks: T11453
Differential Revision: https://secure.phabricator.com/D16385
Summary:
Ref T11404. This improves things by about 10%:
- Use `PhutilClassMapQuery`, which has slightly better caching.
- Do a little less work to generate pretty error messages.
- Make the "disabled" code a little faster (and sort of clearer, too?) by doing less fancy stuff.
These are pretty minor adjustments and not the sort of optimizations I'd make normally, but this code gets called ~100x (once per revision) and generates ~10 fields normally, so even small savings can amount to something.
(I also want to try to make `arc` faster in the next update, and improving Conduit performance helps with that.)
Test Plan: Ran `differential.revision.search`, saw cost drop from ~195ms to ~170ms locally.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16355
Summary:
Ref T11404. On my system, this improves performance by 10-15% for `differential.revision.search`.
`PhutilTypeSpec` provides high quality typechecking and is great for user-facing things that need good error messages.
However, it's also a bit slow, and pointless here (the API is internal and it only has one possible option).
I think I added this after writing `checkMap` just because I wanted to use it more often. My desire is sated after finding many reasonable ways to use it to give users high-quality error messages about things like configuration files.
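Roughly the flavor of the change (the option name here is hypothetical):
```
// Before: strict (but relatively slow) validation of an internal map which
// only has one possible option.
PhutilTypeSpec::checkMap(
  $options,
  array(
    'force' => 'optional bool',
  ));
$force = idx($options, 'force', false);

// After: just read the option directly.
$force = idx($options, 'force', false);
```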
Test Plan: Profiled `differential.revision.search` before and after change, saw wall time drop from ~220ms to ~195ms.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16354
Summary:
Ref T11404. Depends on D16350.
Currently, custom fields can issue "N+1" queries in some cases, so querying 100 revisions issues 100 extra queries.
This affects all `*.search` endpoints for objects with custom fields, and some older endpoints (notably `differential.query`).
This change bulk loads "normal" custom fields, which gets rid of some of these queries. Instead of loading fields for each object, we build a big list of all fields and load them all at once.
The next change will tackle the remaining inefficient edge queries.
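Conceptually, the bulk pattern looks like this (the field-loading helpers are illustrative, not the actual CustomField API):
```
// Collect every "normal", storage-backed field across all objects first...
$all_fields = array();
foreach ($objects as $object) {
  foreach ($this->getNormalFieldsForObject($object) as $field) { // illustrative
    $all_fields[] = $field;
  }
}

// ...then load their storage with one bulk query, instead of one query per
// object.
$this->loadFieldStorage($all_fields); // illustrative
```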
Test Plan:
- Configured a custom field with normal database storage in Differential.
- Ran `differential.query`, looking at custom fields in results for correctness.
- Ran `differential.revision.search`, looking at custom fields in results for correctness.
- In both cases, observed queries drop from `3N` to `2N` (all the "normal" custom field stuff got bulk loaded).
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16351
Summary:
Ref T11404. Currently, SearchEngineAttachments can bulk-load data but SearchEngineExtensions can not.
This leads to poor performance of custom fields. See T11404 for discussion.
This changes the API to support a bulk load + format pattern like the one Attachments use. The next change will use it to bulk-load custom field data.
Test Plan:
- Ran `differential.query`, `differential.revision.search` as a sanity check.
- No behavioral changes are expected.
- See next revision.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16350
Summary: Fixes T11392. If some tasks are restricted, we only have PHIDs for them, not objects. Just use the PHIDs instead.
Test Plan: {F1741335}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11392
Differential Revision: https://secure.phabricator.com/D16345
Summary:
Ref T4788. This gives us a new level of graceful degradation, so now we show:
- Zero through 100 connected tasks: whole graph.
- More than 100 connected tasks, but fewer than 100 direct parents/subtasks: just parents and subtasks, with "..." to hint that the graph is cut off.
- More than 100 parents and children: just the "sorry, too much stuff" error message.
Test Plan: {F1740882}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16344
Summary:
Fixes T11386. Ref T4788.
- Apparently fix weird strikethrough effect? Spooky!
- Provide a little icon hint in the left column about which tasks are direct parents/children, vs just reachable somehow. I don't think this is super useful/important, but seems maybe nice?
Test Plan: {F1740779}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788, T11386
Differential Revision: https://secure.phabricator.com/D16342
Summary:
Ref T8116. Partially scavenged from D14152. This roughs in a new Packages application for Arcanist extensions and third-party applications, and adds a "Publisher" object.
A "Publisher" represents an individual or entity who is publishing a package, like "Phacility". It's explicitly //not// necessarily the original author -- just the primary entity vouching for the safety of the code.
A publisher just has a name and a unique key for now. For example, Phacility might have "Phacility" and "phacility", respectively.
Unique keys are immutable, e.g., the package "phacility/arcanist" will always be exactly the same package by exactly the same publisher.
Test Plan: {F1731621}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T8116
Differential Revision: https://secure.phabricator.com/D16314
Summary:
`mysql` has the magic feature of ignoring port arguments and using the socket when connecting to localhost.
This flag makes it not do that.
Test Plan: `./bin/storage shell`, execute `status`, see `Connection: localhost via TCP/IP`.
Reviewers: joshuaspence, #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D16317
Summary:
Ref T4788. One install has some particularly impressive task graphs which are thousands of nodes large.
The current graph is pretty broken in these cases. For now, just render a "too big to show" message. In the future, I'd plan to finesse this (e.g., show parents/children, show links to parents/children, etc).
Test Plan:
- Viewed a normal task.
- Set limit to 3, viewed a task with graph size 6, saw an error message.
- Viewed a revision stack graph (unaffected).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16295
Summary:
Fixes T8911. This corrects several issues which could crop up if a calendar event query matched more results than the query limit:
- The desired order was not applied by the SearchEngine -- it applies the first builtin order instead. Provide a proper builtin order.
- When we generate ghosts, we can't do limiting in the database because we may select and then immediately discard a large number of parent events which are outside of the query range.
  - For now, just don't limit results to get the behavior correct.
  - This may need to be refined eventually to improve performance.
- When trimming events, we could trim parents and fail to generate ghosts from them. Separate parent events out first.
- Try to simplify some logic.
Test Plan: An "Upcoming" dashboard panel with limit 10 and the main Calendar "Upcoming Events" UI now show the same results.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T8911
Differential Revision: https://secure.phabricator.com/D16289
Summary:
Ref T9275. Swaps Calendar over to modular transactions. Theoretically, this has almost no effect on anything.
Ref T10633. I didn't actually do anything here yet, but this gets us ready to put timestamps in email.
Test Plan: Created and edited a bunch of events, nothing seemed catastrophically broken.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9275, T10633
Differential Revision: https://secure.phabricator.com/D16286
Summary:
Ref T11309. In that task, a user misunderstood two parts of this error:
- They took "exception" to mean "unexpected failure", when it was intended to mean "rare circumstance".
- They interpreted the internal ID number of a commit to mean that Phabricator was malfunctioning.
Make the language of this condition more direct, explaining what the situation means in greater detail.
Additionally, we would previously re-throw this exception, which would make the daemon exit, wait a moment, and restart. This was normal and expected.
When //unexpected// failures occur, it's important to do this: it prevents a daemon failing in a loop from causing too many side effects (e.g., limit of 1 email per 5 seconds instead of thousands per second).
When expected, permanent failures occur, we do not need to do this: the task will not be retried. I just did it because it was slightly more consistent ("failures restart daemons") and we had few permanent failure types at the time.
We have more now, and restarting the daemons generates some additional logs which have the potential to confuse. Cycling the daemon also (intentionally) reduces the rate at which we process tasks, which can be bad for permanent failures like "deleted commit" because users can delete a huge number of commits and possibly clog up the queue with cycle-after-failure actions.
Test Plan:
Tried to process a deleted commit, saw a new message:
```
2016-07-11 9:30:22 AM [STDE] <VERB> PhabricatorTaskmasterDaemon Task 1428658 was cancelled: Commit "R55:6c46b7d0fb82a859ca3f87a95dc8dcceef8088c9" (with internal ID "282161") is no longer reachable from any branch, tag, or ref in this repository, so it will not be imported. This usually means that the branch the commit was on was deleted or overwritten.
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11309
Differential Revision: https://secure.phabricator.com/D16268
Summary:
Ref T3554. Makes `bin/worker cancel --class <classname>` work (cancel all tasks with that type).
This is useful in development if your queue is full of a bunch of gunk, and a need has occasionally arisen in production environments (usually "one option is cancel everything and move on").
Test Plan: Ran `bin/worker cancel` to cancel blocks of tasks by class name.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T3554
Differential Revision: https://secure.phabricator.com/D16267
Summary:
Fixes T11274. When task titles are long, we currently wrap stuff and the trace graph renders real weird.
Instead, prevent task/revision titles from wrapping/overflowing.
(This works in a slightly weird way, and `text-overflow: ellipsis;` has no apparent effect on any of the containers.)
Test Plan: {F1712394}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11274
Differential Revision: https://secure.phabricator.com/D16233
Summary:
Ref T5267. Two general changes:
- Make string extraction use a cache (sketched below), so that it doesn't take several minutes every time you change something. Minor updates now only take a few seconds (like `arc liberate` and similar).
- Instead of dumping a sort-of-template file out, write out to a cache (`src/.cache/i18n_strings.json`). I'm planning to add more steps to read this cache and do interesting things with it (emit translatewiki strings, generate or update standalone translation files, etc).
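A rough sketch of the caching idea (the cache layout and helper names are invented for illustration):
```
foreach ($files as $path) {
  $hash = md5_file($path);
  if (idx($old_hashes, $path) === $hash) {
    // Unchanged since the last run: reuse the cached extraction.
    $strings[$path] = $old_strings[$path];
  } else {
    $strings[$path] = $this->extractStringsFromFile($path); // hypothetical
  }
  $new_hashes[$path] = $hash;
}
```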
Test Plan:
- Ran `bin/i18n extract`.
- Ran it again, saw it go a lot faster.
- Changed stuff, ran it, saw it only look at new stuff.
- Examined caches.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16227
Summary: Fixes T11159. We get two different values here (`NULL` and `0`) with different meanings.
Test Plan:
- Ran `STOP SLAVE;`.
- Saw this:
{F1710181}
- Ran `START SLAVE;`.
- Back to normal.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11159
Differential Revision: https://secure.phabricator.com/D16225
Summary: Ref T4788. It's not easy to tell at a glance which objects are open vs closed. Try to make that a bit more clear. This could probably use some more tweaking.
Test Plan: {F1708330}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16219
Summary:
Ref T4788. As it turns out, our tasks are very tightly connected.
Instead of loading every parent/child task, then every parent/child of those tasks, etc., etc., only load tasks in the "same direction" that we're already heading.
For example, we load children of children, but not parents of children. And we load parents of parents, but not children of parents.
Basically we only go "up" and "down" now, but not "out" as much. This should reduce the gigantic multiple-thousand-node graphs currently shown in the UI.
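Conceptually, the traversal now looks something like this (the loaders are hypothetical stand-ins):
```
$seen = array($start_phid => true);
$stack = array(
  array($start_phid, 'up'),
  array($start_phid, 'down'),
);
while ($stack) {
  list($phid, $direction) = array_pop($stack);
  $adjacent = ($direction == 'up')
    ? $this->loadParentsOf($phid)     // hypothetical loader
    : $this->loadChildrenOf($phid);   // hypothetical loader
  foreach ($adjacent as $next_phid) {
    if (empty($seen[$next_phid])) {
      $seen[$next_phid] = true;
      // Keep heading in the same direction; never turn back "out".
      $stack[] = array($next_phid, $direction);
    }
  }
}
```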
I still discover the whole graph for revisions, because I think it's probably more useful and always much smaller. That might need adjustment too, though.
Test Plan: Seems fine locally??
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16218
Summary:
Ref T4788. This fixes all the bugs I was immediately able to catch:
- "Directory-Like" graph shapes could draw too many vertical lines.
- "Reverse-Directory-Like" graph shapes could draw too few vertical lines.
- Terminated, branched graph shapes drew the very last line to the wrong place.
This covers the behavior with tests, so we should be able to fix more stuff later without breaking anything.
Test Plan:
- Added failing tests and made them pass.
{F1708158}
{F1708159}
{F1708160}
{F1708161}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16216
Summary: Ref T4788. This seems reasonable locally, but not sure how it will feel on real data. Might need some tweaks, or might just be a terrible idea.
Test Plan: {F1708059}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16214
Summary:
Ref T4788. This separates the revision graph view into a base class with core logic and a revision class with Differential-specific logic, so I can subclass it in Maniphest, etc., and try using it in other applications to show similar graphs.
Not sure if we'll stick with it, but even if we don't this makes the code a bit cleaner and gets custom rendering logic out of the RevisionViewController, which is nice.
Test Plan: Viewed revisions, saw the stack UI completely unchanged.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16213