Summary:
Ref T11114. Recent changes broke the links to jump to inline comments from the previews because they get hooked up with JS.
Restore the linking behavior.
Test Plan: Clicked "View" on an inline comment preview, jumped to that comment.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11114
Differential Revision: https://secure.phabricator.com/D17131
Summary: Ref T11114. This gets comments nearly working on EditEngine. The only significant issue I caught is that the "View" link doesn't render properly because it depends on JS, which is tricky to hook up. I'll clean that up in a future diff.
Test Plan: {F2279201}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11114
Differential Revision: https://secure.phabricator.com/D17116
Summary: Ref T11114. This field just stores the value of "Auditors" so you can trigger auditors explicitly later on if you want.
Test Plan: Created and edited revisions with "Auditors".
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11114
Differential Revision: https://secure.phabricator.com/D17070
Summary:
Ref T11114. This replaces the old edit controller with a new one based entirely on EditEngine.
This removes the CustomFieldEditEngineExtension hack for Differential, since remaining field types are fairly straightforward and work with existing EditEngine support, as far as I can tell.
Test Plan:
- Created a revision via web diffs.
- Updated a revision via web diffs.
- Edited a revision via web.
- Edited nonstandard custom fields ("Blame Revision", "JIRA Issues").
- Created a revision via CLI.
- Updated a revision via CLI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11114
Differential Revision: https://secure.phabricator.com/D17054
Summary: Ref T11114. This doesn't really support anything yet, but technically works if you manually go to `/editpro/`.
Test Plan: {F2117302}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11114
Differential Revision: https://secure.phabricator.com/D17043
Summary:
I'm about 90% sure this fixes the intermittent test failure on `testObjectSubscribersPolicyRule()` or whatever.
We use `spl_object_hash()` to identify objects when passing hints about policy changes to policy rules. This is hacky, and I think it's the source of the unit test issue.
Specifically, `spl_object_hash()` is approximately just returning the memory address of the object, and two objects can occasionally use the same memory address (one gets garbage collected; another uses the same memory).
If I replace `spl_object_hash()` with a static value like "zebra", the test failure reproduces.
Instead, sneak an object ID onto a runtime property. This is at least as hacky but shouldn't suffer from the same intermittent failure.
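Roughly, the replacement works something like this (a minimal sketch, not the actual change; the property name is invented for illustration):
```lang=php
// Sketch only: tag each object with a process-unique ID the first time we
// see it, instead of relying on spl_object_hash(), whose values can be
// reused after an object is garbage collected.
function get_policy_hint_key($object) {
  static $next_id = 1;
  if (!isset($object->_policyHintKey)) {
    // Sneak a runtime property onto the object.
    $object->_policyHintKey = $next_id++;
  }
  return 'object.'.$object->_policyHintKey;
}
```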
Test Plan: Ran `arc unit --everything`, but I never got a reliable repro of the issue in the first place, so who knows.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D17029
Summary:
Ref T11939. IPv4 addresses can normally only be written in one way, but IPv6 addresses have several formats.
For example, the addresses "FFF::", "FfF::", "fff::", "0ffF::", "0fFf:0::", and "0FfF:0:0:0:0:0:0:0" are all the same address.
Normalize all addresses before writing them to logs, etc, so we store the most-preferred form ("fff::", above).
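The core of the normalization is a round trip through PHP's `inet_pton()` / `inet_ntop()`; a minimal sketch (the function name is illustrative, not the actual API):
```lang=php
function normalize_remote_address($address) {
  $binary = @inet_pton($address);
  if ($binary === false) {
    throw new Exception(pht('Unable to parse IP address "%s".', $address));
  }
  // "0FfF:0:0:0:0:0:0:0" comes back out as the preferred form, "fff::".
  return inet_ntop($binary);
}
```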
Test Plan:
Ran an SSH clone over IPv6:
```
$ git fetch ssh://local@::1/diffusion/26/locktopia.git
```
It worked; verified that the address was read out of `SSH_CLIENT` sensibly.
Faked my remote address as a non-preferred-form IPv6 address using `preamble.php`.
Failed to log in, verified that the preferred-form version of the address appeared in the user activity log.
Made IPv6 requests over HTTP:
```
$ curl -H "Host: local.phacility.com" "http://[::1]/"
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11939
Differential Revision: https://secure.phabricator.com/D16987
Summary:
Ref T11939. Depends on D16984. Now that CIDRLists can contain IPv6 addresses, blacklist all of the reserved IPv6 space.
This reserved blacklist is used to prevent users from accessing internal services via "Import Calendar" or "Add Macro".
They can't actually reach IPv6 addresses via these mechanisms yet because we need to do more work to support outbound IPv6 requests, but make sure reserved IPv6 space is already blacklisted when that support eventually arrives.
Also, clean up some error messages (e.g., for trying to hit a bad URI in "Add Macro").
Test Plan:
- Loaded pages with default blacklist.
- Tried to make requests into IPv6 space.
- Currently, this is impossible because of `parse_url()` and `gethostbynamel()` calls.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11939
Differential Revision: https://secure.phabricator.com/D16986
Summary:
Ref T11922. After updating to HEAD of `master`, you need to manually rebuild the index. We don't do this during `bin/storage upgrade` because it can take a very long time (`secure.phabricator.com` took roughly an hour) and can happen while Phabricator is running.
However, if we don't warn users about this they'll just get a broken index unless they go read the changelog (or file an issue, then we tell them to go read the changelog).
This adds a very simple table for notes to administrators so we can write a "you need to go rebuild the index" note, then adds one.
Administrators clear the note by completing the activity and running `bin/config done reindex`. This isn't automatic because there are various strategies you can use to approach the issue, which I'll discuss in greater detail in the linked documentation.
Also, fix an issue where `bin/storage upgrade --apply <patch>` could try to re-mark an already-applied patch as applied.
Test Plan:
- Ran storage upgrades.
- Got instructions to rebuild search index.
- Cleared instructions with `bin/config done reindex`.
Reviewers: chad
Reviewed By: chad
Subscribers: avivey
Maniphest Tasks: T11922
Differential Revision: https://secure.phabricator.com/D16965
Summary: Found these in the `secure` error logs: one bad call, one bad column.
Test Plan: Searched for empty string. Double-checked method name.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16948
Summary:
Ref T11741. On recent-enough versions of MySQL, we would prefer to use InnoDB for fulltext indexes instead of MyISAM.
Allow `bin/storage adjust` to read actual and expected table engines, and apply adjustments as necessary.
We have one existing bad table that uses the wrong engine, `metamta_applicationemail`. This change corrects that table.
Test Plan:
- Ran `bin/storage upgrade`.
- Saw the adjustment phase apply this change properly:
```
>>>[463] <query> ALTER TABLE `local_metamta`.`metamta_applicationemail` COLLATE = 'utf8mb4_bin', ENGINE = 'InnoDB'
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11741
Differential Revision: https://secure.phabricator.com/D16941
Summary:
Ref T11741. InnoDB uses a stopwords table instead of a stopwords file.
During `storage upgrade`, synchronize the table from the stopwords file on disk.
Test Plan:
- Ran `storage upgrade`.
- Ran `select * from stopwords`, saw stopwords.
- Added some garbage to the table.
- Ran `storage upgrade`, saw it remove the garbage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11741
Differential Revision: https://secure.phabricator.com/D16940
Summary:
Ref T11044.
- Use shorter lock names. Fixes T11916.
- These granular exceptions now always raise as a more generic "Cluster" exception, even for a single host, because there's less special code around running just one database.
Test Plan:
- Configured bad `mysql.port`, ran `bin/storage upgrade`, got a more helpful error message.
- Ran `bin/storage upgrade --trace`, saw shorter lock names.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044, T11916
Differential Revision: https://secure.phabricator.com/D16924
Summary: Fixes T11845. Users can still embed a text panel on the home page to give it some ambiance.
Test Plan: Wrote an autoplay video as a comment, saw it in feed. Before change: autoplay. After change: no autoplay. On task: still autoplay.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11845
Differential Revision: https://secure.phabricator.com/D16920
Summary:
Ref T11044. Few issues here:
- The `PhutilProxyException` is missing an argument (hit this while in read-only mode).
- The `$ref_key` is unused.
- When you add a new master to an existing cluster, we can incorrectly apply `.php` patches which we should not reapply. Instead, mark them as already-applied.
Test Plan:
- Poked this locally, but will initialize `secure004` as an empty master to be sure.
Reviewers: chad, avivey
Reviewed By: avivey
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16916
Summary: Ref T11044. Fixes T11672. In T11672, persistent connections seem to work fine, but they can require `max_connections` and other settings to be raised. Since most users don't need them, make them an advanced option.
Test Plan: Configured persistent connections, loaded some pages, observed persistent connections get used.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044, T11672
Differential Revision: https://secure.phabricator.com/D16913
Summary:
Ref T11044. Sometimes we have a sequence of patches like this:
- `01.newtable.sql`: Adds a new table to Files.
- `02.movedata.php`: Moves some data that used to live in Tokens to the new table.
This is fairly rare, but not unheard of. More commonly, we can have this sequence:
- `03.newtable.sql`: Add a new column to Phame.
- `04.setvalue.php`: Copy data into that new column.
In both cases, when applying database-by-database, we can get into trouble.
- In the first case, if Files is on a different master, we'll try to move data into a new table before creating the table.
- In the second case, if Phame is on a different master, the PHP patch will connect to it before we add the new column.
In either case, we try to interact with tables or columns which don't exist yet.
Instead, apply each patch in order, to all databases which need it. So we'll apply `01.newtable.sql` EVERYWHERE first, then move on.
In the case of PHP patches, we also now only apply them once, since they never make schema changes. It should normally be OK to apply them more than once safely, but this is a little faster and a little safer if we ever make a mistake.
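A simplified sketch of the new application order (the helper methods are illustrative, not the real workflow API):
```lang=php
// Apply each patch everywhere before moving on to the next patch, so a
// later .php patch never runs against a master that is missing an earlier
// .sql patch.
foreach ($patches as $patch) {
  foreach ($masters as $master) {
    if ($patch->isAppliedOn($master)) {
      continue;
    }
    $patch->applyTo($master);
  }
}
```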
Test Plan:
- Ran `bin/storage upgrade` on single-host and clustered setups.
- Initialized new storage on single-host and clustered setups.
- Upgraded again after initialization.
- Ran with `--apply`.
- Ran with `--dry-run`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16912
Summary:
Ref T11044. This was old Facebook cruft for reading configuration from SMC (and maybe doing some other questionable things). See D183.
(See also D175 for discussion of this from 2011.)
In modern Phabricator, you can subclass `SiteConfig` to provide dynamic configuration, and we do so in the Phacility cluster. This lets you change any config, and change in response to requests (e.g., for instancing) and is generally more powerful than this mechanism was.
This configuration provider theoretically let you roll your own replication or partitioning, but in practice I believe no one ever did, and no one ever could have anyway without more support in the upstream (for migrations, read-after-write, etc).
Test Plan:
- Grepped for removed option.
- Browsed around with clustering off.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16911
Summary:
Ref T11044. One popular tool in a modern operations environment is Puppet. The primary purpose of this tool is to randomly revert hosts to older or different configurations.
Introducing an element of chaotic unpredictability into operations trains staff to be on high alert at all times, rather than lulled into complacency by predictability or consistency.
When Puppet reverts a Phabricator host's configuration to an older version, we might start writing data to a lot of crazy places where it shouldn't go. This will create a big sticky mess that is virtually impossible to undo, mostly because we'll get two files with ID 123 or two tasks with ID 456 or whatever else and good luck with that.
Instead, after changing the partition layout, require `bin/storage partition` to be run. This writes a copy of the config everywhere.
Then, when we start serving web requests, make sure every database has the exact same config. This will foil Puppet by refusing to run requests on hosts it has reverted.
Test Plan:
- Changed partition configuration.
- Ran Phabricator.
- FOILED!
- Ran `bin/storage partition` to sync config.
- Things worked again.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16910
Summary:
Ref T11044. Fixes T10931. This option has essentially never been useful for anything, and we've picked the best implementation for a long time (MySQLi if available, MySQL if not).
I am not aware of any reason to ever set this manually. If someone comes up with some bizarre but legitimate use case that I haven't thought of, we can modularize it.
Test Plan: Browsed around. Grepped for `mysql.implementation`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10931, T11044
Differential Revision: https://secure.phabricator.com/D16909
Summary:
I frequently run into a situation where I want to kill tasks that have accumulated a lot of failures regardless of what class they are. Or I'll want to kill every worker of a certain class but only if it has failed at least once. This change allows me to run `./bin/worker cancel --class <MYCLASS> --min-failure-count 5` to only kill tasks with at least 5 failed attempts.
The `--min-failure-count N` argument can be used by itself as well as with `--class CLASSNAME`. I don't think it makes sense for it to work with `--id ID`, but I'm not dead set on that or anything.
Test Plan: I ran the worker management workflow with and without the `--min-failure-count` argument and it worked as expected.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley, yelirekim
Differential Revision: https://secure.phabricator.com/D16906
Summary:
Fixes T10759. Fixes T11817. This runs all the general sanity/configuration checks on all the active servers.
None of these warnings are very important, and this doesn't change any logical stuff.
Depends on D16904.
Test Plan: Painstakingly triggered each warning, verified that they rendered correctly and that messages told me which host was affected.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759, T11817
Differential Revision: https://secure.phabricator.com/D16905
Summary:
Ref T10759. Check master/replica status during startup.
After D16903, this also means that we check this status after a database comes back online after being unreachable.
If a master is replicating, fatal (since this can do a million kinds of bad things).
If a replica is not replicating, warn (this just means the replica is behind so some data is at risk).
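The check itself is roughly the following (a simplified sketch; the column names come from MySQL's `SHOW SLAVE STATUS`, the other names are illustrative):
```lang=php
$status = queryfx_one($conn, 'SHOW SLAVE STATUS');
$is_replicating = ($status !== null) &&
  ($status['Slave_IO_Running'] === 'Yes') &&
  ($status['Slave_SQL_Running'] === 'Yes');

if ($is_master && $is_replicating) {
  // Fatal: writing to a replicating master can do a million kinds of bad things.
  throw new Exception(pht('Master "%s" is replicating from another host.', $host));
}

if (!$is_master && !$is_replicating) {
  // Warn only: the replica is just falling behind, so some data is at risk.
}
```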
Also: if your masters were actually configured properly (mine weren't until this change detected it), we would throw away patches as we applied them, so they would only apply to the //first// master. Instead, properly apply all migration patches to all masters.
Test Plan:
- Started Phabricator with a replicating master, got a fatal.
- Stopped replication on a replica, got a warning.
- With two non-replicating masters, upgraded storage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16904
Summary:
Ref T10759. Currently, these checks run only against configured masters. Instead, check every host.
These checks also sort of cheat through restart during a recovery, when some hosts will be unreachable: they test for "disaster" by seeing if no masters are reachable, and just skip all the checks in that case.
This is bad for at least two reasons:
- After recent changes, it is possible that //some// masters are dead but it's still OK to start. For example, "slowvote" may have no master, but everything else is reachable. We can safely run without slowvote.
- It's possible to start during a disaster and miss important setup checks completely, since we skip them, get a clean bill of health, and never re-test them.
Instead:
- Test each host individually.
- Fundamental problems (lack of InnoDB, bad schema) are fatal on any host.
- If we can't connect, raise it as a //warning// to make sure we check it later. If you start during a disaster, we still want to make sure that schemata are up to date if you later recover a host.
In particular, I'm going to add these checks soon:
- Fatal if a "master" is replicating.
- Fatal if a "replica" is not replicating.
- Fatal if a database partition config differs from web partition config.
- When we let a database off with a warning because it's down, and later upgrade it to a fatal because we discover it is broken after it comes up again, fatal everything. Currently, we keep running if we "discover" the presence of new fatals after surviving setup checks for the first time.
Test Plan:
- Configured with multiple masters, intentionally broke one (simulating a disaster where one master is lost), saw Phabricator still startup.
- Tested individual setup checks by intentionally breaking them.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16902
Summary:
Ref T11044. I'm going to hold this until after the release cut, but I think it's good to go.
This allows installs to configure multiple masters in `cluster.databases` and partition applications across them (for example, put Maniphest on a dedicated database).
When we make a Maniphest connection we go look up which master we should be hitting first, then connect to it.
This has at least approximately been planned for many years, so the actual change is largely just making sure that your config makes sense.
Test Plan:
- Configured `db001.epriestley.com` and `db002.epriestley.com` as master/master.
- Partitioned applications between them.
- Interacted with various applications, saw writes go to the correct host.
- Viewed "Database Servers" and saw partitioning information.
- Ran schema upgrades.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16876
Summary: Ref T11034. Try to produce a roughly-one-sentence summary instead of a roughly-one-paragraph summary for the browse dialog.
Test Plan:
- Added unit tests, ran unit tests.
- Wrote a longer summary for a project, browsed to it, saw a shorter summary.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11034
Differential Revision: https://secure.phabricator.com/D16892
Summary:
Fixes T11809. Ref
- Explicitly document the summary icon hints -- I don't think these are too hard to figure out (and maybe this stuff should just go in the tooltips) but we can start here.
- Use color + shape to distinguish between "cancelled" and "declined", not just color (for users with vision accessibility issues).
- Translate a "minute(s)" string into sensible English.
- Use RSVP status on the month view green circle thing.
Test Plan:
- Read docs.
- Looked at month view.
- Read reminder mail.
- Viewed month view mobile view.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11809
Differential Revision: https://secure.phabricator.com/D16872
Summary: Depends on D16847. Ref T11044. This updates the remaining storage-related workflows from the CLI to accommodate multiple masters.
Test Plan:
- Configured multiple masters.
- Ran all `bin/storage` workflows.
- Ran `arc unit --everything`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16848
Summary:
Depends on D16115. Ref T11044. In the brave new world of multiple masters, we need to check the schemata on each master when looking for missing storage patches, keys, schema changes, etc.
This realigns all the "check out what's up with that schema" calls to work for multiple hosts, and updates the web UI to include a "Server" column and allow you to browse per-server.
This doesn't update `bin/storage`, so it breaks things on its own (and unit tests probably won't pass). I'll update that in the next change.
Test Plan: Configured local environment in cluster mode with multiple masters, saw both hosts' status reported in web UI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16847
Summary:
Ref T11044. This moves toward partitioned application databases:
- You can define multiple masters.
- Convert all the easily-convertible code to become multi-master aware.
This doesn't convert most of `bin/storage` or "Config > Database (Stuff)" yet, as both are quite involved. They still work for now, but only operate on the first master instead of all masters.
Test Plan: Configured multiple masters, browsed around, ran `bin/storage` commands, ran `bin/storage --host ...`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16115
Summary:
This has been replaced by `PolicyCodex` after D16830. Also:
- Rebuild Celerity map to fix grumpy unit test.
- Fix one issue on the policy exception workflow to accommodate the new code.
Test Plan:
- `arc unit --everything`
- Viewed policy explanations.
- Viewed policy errors.
Reviewers: chad
Reviewed By: chad
Subscribers: hach-que, PHID-OPKG-gm6ozazyms6q6i22gyam
Differential Revision: https://secure.phabricator.com/D16831
Summary:
Ref T5267. When extracting data from `pht()` calls, also extract the argument types and export them into the map so they can be used by consumers.
We recognize plurals (`phutil_count()`, `new PhutilNumber`) and genders (`phutil_person()`). We'll need to annotate the codebase for those, since they're currently runtime-only.
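For example, an annotated call site looks something like this (the surrounding variables are illustrative):
```lang=php
// Wrapping the argument in phutil_count() marks it as a number, so the
// extractor can record the "number" type and translations can vary on it.
echo pht(
  'Scaling pool "%s" up to %s daemon(s).',
  $pool_name,
  phutil_count($daemons));
```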
Test Plan:
Rebuilt extraction maps, got data like this (note "number" type annotation).
```
"Scaling pool \"%s\" up to %s daemon(s).": {
"uses": [
{
"file": "/daemon/PhutilDaemonOverseer.php",
"line": 378
}
],
"types": [
null,
"number"
]
},
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16823
Summary:
Ref T4788. I thought I implemented this, but actually didn't.
When we're in the "mid-sized" fallback mode (graph has more than 100 nodes, but not more than 100 parents/children), don't actually draw the graph. It's almost always uninteresting and huge.
Instead, this just renders a list of direct parents, then the task, then the direct children, which is pretty straightforward.
Test Plan: Set limit to 5, saw mid-sized fallback graph with no actual graph drawing.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16816
Summary: Ref T5267. Fix one minor bug (paths were not being resolved properly) and one minor string issue (missing `%d` in a string).
Test Plan: Extracted strings, got a cleaner result.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16808
Summary: Ref T7931. This is still quite rough, but should technically send vaguely-useful email as part of the standard trigger infrastructure.
Test Plan: Ran `bin/phd start`, created an event shortly, saw reminder email send in `bin/mail list-outbound`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T7931
Differential Revision: https://secure.phabricator.com/D16784
Summary: Fixes T11799. This string is varying on the first parameter, but should vary on the second parameter.
Test Plan: Ran `bin/garbage set-policy ...`, saw proper translation.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11799
Differential Revision: https://secure.phabricator.com/D16769
Summary:
Depends on D16755. Right now, we build a setup check map (to run preflight checks), then later load libraries.
This means any checks included in third-party libraries don't get added to the map, and no longer run.
(These are rare, but Phacility has a couple).
Instead, delete the caches after loading extra libraries.
Test Plan: With this and D16755, re-ran setup checks and saw Phacility setup checks run.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16756
Summary:
Fixes T11771. Adds a lock around each GC process so we don't try to, e.g., delete old files on two machines at once just because they're both running trigger daemons.
The other aspects of this daemon (actual triggers; nuance importers) already have separate locks.
Test Plan: Ran `bin/phd debug trigger --trace`, saw daemon acquire locks and collect garbage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11771
Differential Revision: https://secure.phabricator.com/D16739
Summary: Ref T11766. When users run `git pull` or similar, log the operation in the pull log.
Test Plan: Performed SSH pulls, got a log in the database. Today, this event log is purely diagnostic and has no UI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11766
Differential Revision: https://secure.phabricator.com/D16738
Summary: Ref T11773. This is a first step toward a more complete solution, but should make the worst case much less bad: prior to this change, the worst case was "30 second execution timeout". After this patch, the worst case is "no results + explanatory message", which is strictly better.
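The general shape of the "overheating" safeguard looks something like this (a rough sketch with invented names, not the actual query code):
```lang=php
$limit = $query->getLimit();
$overheat_limit = $limit * 10;   // Stop after examining 10x the requested rows.
$seen = 0;
$results = array();

while (count($results) < $limit) {
  $page = $query->loadNextRawPage();
  if (!$page) {
    break;
  }
  $seen += count($page);
  foreach ($query->filterByPolicy($page) as $visible) {
    $results[] = $visible;
  }
  if ($seen >= $overheat_limit) {
    // Give up with "no results + explanatory message" instead of running
    // until the execution timeout kills us.
    $query->setIsOverheated(true);
    break;
  }
}
```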
Test Plan:
Made all feed stories fail policy checks, loaded home page.
- Before adding overheating: 9,600 queries / 20 seconds
- After adding overheating: 376 queries / 800ms
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11773
Differential Revision: https://secure.phabricator.com/D16735
Summary:
Ref T10747. Rough flow is:
- Run a query.
- Select a new "Export Events..." action.
- This lets you define an "Export", which has a unique URL you can paste into Google Calendar or Calendar.app or whatever.
Most of this does nothing yet but here's the boilerplate.
Test Plan: Doesn't do anything yet.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10747
Differential Revision: https://secure.phabricator.com/D16675
Summary:
Ref T11672. At low loads, this causes us to use more connections, which is pushing some installs over the default limits.
Rather than trying to walk users through changing `max_connections`, `open_files_limit`, `fs.file-max`, `ulimit`, etc., just put things back for now. After T11044 we should have headroom to use persistent connections within the default limits on all reasonable systems.
Test Plan: Loaded Phabricator, poked around.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11672
Differential Revision: https://secure.phabricator.com/D16591
Summary: Fixes T11676. Instead of trying to fit task titles to the display, truncate them and let the table scroll.
Test Plan:
Table now scrolls when cramped:
{F1843396}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11676
Differential Revision: https://secure.phabricator.com/D16583
Summary:
Fixes T11677. This makes two minor adjustments to the repository import daemons:
- The first step ("Message") now queues at a slightly-lower-than-default (for already-imported repositories) or very-low (for newly importing repositories) priority level.
- The other steps now queue at "default" priority level. This is actually what they already did, but without this change their behavior would be to inherit the priority level of their parents.
This has two effects:
- When adding new repositories to an existing install, they shouldn't block other things from happening anymore.
- The daemons will tend to start one commit and run through all of its steps before starting another commit. This makes progress through the queue more even and predictable.
- Before, they did ALL the message tasks, then ALL the change tasks, etc. This works fine but is confusing/uneven/less-predictable because each type of task takes a different amount of time.
Test Plan:
- Added a new repository.
- Saw all of its "message" steps queue at priority 4000.
- Saw followups queue at priority 2000.
- Saw progress generally "finish what you started" -- go through the queue one commit at a time, instead of one type of task at a time.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11677
Differential Revision: https://secure.phabricator.com/D16585
Summary:
Fixes T11675. This capability was erroneously (probably?) removed in D14766.
This search implementation (which uses exact match) probably isn't perfect for all cases of "text" fields, but empirically it seems to be what a significant number of users are after.
Test Plan:
Searched for a custom text field value.
{F1843383}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11675
Differential Revision: https://secure.phabricator.com/D16582
Summary:
Ref T11672. Depends on D16577. When establishing a connection from a webserver context, try to use persistent connections.
The hope is that this will fix outbound port exhaustion issues experienced on repository hosts handling large queue volumes.
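Under the hood this relies on the standard `mysqli` mechanism, where prefixing the host with `p:` requests a pooled connection; roughly:
```lang=php
// With "p:" the connection is returned to mysqli's pool at the end of the
// request instead of being closed, so later requests can reuse it.
$conn = new mysqli('p:'.$host, $user, $pass, $database, $port);
```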
Test Plan:
Added this to a page:
```lang=php
$tables = array(
new PhabricatorUser(),
new ManiphestTask(),
new DifferentialRevision(),
new PhabricatorRepository(),
new PhabricatorPaste(),
);
$ids = array();
foreach ($tables as $table) {
$conn = $table->establishConnection('r');
$cid = queryfx_one(
$conn,
'SELECT CONNECTION_ID() cid');
$ids[get_class($table)] = $cid['cid'];
}
var_dump($ids);
```
Reloaded the page a bunch of times and saw no reissued connections (the pool seems to keep a particular connection bound to a particular database), but did see connection reuse across requests.
That is, across reloads the same connection IDs appeared, but the same connection ID never appeared twice in the same request. This is what we want.
Also googled for issues with persistent connections, but everything I found was unconcerning and obscure (local variables and other very complex state that we don't use), and a bunch of the docs are reassuring (transactions, etc., get reset properly).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11672
Differential Revision: https://secure.phabricator.com/D16578
Summary:
Ref T11613. In D16503/T11598 I refined the setup flow to improve messaging for early-stage setup issues, but failed to fully untangle things.
We sometimes still try to access a cache which uses configuration before we build configuration, which causes an error.
Instead, store "are we in flight / has setup ever worked?" in a separate cache which doesn't use the cache namespace. This stops us from trying to read config before building config.
Test Plan:
Hit bad extension error with a fake extension, got a proper setup help page:
{F1812803}
Solved the error, reloaded, broke things again, got a "friendly" page:
{F1812805}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11613
Differential Revision: https://secure.phabricator.com/D16542
Summary: Fixes T11607.
Test Plan:
- Made a comment using `{key ...}`.
- Used `bin/mail show-outbound --id X --dump-html > test.html` to review HTML:
{F1805304}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11607
Differential Revision: https://secure.phabricator.com/D16523
Summary:
Fixes T11583.
- When users run `bin/storage upgrade` for the first time on a new install, we currently give them a prompt which feels rough and which they can only reasonably ever answer "yes" to.
- We generally use cautionary language ("found issues with schema") in this workflow. Adjustments are now routine, so use more neutral and progress-oriented language ("found adjustments to apply").
Test Plan:
- Ran `bin/storage upgrade --namespace kappa123`, got an adjustment using neutral language without prompting.
- Dropped a key, ran `bin/storage upgrade`, got normal workflow (but with more neutral language).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11583
Differential Revision: https://secure.phabricator.com/D16509
Summary: Fixes T11593. We ask for a list of values when searching for custom "link" fields, but don't handle it correctly when actually constructing a query.
Test Plan:
Added this custom field:
```
{
"mycompany.target-version": {
"name": "Target Version",
"type": "link",
"search": true
}
}
```
Set a task to "beta". Let daemons index it. Queried for:
```
constraints: {
"custom.mycompany.target-version": [
"beta"
]
}
```
Got just one result back.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11593
Differential Revision: https://secure.phabricator.com/D16508
Summary: Ref T11589. Provide a way for scripts to say "just continue if database config fails", and use it in `bin/config` and `bin/storage`.
Test Plan:
- Broke database config.
- Ran `bin/config`, worked fine.
- Ran `bin/storage`, got helpful "set up the database" message.
- Ran `bin/repository`, got fatal.
- Ran normal site with valid/invalid config, got proper feedback.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16502
Summary:
Ref T11589. This runs:
- preflight checks (critical checks: PHP version stuff, extensions);
- configuration;
- normal checks.
The PHP checks are split into critical ("bad version") and noncritical ("sub-optimal config").
I tidied up the extension checks slightly, we realistically depend on `cURL` nowadays.
Test Plan:
- Faked a preflight failure.
- Hit preflight check.
- Got expected error screen.
- Loaded normal pages.
- Hit a normal setup check.
- Used DarkConsole "Startup" tab to verify that preflight checks take <1ms to run (we run them on every page without caching, at least for now, but they only do trivial checks like PHP versions).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16500
Summary: Caught one of these while reviewing docs, grepped for the other one.
Test Plan: `grep`, reading
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16498
If the namespace is something like "test_example", we currently fail to
renamespace the dump.
(Cowboy committing this since this is currently blocking a data export.)
Test Plan:
- Renamespaced a local dump, examined the output, saw 60 create / 60 use, reimported it.
- Will export in production.
Auditors: chad
Summary:
Fixes T11577. When we connect to a host and try to select a database which does not exist, we currently treat it as though the host wasn't reachable.
This isn't correct, and prevents storage from being initialized while already in cluster mode, since the "config" database won't exist yet the first time we connect.
Instead, distinguish between `AphrontSchemaQueryException` (thrown on connection if the requested database is not present) and other errors.
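In sketch form (simplified; `openConnection()` stands in for whatever the real connection path does):
```lang=php
try {
  $ref->openConnection();
} catch (AphrontSchemaQueryException $ex) {
  // We reached the host but the requested database ("config", on a fresh
  // install) does not exist yet. This is fine: storage just needs to be
  // initialized.
} catch (Exception $ex) {
  // Anything else really does mean the host is unreachable.
  throw $ex;
}
```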
Test Plan:
- Put Phabricator into cluster database mode (`cluster.databases = ...`).
- Swapped `storage.default-namespace` to force initialization of a new install.
- Ran `bin/storage upgrade`.
- Before patch: Immediate fatal about unreachablility.
- After patch: Database initialized.
- Also ran initialization steps in traditional single-host mode (`cluster.databases` empty, `mysql.host` configured).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11577
Differential Revision: https://secure.phabricator.com/D16489
Summary:
Ref T10867 for original use case. This workflow provides a plausible way for administrators to stop the daemons when performing upgrades or maintenance, then bring those daemons back up without resulting in the failure of builds that were running at the time.
On our organization's phab install, builds are running 24/7. The majority of these builds last for at least several minutes, and contain build steps which fail if interrupted and then resumed, as happens when turning daemons on and off.
Instead of allowing these build steps to resume execution as normal, this workflow will instruct active builds to restart their entire build process instead of just resuming whichever step they were on.
Test Plan:
contrived a build plan which would fail if resumed partway through:
- lease a working copy
- command `touch restart_{build.id}`
- command `test -e restart_{build.id} && rm restart_{build.id} && sleep 60`
followed old procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/phd start`
- saw the builds fail
followed new procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/harbormaster restart --active`
- `./bin/phd start`
- saw the builds pass
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Maniphest Tasks: T10867
Differential Revision: https://secure.phabricator.com/D16485
Summary: Ref T11132, significantly cleans up the Config app, new layout, icons, spacing, etc. Some minor todos around re-designing "issues", mobile support, and maybe another pass at actual Group pages.
Test Plan: Visit and test every page in the config app, set new items, resolve setup issues, etc.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16468
Summary: Ref T11132. This gets rid of the red bar for admins and instead shows a new menu item next to notifications/chat if there are unresolved configuration issues. Menu goes away if there are no issues. May move this later into the bell icon, but I think this might be the right place to start, especially for NUX and updates. Maybe limit the number of items?
Test Plan:
Tested with some, lots, and no config issues.
{F1790156}
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16461
Summary: Previously, the chatbot docs instructed users to get certificates for the conduit API and put the cert in a `conduit.cert` config key. In order to get the chatbot to work, I needed to instead get an API key and put it in the `conduit.token` config entry.
Test Plan: Doc fix. Tried the new documented way and it worked.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D16443
Summary: Fixes T11508. The config entry `remarkup.ignored-object-names` already contains a blacklist of object names that should be ignored in the web UI. This change makes that blacklist also apply to the chatbot. This makes it possible to have a chatbot ignore things like V1, V2, Q1 and any other phrases the user may not want to generate links to objects.
Test Plan: Create objects (tasks, slowvotes, etc.) then mention the object names in chat (with the bot running). The bot should respond with helpful links to the given objects. Then add the object names to the blacklist through the config web UI. This apparently triggers the bot to restart itself. Then mention the object names in chat again. The bot should no longer respond with links because those object names have been added to the blacklist regex.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: epriestley
Maniphest Tasks: T11508
Differential Revision: https://secure.phabricator.com/D16442
Summary:
Ref T11522. This tries to reduce the cost of rewriting a repository by making handles smarter about rewritten commits.
When a handle references an unreachable commit, try to load a rewrite hint for the commit. If we find one, change the handle name to "OldHash > NewHash" to provide a strong hint that the commit was rewritten and that copy/pasting the old hash (say, to the CLI) won't work.
I think this notation isn't totally self-evident, but users can click it to see the big error message on the page, and it's at least obvious that something weird is going on, which I think is the important part.
Some possible future work:
- Not sure this ("Recycling Symbol") is the best symbol? Seems sort of reasonable but maybe there's a better one.
- Putting this information directly on the hovercard could help explain what this means.
Test Plan: {F1780719}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11522
Differential Revision: https://secure.phabricator.com/D16437
Summary: Switches over to new property UI boxes, splits core and apps into separate pages. Move Versions into "All Settings". I think there are some docs I likely need to update here as well.
Test Plan: Click on each item in the sidebar, see new headers.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D16429
Summary:
Fixes T11490. Currently, this query cannot use a key, and the table size may be quite large.
Adjust the query so it can use a key for both selection and ordering, and add that key.
Test Plan: Ran `EXPLAIN` on the old query in production, then added the key and ran `EXPLAIN` on the new query. Saw key in use, and "rows" examined drop from 29,273 to 15.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11490
Differential Revision: https://secure.phabricator.com/D16423
Summary:
Ref T6996. Depends on D16407. This does the same stuff as D16407, but for `bin/storage renamespace`. In particular:
- Support writing directly to a file (so we can get good errors on failure).
- Support in-process compression.
Also add support for reading out of a `storage dump` subprocess, so we don't have to do a dump-to-disk + renamespace + compress dance and can just stream out of MySQL directly to a compressed file on disk.
This is used in the second stage of instance exports (see T7148).
It would be nice to share more code with `bin/storage dump`, and possibly to just make this a flag for it, although we still do need to do the file-based version when importing (vs exporting). I figured that was better left for another time.
Test Plan:
Ran `bin/storage renamespace --live --output x --from A --to B --compress --overwrite` and similar commands.
Verified that a compressed, renamespaced dump came out of the other end.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6996
Differential Revision: https://secure.phabricator.com/D16410
Summary:
Ref T6996. If you do this kind of thing in the shell, you don't get a good error by default if the `dump` command fails:
```
$ bin/storage dump | gzip > output.sql.gz
```
This can be worked around with some elaborate bash tricks, but they're really clunky and unintuitive.
We also need to do this in several places (while writing backups; while performing exports), and I don't want to copy clunky bash tricks all over the codebase.
Instead, provide `--output` and `--compress` flags which just do this processing inside `bin/storage dump`. It will fail appropriately if any of the underlying operations fail. This also makes the write a little safer (refuses to overwrite) and the code more reusable.
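A minimal sketch of what `--output` / `--compress` do internally (assumptions: we stream the dump ourselves, and `$dump_chunks` is an illustrative iterator over the `mysqldump` output):
```lang=php
if (file_exists($output)) {
  throw new Exception(
    pht('File "%s" already exists, refusing to overwrite it.', $output));
}

$handle = $compress ? gzopen($output, 'wb') : fopen($output, 'wb');
if ($handle === false) {
  throw new Exception(pht('Unable to open "%s" for writing.', $output));
}

foreach ($dump_chunks as $chunk) {
  $bytes = $compress ? gzwrite($handle, $chunk) : fwrite($handle, $chunk);
  if ($bytes === false) {
    // Unlike a shell pipe, a failed write here is a hard error.
    throw new Exception(pht('Failed writing dump to "%s".', $output));
  }
}

if ($compress) {
  gzclose($handle);
} else {
  fclose($handle);
}
```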
Test Plan:
- Did three dumps, with no flags, `--output`, and `--output --compress`.
- Verified all three took similar amounts of time and were identical except for "Date Exported" timestamps in comments (except that the compressed one was compressed).
- Used `gunzip` to examine the compressed one, verified it was really compressed.
- Faked a write error, saw proper command behavior (clean up file + exit with error).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6996
Differential Revision: https://secure.phabricator.com/D16407
Summary:
The cluster synchronization code runs either actively (before returning a response to `git clone`, for example) or passively (routinely, as the daemons update repositories).
The active sync runs as the web user (if running `git clone http://...`) or the VCS user (if running `git clone ssh://...`). But the passive sync runs as the daemon user.
All of these sync processes need to run actual commands as the daemon user (`git fetch ...`).
For the active ones, we must `sudo`.
For the passive ones, we're already the right user. We run the same code, and end up trying to sudo to ourselves, which `sudo` isn't happy about by default.
Depending on how `sudo` is configured and which users things are running as, this might work anyway, but it's silly, and if it doesn't work it requires you to go make non-obvious, weird config changes that are unintuitive and somewhat nonsensical. This is probably worse on the balance than adding a bit of complexity to the code.
Instead, test which user we're running as. If it's already the right user, don't sudo.
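Roughly (assuming `phd.user` holds the configured daemon user; the command strings are illustrative):
```lang=php
$daemon_user = PhabricatorEnv::getEnvConfig('phd.user');
$current_user = idx(posix_getpwuid(posix_geteuid()), 'name');

if ($current_user == $daemon_user) {
  // Passive sync: we are already the daemon user, so run the fetch directly.
  $command = 'git fetch ...';
} else {
  // Active sync from the web or VCS user: escalate with sudo as before.
  $command = 'sudo -E -n -u '.$daemon_user.' -- git fetch ...';
}
```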
Test Plan:
- Ran `bin/repository update --trace` as daemon user, saw no more `sudo`.
- Ran a `git clone` to make sure that didn't break.
Reviewers: chad, avivey
Reviewed By: avivey
Differential Revision: https://secure.phabricator.com/D16391
Summary:
Ref T11458. Depends on D16388. Currently, we're very aggressive about closing connections in the taskmaster daemons.
This can end up taking up a lot of resources. In particular, because the outgoing port for an outbound connection normally cannot be reused for 60 seconds after the connection closes, we may exhaust outbound ports on the host if there's a big queue full of stuff that's being processed very quickly.
At a minimum, we are //always// holding open a `worker` connection, which we always need again right away. So even in the best case we end up opening/closing it about once per second, and each daemon takes up ~60 outbound ports when it should take up ~1.
So, make two adjustments:
- First, only close connections which we haven't issued a query on in the last 60 seconds. This should prevent us from closing connections that we'll need again immediately in most cases. In the worst case, we shouldn't be eating up any extra ports under default TCP behavior.
- Second, explicitly close connections. We were relying on implicit/GC behavior (maybe as a holdover from very long ago, before we got connection wrappers in place?), which probably did about the same thing but isn't as predictable and can't be profiled or instrumented.
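A sketch of the close policy (accessor names are invented, just to show the shape):
```lang=php
$now = time();
foreach ($connections as $key => $connection) {
  if (($now - $connection->getLastActiveEpoch()) < 60) {
    // Queried recently; we will likely need it again immediately, so keep it.
    continue;
  }
  // Close explicitly instead of waiting for garbage collection, so the
  // behavior is predictable and can be profiled.
  $connection->close();
  unset($connections[$key]);
}
```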
Test Plan:
This is somewhat difficult to test completely convincingly in isolation since the problem behavior depends on production scales and the workload, and to some degree on configuration.
I tested that this stuff basically works by adding logging to connect/close and running the daemons, verifying that they churned connections a lot before this change (e.g., ~1/s even at no load) and churn rarely afterward (e.g., almost never at no load).
I ran some workload through them to make sure I didn't completely break anything.
The best real test is just seeing how production responds. Current inbound/outbound connections on `secure001` are 1,200:
```
secure001 $ netstat -t | grep :mysql | wc -l
1164
```
Current outbound from `repo001` are 18,600:
```
repo001 $ netstat -t | grep :mysql | wc -l
18663
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11458
Differential Revision: https://secure.phabricator.com/D16389
Summary:
Fixes T11446. We can raise the misleading error:
> No valid databases are configured!
...when a valid master is configured but unreachable.
Instead, more carefully raise either "nothing is configured" or "nothing is reachable".
Test Plan: Configured only a master, artificially severed it, got "nothing is reachable" instead of "nothing is configured".
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11446
Differential Revision: https://secure.phabricator.com/D16386
Summary:
Fixes T11453. Currently, commit message summaries are limited to 80 bytes. This may only be 20-40 characters for CJK languages or languages with Cyrillic script.
Increase storage size to 255, then truncate to the shorter of 255 bytes or 80 glyphs. This preserves the same behavior for latin languages, but is less tight for Russian, etc.
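Assuming `PhutilUTF8StringTruncator` behaves as it does elsewhere, the truncation is roughly:
```lang=php
// Keep whichever limit bites first: 255 bytes (column size) or 80 glyphs
// (display length), so Latin text behaves as before while Cyrillic/CJK text
// gets more room.
$summary = id(new PhutilUTF8StringTruncator())
  ->setMaximumBytes(255)
  ->setMaximumGlyphs(80)
  ->truncateString($message);
```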
Some minor additional changes:
- Provide a way to ask "how much data fits in this column?" so we don't have to duplicate column lengths across summary checks or UI errors like "title too long".
- Remove the `text80` datatype, since no other columns use it and we have no use cases (or likely use cases) for it.
Test Plan:
- Made a commit with a Cyrillic title, saw reasonable summarization in UI:
{F1757522}
- Added and ran unit tests.
- Grepped for removed `SUMMARY_MAX_LENGTH` constant.
- Grepped for removed `text80` data type.
Reviewers: avivey, chad
Reviewed By: avivey
Subscribers: avivey
Maniphest Tasks: T11453
Differential Revision: https://secure.phabricator.com/D16385
Summary:
Ref T11404. This improves things by about 10%:
- Use `PhutilClassMapQuery`, which has slightly better caching.
- Do a little less work to generate pretty error messages.
- Make the "disabled" code a little faster (and sort of clearer, too?) by doing less fancy stuff.
These are pretty minor adjustments and not the sort of optimizations I'd make normally, but this code gets called ~100x (once per revision) and generates ~10 fields normally, so even small savings can amount to something.
(I also want to try to make `arc` faster in the next update, and improving Conduit performance helps with that.)
Test Plan: Ran `differential.revision.search`, saw cost drop from ~195ms to ~170ms locally.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16355
Summary:
Ref T11404. On my system, this improves performance by 10-15% for `differential.revision.search`.
`PhutilTypeSpec` provides high quality typechecking and is great for user-facing things that need good error messages.
However, it's also a bit slow, and pointless here (the API is internal and it only has one possible option).
I think I added this after writing `checkMap` just because I wanted to use it more often. My desire is sated after finding many reasonable ways to use it to give users high-quality error messages about things like configuration files.
Test Plan: Profiled `differential.revision.search` before and after change, saw wall time drop from ~220ms to ~195ms.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16354
Summary:
Ref T11404. Depends on D16350.
Currently, custom fields can issue "N+1" queries in some cases, so querying 100 revisions issues 100 extra queries.
This affects all `*.search` endpoints for objects with custom fields, and some older endpoints (notably `differential.query`).
This change bulk loads "normal" custom fields, which gets rid of some of these queries. Instead of loading fields for each object, we build a big list of all fields and load them all at once.
The next change will tackle the remaining inefficient edge queries.
Test Plan:
- Configured a custom field with normal database storage in Differential.
- Ran `differential.query`, looking at custom fields in results for correctness.
- Ran `differential.revision.search`, looking at custom fields in results for correctness.
- In both cases, observed queries drop from `3N` to `2N` (all the "normal" custom field stuff got bulk loaded).
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16351
Summary:
Ref T11404. Currently, SearchEngineAttachments can bulk-load data but SearchEngineExtensions can not.
This leads to poor performance of custom fields. See T11404 for discussion.
This changes the API to support a bulk load + format pattern like the one Attachments use. The next change will use it to bulk-load custom field data.
Test Plan:
- Ran `differential.query`, `differential.revision.search` as a sanity check.
- No behavioral changes are expected
- See next revision.
Reviewers: yelirekim, chad
Reviewed By: chad
Maniphest Tasks: T11404
Differential Revision: https://secure.phabricator.com/D16350
Summary: Fixes T11392. If some tasks are restricted, we only have PHIDs for them, not objects. Just use the PHIDs instead.
Test Plan: {F1741335}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11392
Differential Revision: https://secure.phabricator.com/D16345
Summary:
Ref T4788. This gives us a new level of graceful degradation, so now we show:
- Zero through 100 connected tasks: whole graph.
- More than 100 connected tasks, but fewer than 100 direct parents/subtasks: just parents and subtasks, with "..." to hint that the graph is cut off.
- More than 100 parents and children: just the "sorry, too much stuff" error message.
Test Plan: {F1740882}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16344
Summary:
Fixes T11386. Ref T4788.
- Apparently fix weird strikethrough effect? Spooky!
- Provide a little icon hint in the left column about which tasks are direct parents/children, vs just reachable somehow. I don't think this is super useful/important, but seems maybe nice?
Test Plan: {F1740779}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788, T11386
Differential Revision: https://secure.phabricator.com/D16342
Summary:
Ref T8116. Partially scavenged from D14152. This roughs in a new Packages application for Arcanist extensions and third-party applications, and adds a "Publisher" object.
A "Publisher" represents an individual or entity who is publishing a package, like "Phacility". It's explicitly //not// necessarily the original author -- just the primary entity vouching for the safety of the code.
A publisher just has a name and a unique key for now. For example, Phacility might have "Phacility" and "phacility", respectively.
Unique keys are immutable, e.g., the package "phacility/arcanist" will always be exactly the same package by exactly the same publisher.
Test Plan: {F1731621}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T8116
Differential Revision: https://secure.phabricator.com/D16314
Summary:
`mysql` has the magic feature of ignoring port arguments and using the socket when connecting to localhost.
This flag makes it not do that.
Test Plan: `./bin/storage shell`, execute `status`, see `Connection: localhost via TCP/IP`.
Reviewers: joshuaspence, #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D16317
Summary:
Ref T4788. One install has some particularly impressive task graphs which are thousands of nodes large.
The current graph is pretty broken in these cases. For now, just render a "too big to show" message. In the future, I'd plan to finesse this (e.g., show parents/children, show links to parents/children, etc).
Test Plan:
- Viewed a normal task.
- Set limit to 3, viewed a task with graph size 6, saw an error message.
- Viewed a revision stack graph (unaffected).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16295
Summary:
Fixes T8911. This corrects several issues which could crop up if a calendar event query matched more results than the query limit:
- The desired order was not applied by the SearchEngine -- it applies the first builtin order instead. Provide a proper builtin order.
- When we generate ghosts, we can't do limiting in the database because we may select and then immediately discard a large number of parent events which are outside of the query range.
- For now, just don't limit results to get the behavior correct.
- This may need to be refined eventually to improve performance.
- When trimming events, we could trim parents and fail to generate ghosts from them. Separate parent events out first.
- Try to simplify some logic.
Test Plan: An "Upcoming" dashboard panel with limit 10 and the main Calendar "Upcoming Events" UI now show the same results.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T8911
Differential Revision: https://secure.phabricator.com/D16289
Summary:
Ref T9275. Swaps Calendar over to modular transactions. Theoretically, this has almost no effect on anything.
Ref T10633. I didn't actually do anything here yet, but this gets us ready to put timestamps in email.
Test Plan: Created and edited a bunch of events, nothing seemed catastrophically broken.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9275, T10633
Differential Revision: https://secure.phabricator.com/D16286
Summary:
Ref T11309. In that task, a user misunderstood two parts of this error:
- They took "exception" to mean "unexpected failure", when it was intended to mean "rare circumstance".
- They interpreted the internal ID number of a commit to mean that Phabricator was malfunctioning.
Make the language of this condition more direct, explaining what the situation means in greater detail.
Additionally, we would previously re-throw this exception, which would make the daemon exit, wait a moment, and restart. This was normal and expected.
When //unexpected// failures occur, it's important to do this: it prevents a daemon failing in a loop from causing too many side effects (e.g., a limit of 1 email per 5 seconds instead of thousands per second).
When an expected, permanent failure occurs, we do not need to do this: the task will not be retried. I just did it because it was slightly more consistent ("failures restart daemons") and we had few permanent failure types at the time.
We have more now, and restarting the daemons generates some additional logs which have the potential to confuse. Cycling the daemon also (intentionally) reduces the rate at which we process tasks, which can be bad for permanent failures like "deleted commit" because users can delete a huge number of commits and possibly clog up the queue with cycle-after-failure actions.
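The resulting policy looks roughly like this (a sketch; the exception and function names are illustrative, not the actual daemon code):
```
class PermanentFailure(Exception):
    """The task can never succeed (e.g., the commit was deleted)."""

def run_task(task, log):
    try:
        task()
    except PermanentFailure as ex:
        # Expected and permanent: cancel the task, keep the daemon running.
        log("Task cancelled: {}".format(ex))
    except Exception:
        # Unexpected: re-raise so the daemon exits, pauses, and restarts,
        # throttling the side effects of a task that fails in a loop.
        raise
```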
Test Plan:
Tried to process a deleted commit, saw a new message:
```
2016-07-11 9:30:22 AM [STDE] <VERB> PhabricatorTaskmasterDaemon Task 1428658 was cancelled: Commit "R55:6c46b7d0fb82a859ca3f87a95dc8dcceef8088c9" (with internal ID "282161") is no longer reachable from any branch, tag, or ref in this repository, so it will not be imported. This usually means that the branch the commit was on was deleted or overwritten.
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11309
Differential Revision: https://secure.phabricator.com/D16268
Summary:
Ref T3554. Makes `bin/worker cancel --class <classname>` work (cancel all tasks with that type).
This is useful in development if your queue is full of a bunch of gunk, and a need has occasionally arisen in production environments (usually along the lines of "one option is to cancel everything and move on").
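Conceptually, cancelling by class is just a filtered bulk cancel (a sketch; the queue shape and field names are illustrative):
```
def cancel_tasks_by_class(queue, class_name):
    # Cancel every queued task whose worker class matches.
    cancelled = []
    for task in queue:
        if task["class"] == class_name and task["status"] == "queued":
            task["status"] = "cancelled"
            cancelled.append(task["id"])
    return cancelled
```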
Test Plan: Ran `bin/worker cancel` to cancel blocks of tasks by class name.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T3554
Differential Revision: https://secure.phabricator.com/D16267
Summary:
Fixes T11274. When task titles are long, we currently wrap stuff and the trace graph renders real weird.
Instead, prevent task/revision titles from wrapping/overflowing.
(This works in a slightly weird way, and `text-overflow: ellipsis;` has no apparent effect on any of the containers.)
Test Plan: {F1712394}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11274
Differential Revision: https://secure.phabricator.com/D16233
Summary:
Ref T5267. Two general changes:
- Make string extraction use a cache (sketched below), so that it doesn't take several minutes every time you change something. Minor updates now only take a few seconds (like `arc liberate` and similar).
- Instead of dumping a sort-of-template file out, write out to a cache (`src/.cache/i18n_strings.json`). I'm planning to add more steps to read this cache and do interesting things with it (emit translatewiki strings, generate or update standalone translation files, etc).
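A rough sketch of the cache update, assuming one entry per file keyed by content hash (the extractor and entry format are placeholders):
```
import hashlib
import json
import os

CACHE_PATH = "src/.cache/i18n_strings.json"

def extract_strings(path):
    # Placeholder: real extraction parses pht() calls out of the source.
    return []

def update_cache(files):
    cache = {}
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            cache = json.load(f)
    for path in files:
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        entry = cache.get(path)
        if entry is not None and entry["hash"] == digest:
            continue  # unchanged since the last run; skip re-extraction
        cache[path] = {"hash": digest, "strings": extract_strings(path)}
    os.makedirs(os.path.dirname(CACHE_PATH), exist_ok=True)
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)
```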
Test Plan:
- Ran `bin/i18n extract`.
- Ran it again, saw it go a lot faster.
- Changed stuff, ran it, saw it only look at new stuff.
- Examined caches.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16227
Summary: Fixes T11159. We get two different values here (`NULL` and `0`) with different meanings.
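For illustration, assuming the value in question is a replica-lag reading where `NULL` and `0` must not be conflated (an assumption based on the test plan below):
```
def describe_replica_delay(delay):
    # Assumption: NULL (None) means replication is not running at all,
    # while 0 means the replica is running and fully caught up.
    if delay is None:
        return "Replication is not running."
    if delay == 0:
        return "Replica is up to date."
    return "Replica is {} seconds behind.".format(delay)
```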
Test Plan:
- Ran `STOP SLAVE;`.
- Saw this:
{F1710181}
- Ran `START SLAVE;`.
- Back to normal.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11159
Differential Revision: https://secure.phabricator.com/D16225
Summary: Ref T4788. It's not easy to tell at a glance which objects are open vs closed. Try to make that a bit more clear. This could probably use some more tweaking.
Test Plan: {F1708330}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16219
Summary:
Ref T4788. As it turns out, our tasks are very tightly connected.
Instead of loading every parent/child task, then every parent/child of those tasks, etc., etc., only load tasks in the "same direction" that we're already heading.
For example, we load children of children, but not parents of children. And we load parents of parents, but not children of parents.
Basically we only go "up" and "down" now, but not "out" as much. This should reduce the gigantic multiple-thousand-node graphs currently shown in the UI.
I still discover the whole graph for revisions, because I think it's probably more useful and always much smaller. That might need adjustment too, though.
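The traversal is roughly this (a sketch; the parents/children maps are illustrative inputs, not the real edge queries):
```
from collections import deque

def walk(seed, edges):
    # Follow edges in a single direction only (e.g., parent-of-parent).
    seen = {seed}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for other in edges.get(node, ()):
            if other not in seen:
                seen.add(other)
                queue.append(other)
    return seen

def load_graph(seed, parents, children):
    # "Up" and "down" are discovered independently; we never load parents
    # of children or children of parents, so the graph stays much smaller.
    return walk(seed, parents) | walk(seed, children)
```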
Test Plan: Seems fine locally??
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16218
Summary:
Ref T4788. This fixes all the bugs I was immediately able to catch:
- "Directory-Like" graph shapes could draw too many vertical lines.
- "Reverse-Directory-Like" graph shapes could draw too few vertical lines.
- Terminated, branched graph shapes drew the very last line to the wrong place.
This covers the behavior with tests, so we should be able to fix more stuff later without breaking anything.
Test Plan:
- Added failing tests and made them pass.
{F1708158}
{F1708159}
{F1708160}
{F1708161}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16216
Summary: Ref T4788. This seems reasonable locally, but not sure how it will feel on real data. Might need some tweaks, or might just be a terrible idea.
Test Plan: {F1708059}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16214
Summary:
Ref T4788. This separates the revision graph view into a base class with core logic and a revision class with Differential-specific logic, so I can subclass it in Maniphest, etc., and try using it in other applications to show similar graphs.
Not sure if we'll stick with it, but even if we don't this makes the code a bit cleaner and gets custom rendering logic out of the RevisionViewController, which is nice.
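The split looks roughly like this (class and method names are hypothetical, not the actual Phabricator classes):
```
class ObjectGraphView:
    """Core graph logic, shared across applications."""

    def __init__(self, graph):
        self.graph = graph

    def render(self):
        return [self.render_node(node) for node in self.graph]

    def render_node(self, node):
        raise NotImplementedError("subclasses provide app-specific rendering")

class RevisionGraphView(ObjectGraphView):
    """Differential-specific rendering of the revision stack."""

    def render_node(self, node):
        return "D{}: {}".format(node["id"], node["title"])
```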
Test Plan: Viewed revisions, saw the stack UI completely unchanged.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16213
Summary: Ref T10628. Turn these into tabs in a single box, since "local commits" and "similar revisions" are of particularly rare use.
Test Plan: {F1707196}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10628
Differential Revision: https://secure.phabricator.com/D16209
Summary:
Ref T11232. The cluster connection pathway specifies a timeout when connecting, but this connection pathway does not. (I'm not sure if we just never did or if it got lost at some point.)
Soon, T11044 will obsolete this and unify the database connection pathways, but that's a more complicated change.
I'm not sure if this will fix T11232, but it can't hurt.
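In effect, the non-cluster pathway now does the equivalent of this (a sketch; the timeout value is illustrative):
```
import socket

def connect_with_timeout(host, port, timeout=5.0):
    # Without a timeout, a dead or unreachable host can hang the request;
    # with one, the connection attempt fails fast instead.
    return socket.create_connection((host, port), timeout=timeout)
```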
Test Plan: Put a `throw` on timeout specifications. Before the change: did not hit it in non-cluster configurations. After the change: hit it.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11232
Differential Revision: https://secure.phabricator.com/D16194
Summary: Fixes T11201.
Test Plan:
Created bogus edges like this:
```
INSERT INTO edge (src, type, dst, dateCreated, seq) values ('PHID-TASK-vnddativbialb5p6ymis', 999999, 'quack', UNIX_TIMESTAMP(), 1);
```
Then ran `bin/remove destroy` on the relevant object.
Before the patch, destruction halted after hitting the bad edge.
After the patch, a warning is emitted but destruction continues.
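The new behavior is roughly this (a sketch; the edge store and handler map are placeholders):
```
def destroy_edges(edges, handlers, log):
    for edge in edges:
        handler = handlers.get(edge["type"])
        if handler is None:
            # Previously this raised and halted destruction; now we warn
            # about the unknown edge type and keep going.
            log('Ignoring edge of unknown type "{}".'.format(edge["type"]))
            continue
        handler(edge)
```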
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11201
Differential Revision: https://secure.phabricator.com/D16171
Summary:
Ref T11179. This splits "Edit Blocking Tasks" into two options now that we have more room ("Edit Parent Tasks", "Edit Subtasks").
This also renames "Blocking" tasks to "Subtasks", and "Blocked" tasks to "Parent" tasks. My goals here are:
- Make the relationship direction more clear: it's more clear which way is up with "parent" and "subtask" at a glance than with "blocking" and "blocked" or "dependent" and "dependency".
- Align language with "Create Subtask".
- To some small degree, use more flexible/general-purpose language, although I haven't seen any real confusion here.
Fixes T6815. I think I narrowed this down to two issues:
- Just throwing a bare exception (we now return a dialog explicitly).
- Not killing open transactions when the cycle check fails (we now kill them; see the sketch after this list).
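The cycle check itself is roughly a reachability test (a sketch; the edge map is an illustrative input, not the real edge query):
```
def would_create_cycle(children, parent, new_child):
    # Adding the edge parent -> new_child closes a cycle exactly when the
    # parent is already reachable by walking down from the proposed child.
    stack = [new_child]
    seen = set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children.get(node, ()))
    return False
```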
Test Plan:
- Edited parent tasks.
- Edited subtasks.
- Tried to introduce graph cycles, got a nice error dialog.
{F1697087}
{F1697088}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6815, T11179
Differential Revision: https://secure.phabricator.com/D16166