Summary:
Ref T11741. On recent-enough versions of MySQL, we would prefer to use InnoDB for fulltext indexes instead of MyISAM.
Allow `bin/storage adjust` to read actual and expected table engines, and apply adjustments as necessary.
We have one existing bad table that uses the wrong engine, `metamta_applicationemail`. This change corrects that table.
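As a rough sketch (names illustrative, not the actual adjust workflow code), the comparison works like this:
```lang=php
// Illustrative sketch: read the engine a table actually uses from
// information_schema, compare it against the expected engine, and
// apply an adjustment if they differ.
$row = queryfx_one(
  $conn,
  'SELECT ENGINE FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s',
  $database_name,
  $table_name);

$expect_engine = 'InnoDB';
if ($row && ($row['ENGINE'] !== $expect_engine)) {
  queryfx(
    $conn,
    'ALTER TABLE %T.%T ENGINE = %s',
    $database_name,
    $table_name,
    $expect_engine);
}
```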
Test Plan:
- Ran `bin/storage upgrade`.
- Saw the adjustment phase apply this change properly:
```
>>>[463] <query> ALTER TABLE `local_metamta`.`metamta_applicationemail` COLLATE = 'utf8mb4_bin', ENGINE = 'InnoDB'
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11741
Differential Revision: https://secure.phabricator.com/D16941
Summary:
Ref T11741. InnoDB uses a stopwords table instead of a stopwords file.
During `storage upgrade`, synchronize the table from the stopwords file on disk.
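Roughly, the synchronization is a diff between the file and the table (a sketch under assumed names; the real table and column names may differ):
```lang=php
// Sketch: make the stopwords table exactly mirror the stopwords file.
$words = phutil_split_lines(
  Filesystem::readFile($stopwords_path),
  false); // false: discard line endings.
$words = array_fuse($words);

$rows = queryfx_all($conn, 'SELECT value FROM %T', $table);
$have = array_fuse(ipull($rows, 'value'));

// Remove any garbage which is in the table but not in the file.
foreach (array_diff_key($have, $words) as $garbage => $ignored) {
  queryfx($conn, 'DELETE FROM %T WHERE value = %s', $table, $garbage);
}

// Add any stopwords which are in the file but not in the table.
foreach (array_diff_key($words, $have) as $word => $ignored) {
  queryfx($conn, 'INSERT INTO %T (value) VALUES (%s)', $table, $word);
}
```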
Test Plan:
- Ran `storage upgrade`.
- Ran `select * from stopwords`, saw stopwords.
- Added some garbage to the table.
- Ran `storage upgrade`, saw it remove it.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11741
Differential Revision: https://secure.phabricator.com/D16940
Summary:
Ref T11044.
- Use shorter lock names. Fixes T11916.
- Granular connection exceptions now always raise as a more generic "Cluster" exception, even for a single host, because less code now special-cases running just one database.
Test Plan:
- Configured bad `mysql.port`, ran `bin/storage upgrade`, got a more helpful error message.
- Ran `bin/storage upgrade --trace`, saw shorter lock names.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044, T11916
Differential Revision: https://secure.phabricator.com/D16924
Summary: Fixes T11845. Users can still embed a text panel on the home page to give it some ambiance.
Test Plan: Wrote an autoplay video as a comment, saw it in feed. Before change: autoplay. After change: no autoplay. On the task itself: still autoplay.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11845
Differential Revision: https://secure.phabricator.com/D16920
Summary:
Ref T11044. Few issues here:
- The `PhutilProxyException` is missing an argument (hit this while in read-only mode); see the sketch after this list.
- The `$ref_key` is unused.
- When you add a new master to an existing cluster, we can incorrectly apply `.php` patches which we should not reapply. Instead, mark them as already-applied.
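For the first bullet, `PhutilProxyException` wraps an underlying exception, so it needs both a message and the proxied exception (sketch; the actual message differs):
```lang=php
// The original call site passed only a message and omitted the
// underlying exception being proxied.
throw new PhutilProxyException(
  pht('Failed to connect to the master database.'),
  $underlying_exception);
```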
Test Plan:
- Poked this locally, but will initialize `secure004` as an empty master to be sure.
Reviewers: chad, avivey
Reviewed By: avivey
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16916
Summary: Ref T11044. Fixes T11672. In T11672, persistent connections seem to work fine, but they can require `max_connections` and other settings to be raised. Since most users don't need them, make them an advanced option.
Test Plan: Configured persistent connections, loaded some pages, observed persistent connections get used.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044, T11672
Differential Revision: https://secure.phabricator.com/D16913
Summary:
Ref T11044. Sometimes we have a sequence of patches like this:
- `01.newtable.sql`: Adds a new table to Files.
- `02.movedata.php`: Moves some data that used to live in Tokens to the new table.
This is fairly rare, but not unheard of. More commonly, we can have this sequence:
- `03.newtable.sql`: Add a new column to Phame.
- `04.setvalue.php`: Copy data into that new column.
In both cases, when applying database-by-database, we can get into trouble.
- In the first case, if Files is on a different master, we'll try to move data into a new table before creating the table.
- In the second case, if Phame is on a different master, the PHP patch will connect to it before we add the new column.
In either case, we try to interact with tables or columns which don't exist yet.
Instead, apply each patch in order, to all databases which need it. So we'll apply `01.newtable.sql` EVERYWHERE first, then move on.
In the case of PHP patches, we also now apply them only once, since they never make schema changes. It should normally be safe to apply them more than once, but this is a little faster and a little safer if we ever make a mistake.
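In sketch form (helper names here are hypothetical), the new application order looks like this:
```lang=php
// Hypothetical sketch: walk patches in the outer loop and masters in the
// inner loop, so each patch lands everywhere before the next one runs.
foreach ($patches as $patch) {
  $applied_php = false;
  foreach ($masters as $master) {
    if ($this->isApplied($master, $patch)) { // hypothetical helper
      continue;
    }
    if ($patch->isPHP() && $applied_php) {
      // PHP patches never make schema changes, so run them only once
      // and just mark them as applied on the other masters.
      $this->markApplied($master, $patch);
      continue;
    }
    $this->applyPatch($master, $patch);
    if ($patch->isPHP()) {
      $applied_php = true;
    }
  }
}
```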
Test Plan:
- Ran `bin/storage upgrade` on single-host and clustered setups.
- Initialized new storage on single-host and clustered setups.
- Upgraded again after initialization.
- Ran with `--apply`.
- Ran with `--dry-run`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16912
Summary:
Ref T11044. This was old Facebook cruft for reading configuration from SMC (and maybe doing some other questionable things). See D183.
(See also D175 for discussion of this from 2011.)
In modern Phabricator, you can subclass `SiteConfig` to provide dynamic configuration, and we do so in the Phacility cluster. This lets you change any config, and change in response to requests (e.g., for instancing) and is generally more powerful than this mechanism was.
This configuration provider theoretically let you roll your own replication or partitioning, but in practice I believe no one ever did, and no one ever could have anyway without more support in the upstream (for migrations, read-after-write, etc).
Test Plan:
- Grepped for removed option.
- Browsed around with clustering off.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16911
Summary:
Ref T11044. One popular tool in a modern operations environment is Puppet. The primary purpose of this tool is to randomly revert hosts to older or different configurations.
Introducing an element of chaotic unpredictability into operations trains staff to be on high alert at all times, rather than lulled into complacency by predictability or consistency.
When Puppet reverts a Phabricator host's configuration to an older version, we might start writing data to a lot of crazy places where it shouldn't go. This will create a big sticky mess that is virtually impossible to undo, mostly because we'll get two files with ID 123 or two tasks with ID 456 or whatever else and good luck with that.
Instead, after changing the partition layout, require `bin/storage partition` to be run. This writes a copy of the config everywhere.
Then, when we start serving web requests, make sure every database has the exact same config. This will foil Puppet by refusing to run requests on hosts it has reverted.
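In sketch form (the storage and helper names here are hypothetical), the startup check is:
```lang=php
// Hypothetical sketch: compare a digest of the partition configuration
// this webserver loaded against the copy "bin/storage partition" wrote
// to each database, and refuse to serve requests on any mismatch.
$expect = PhabricatorHash::digestForIndex($local_config_json);
foreach ($masters as $master) {
  $actual = $master->loadStoredPartitionDigest(); // hypothetical helper
  if ($actual !== $expect) {
    throw new Exception(
      pht(
        'Partition configuration on host "%s" disagrees with this '.
        'webserver. Run "bin/storage partition" to synchronize it.',
        $master->getHost()));
  }
}
```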
Test Plan:
- Changed partition configuration.
- Ran Phabricator.
- FOILED!
- Ran `bin/storage partition` to sync config.
- Things worked again.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16910
Summary:
Ref T11044. Fixes T10931. This option has essentially never been useful for anything, and we've picked the best implementation for a long time (MySQLi if available, MySQL if not).
I am not aware of any reason to ever set this manually. If someone comes up with some bizarre but legitimate use case that I haven't thought of, we can modularize it.
Test Plan: Browsed around. Grepped for `mysql.implementation`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10931, T11044
Differential Revision: https://secure.phabricator.com/D16909
Summary:
I frequently run into a situation where I want to kill tasks that have accumulated a lot of failures regardless of what class they are. Or I'll want to kill every worker of a certain class but only if it has failed at least once. This change allows me to run `./bin/worker cancel --class <MYCLASS> --min-failure-count 5` to only kill tasks with at least 5 failed attempts.
The `--min-failure-count N` argument can be used by itself as well as with `--class CLASSNAME`. I don't think it makes sense for it to work with `--id ID`, but I'm not dead set on that or anything.
Test Plan: I ran the worker management workflow with and without the `--min-failure-count` argument and it worked as expected.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley, yelirekim
Differential Revision: https://secure.phabricator.com/D16906
Summary:
Fixes T10759. Fixes T11817. This runs all the general sanity/configuration checks on all the active servers.
None of these warnings are very important, and this doesn't change any logic.
Depends on D16904.
Test Plan: Painstakingly triggered each warning, verified that they rendered correctly and that messages told me which host was affected.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759, T11817
Differential Revision: https://secure.phabricator.com/D16905
Summary:
Ref T10759. Check master/replica status during startup.
After D16903, this also means that we check this status after a database comes back online after being unreachable.
If a master is replicating, fatal (since this can do a million kinds of bad things).
If a replica is not replicating, warn (this just means the replica is behind so some data is at risk).
Also: if your masters were actually configured properly (mine weren't until this change detected it), we would throw away patches as we applied them, so they would only apply to the //first// master. Instead, properly apply all migration patches to all masters.
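The check itself is roughly this (a sketch; real handling of the `SHOW SLAVE STATUS` output is more careful):
```lang=php
// Sketch: ask each host about replication, then apply the policy above.
$status = queryfx_one($conn, 'SHOW SLAVE STATUS');
$is_replicating = ($status && $status['Slave_IO_Running'] == 'Yes');

if ($is_master && $is_replicating) {
  // A replicating "master" can do a million kinds of bad things if we
  // write to it, so refuse to start.
  throw new Exception(
    pht('Master "%s" is replicating from another host!', $host));
}

if (!$is_master && !$is_replicating) {
  // The replica is just falling behind, so some data is at risk but we
  // can keep running: raise a setup warning instead of fataling.
}
```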
Test Plan:
- Started Phabricator with a replicating master, got a fatal.
- Stopped replication on a replica, got a warning.
- With two non-replicating masters, upgraded storage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16904
Summary:
Ref T10759. Currently, these checks run only against configured masters. Instead, check every host.
These checks also sort of cheat through restart during a recovery, when some hosts will be unreachable: they test for "disaster" by seeing if no masters are reachable, and just skip all the checks in that case.
This is bad for at least two reasons:
- After recent changes, it is possible that //some// masters are dead but it's still OK to start. For example, "slowvote" may have no master, but everything else is reachable. We can safely run without slowvote.
- It's possible to start during a disaster and miss important setup checks completely, since we skip them, get a clean bill of health, and never re-test them.
Instead:
- Test each host individually.
- Fundamental problems (lack of InnoDB, bad schema) are fatal on any host.
- If we can't connect, raise it as a //warning// to make sure we check it later. If you start during a disaster, we still want to make sure that schemata are up to date if you later recover a host.
In particular, I'm going to add these checks soon:
- Fatal if a "master" is replicating.
- Fatal if a "replica" is not replicating.
- Fatal if a database partition config differs from web partition config.
- When we let a database off with a warning because it's down, and later upgrade it to a fatal because we discover it is broken after it comes up again, fatal everything. Currently, we keep running if we "discover" the presence of new fatals after surviving setup checks for the first time.
Test Plan:
- Configured with multiple masters, intentionally broke one (simulating a disaster where one master is lost), saw Phabricator still startup.
- Tested individual setup checks by intentionally breaking them.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16902
Summary:
Ref T11044. I'm going to hold this until after the release cut, but I think it's good to go.
This allows installs to configure multiple masters in `cluster.databases` and partition applications across them (for example, put Maniphest on a dedicated database).
When we make a Maniphest connection, we first look up which master we should be hitting, then connect to it.
This has at least approximately been planned for many years, so the actual change is largely just making sure that your config makes sense.
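In sketch form (these method names are illustrative, not the exact API), the connection path becomes:
```lang=php
// Illustrative sketch: resolve which master owns an application's
// database before connecting, instead of assuming a single master.
$master = null;
foreach (PhabricatorDatabaseRef::getMasterDatabaseRefs() as $ref) {
  if ($ref->hasApplication($application)) { // illustrative helper
    $master = $ref;
    break;
  }
}

if (!$master) {
  throw new Exception(
    pht('No configured master holds application "%s".', $application));
}

$conn = $master->newApplicationConnection($database); // illustrative
```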
Test Plan:
- Configured `db001.epriestley.com` and `db002.epriestley.com` as master/master.
- Partitioned applications between them.
- Interacted with various applications, saw writes go to the correct host.
- Viewed "Database Servers" and saw partitioning information.
- Ran schema upgrades.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16876
Summary: Ref T11034. Try to produce a roughly-one-sentence summary instead of a roughly-one-paragraph summary for the browse dialog.
Test Plan:
- Added unit tests, ran unit tests.
- Wrote a longer summary for a project, browsed to it, saw a shorter summary.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11034
Differential Revision: https://secure.phabricator.com/D16892
Summary:
Fixes T11809.
- Explicitly document the summary icon hints -- I don't think these are too hard to figure out (and maybe this stuff should just go in the tooltips) but we can start here.
- Use color + shape to distinguish between "cancelled" and "declined", not just color (for users with vision accessibility issues).
- Translate a "minute(s)" string into sensible English.
- Use RSVP status on the month view green circle thing.
Test Plan:
- Read docs.
- Looked at month view.
- Read reminder mail.
- Viewed month view mobile view.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11809
Differential Revision: https://secure.phabricator.com/D16872
Summary: Depends on D16847. Ref T11044. This updates the remaining storage-related workflows from the CLI to accommodate multiple masters.
Test Plan:
- Configured multiple masters.
- Ran all `bin/storage` workflows.
- Ran `arc unit --everything`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16848
Summary:
Depends on D16115. Ref T11044. In the brave new world of multiple masters, we need to check the schemata on each master when looking for missing storage patches, keys, schema changes, etc.
This realigns all the "check out what's up with that schema" calls to work for multiple hosts, and updates the web UI to include a "Server" column and allow you to browse per-server.
This doesn't update `bin/storage`, so it breaks things on its own (and unit tests probably won't pass). I'll update that in the next change.
Test Plan: Configured local environment in cluster mode with multiple masters, saw both hosts' status reported in web UI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16847
Summary:
Ref T11044. This moves toward partitioned application databases:
- You can define multiple masters.
- Convert all the easily-convertible code to become multi-master aware.
This doesn't convert most of `bin/storage` or "Config > Database (Stuff)" yet, as both are quite involved. They still work for now, but only operate on the first master instead of all masters.
Test Plan: Configured multiple masters, browsed around, ran `bin/storage` commands, ran `bin/storage --host ...`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16115
Summary:
This has been replaced by `PolicyCodex` after D16830. Also:
- Rebuild Celerity map to fix grumpy unit test.
- Fix one issue on the policy exception workflow to accommodate the new code.
Test Plan:
- `arc unit --everything`
- Viewed policy explanations.
- Viewed policy errors.
Reviewers: chad
Reviewed By: chad
Subscribers: hach-que, PHID-OPKG-gm6ozazyms6q6i22gyam
Differential Revision: https://secure.phabricator.com/D16831
Summary:
Ref T5267. When extracting data from `pht()` calls, also extract the argument types and export them into the map so they can be used by consumers.
We recognize plurals (`phutil_count()`, `new PhutilNumber`) and genders (`phutil_person()`). We'll need to annotate the codebase for those, since they're currently runtime-only.
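For example, a callsite annotated so both the runtime and the extractor can see the types (this corresponds to the "number" annotation in the test plan below):
```lang=php
// The PhutilNumber wrapper marks the second argument as a number which
// drives plural selection ("daemon" vs "daemons") in translations.
echo pht(
  'Scaling pool "%s" up to %s daemon(s).',
  $pool_name,
  new PhutilNumber($count));
```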
Test Plan:
Rebuilt extraction maps, got data like this (note "number" type annotation).
```
"Scaling pool \"%s\" up to %s daemon(s).": {
"uses": [
{
"file": "/daemon/PhutilDaemonOverseer.php",
"line": 378
}
],
"types": [
null,
"number"
]
},
```
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16823
Summary:
Ref T4788. I thought I implemented this, but actually didn't.
When we're in the "mid-sized" fallback mode (the graph has more than 100 nodes, but not more than 100 parents/children), don't actually draw the graph. It's almost always uninteresting and huge.
Instead, this just renders a list of direct parents, then the task, then the direct children, which is pretty straightforward.
Test Plan: Set limit to 5, saw mid-sized fallback graph with no actual graph drawing.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4788
Differential Revision: https://secure.phabricator.com/D16816
Summary: Ref T5267. Fix one minor bug (paths were not being resolved properly) and one minor string issue (missing `%d` in a string).
Test Plan: Extracted strings, got a cleaner result.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T5267
Differential Revision: https://secure.phabricator.com/D16808
Summary: Ref T7931. This is still quite rough, but should technically send vaguely-useful email as part of the standard trigger infrastructure.
Test Plan: Ran `bin/phd start`, created an event starting shortly, saw the reminder email send in `bin/mail list-outbound`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T7931
Differential Revision: https://secure.phabricator.com/D16784
Summary: Fixes T11799. This string varies on the first parameter, but should vary on the second parameter.
Test Plan: Ran `bin/garbage set-policy ...`, saw proper translation.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11799
Differential Revision: https://secure.phabricator.com/D16769
Summary:
Depends on D16755. Right now, we build a setup check map (to run preflight checks), then later load libraries.
This means any checks included in third-party libraries don't get added to the map, and no longer run.
(These are rare, but Phacility has a couple).
Instead, delete the caches after loading extra libraries.
Test Plan: With this and D16755, re-ran setup checks and saw Phacility setup checks run.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16756
Summary:
Fixes T11771. Adds a lock around each GC process so we don't try to, e.g., delete old files on two machines at once just because they're both running trigger daemons.
The other aspects of this daemon (actual triggers; nuance importers) already have separate locks.
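The locking looks roughly like this (sketch; the lock name and collector API are illustrative):
```lang=php
// Illustrative sketch: take a global lock per collector so only one host
// collects a given kind of garbage at any time.
$lock = PhabricatorGlobalLock::newLock('gc:'.$collector_name);
try {
  $lock->lock();
} catch (PhutilLockException $ex) {
  // Another trigger daemon is already collecting this garbage; skip it.
  return;
}

$caught = null;
try {
  $collector->collectGarbage(); // illustrative method name
} catch (Exception $ex) {
  $caught = $ex;
}

$lock->unlock();

if ($caught) {
  throw $caught;
}
```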
Test Plan: Ran `bin/phd debug trigger --trace`, saw daemon acquire locks and collect garbage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11771
Differential Revision: https://secure.phabricator.com/D16739
Summary: Ref T11766. When users run `git pull` or similar, log the operation in the pull log.
Test Plan: Performed SSH pulls, got a log in the database. Today, this event log is purely diagnostic and has no UI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11766
Differential Revision: https://secure.phabricator.com/D16738
Summary: Ref T11773. This is a first step toward a more complete solution, but it should make the worst case much less bad: prior to this change, the worst case was "30-second execution timeout". After this patch, the worst case is "no results + explanatory message", which is strictly better.
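The overheating idea, as a rough sketch (names hypothetical):
```lang=php
// Hypothetical sketch: stop paging once we have scanned far more rows
// than the page size without finding enough visible results.
$need = $query->getRawResultLimit();
$scanned = 0;
$results = array();

while (count($results) < $need) {
  $page = $query->loadNextPage(); // hypothetical
  if (!$page) {
    break;
  }
  $scanned += count($page);
  $results += $query->filterVisible($page); // hypothetical

  if ($scanned > ($need * 10)) {
    // Policy checks filtered almost everything we scanned; give up and
    // show an explanatory message instead of querying forever.
    $query->setIsOverheated(true);
    break;
  }
}
```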
Test Plan:
Made all feed stories fail policy checks, loaded home page.
- Before adding overheating: 9,600 queries / 20 seconds
- After adding overheating: 376 queries / 800ms
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11773
Differential Revision: https://secure.phabricator.com/D16735
Summary:
Ref T10747. Rough flow is:
- Run a query.
- Select a new "Export Events..." action.
- This lets you define an "Export", which has a unique URL you can paste into Google Calendar or Calendar.app or whatever.
Most of this does nothing yet but here's the boilerplate.
Test Plan: Doesn't do anything yet.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10747
Differential Revision: https://secure.phabricator.com/D16675
Summary:
Ref T11672. At low loads, this causes us to use more connections, which is pushing some installs over the default limits.
Rather than trying to walk users through changing `max_connections`, `open_files_limit`, `fs.file-max`, `ulimit`, etc., just put things back for now. After T11044, we should have headroom to use persistent connections within the default limits on all reasonable systems.
Test Plan: Loaded Phabricator, poked around.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11672
Differential Revision: https://secure.phabricator.com/D16591
Summary: Fixes T11676. Instead of trying to fit task titles to the display, truncate them and let the table scroll.
Test Plan:
Table now scrolls when cramped:
{F1843396}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11676
Differential Revision: https://secure.phabricator.com/D16583
Summary:
Fixes T11677. This makes two minor adjustments to the repository import daemons:
- The first step ("Message") now queues at a slightly-lower-than-default (for already-imported repositories) or very-low (for newly importing repositories) priority level.
- The other steps now queue at "default" priority level. This is actually what they already did, but without this change their behavior would be to inherit the priority level of their parents.
This has two effects:
- When adding new repositories to an existing install, they shouldn't block other things from happening anymore.
- The daemons will tend to start one commit and run through all of its steps before starting another commit. This makes progress through the queue more even and predictable.
- Before, they did ALL the message tasks, then ALL the change tasks, etc. This works fine but is confusing/uneven/less-predictable because each type of task takes a different amount of time.
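In sketch form, queueing the first step for a newly importing repository (assuming the standard worker priority constants; the "slightly lower than default" value for already-imported repositories is elided here):
```lang=php
// Sketch: queue the "Message" step at very low priority so a new
// repository import does not block other work; followup steps queue at
// the default priority instead of inheriting this one.
PhabricatorWorker::scheduleTask(
  'PhabricatorRepositoryCommitMessageParserWorker',
  array(
    'commitID' => $commit->getID(),
  ),
  array(
    'priority' => PhabricatorWorker::PRIORITY_IMPORT,
  ));
```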
Test Plan:
- Added a new repository.
- Saw all of its "message" steps queue at priority 4000.
- Saw followups queue at priority 2000.
- Saw progress generally "finish what you started" -- go through the queue one commit at a time, instead of one type of task at a time.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11677
Differential Revision: https://secure.phabricator.com/D16585
Summary:
Fixes T11675. This capability was erroneously (probably?) removed in D14766.
This search implementation (which uses exact match) probably isn't perfect for all cases of "text" fields, but empirically it seems to be what a significant number of users are after.
Test Plan:
Searched for a custom text field value.
{F1843383}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11675
Differential Revision: https://secure.phabricator.com/D16582
Summary:
Ref T11672. Depends on D16577. When establishing a connection from a webserver context, try to use persistent connections.
The hope is that this will fix outbound port exhaustion issues experienced on repository hosts handling large queue volumes.
Test Plan:
Added this to a page:
```lang=php
$tables = array(
new PhabricatorUser(),
new ManiphestTask(),
new DifferentialRevision(),
new PhabricatorRepository(),
new PhabricatorPaste(),
);
$ids = array();
foreach ($tables as $table) {
$conn = $table->establishConnection('r');
$cid = queryfx_one(
$conn,
'SELECT CONNECTION_ID() cid');
$ids[get_class($table)] = $cid['cid'];
}
var_dump($ids);
```
Reloaded the page a bunch of times and saw no reissued connections (the pool seems to keep a particular connection bound to a particular database), but did see connection reuse across requests.
That is, across reloads the same connection IDs appeared, but the same connection ID never appeared twice in the same request. This is what we want.
Also googled for issues with persistent connections, but everything I found was unconcerning and obscure (local variables and other very complex state that we don't use), and a bunch of the docs are reassuring (transactions, etc., get reset properly).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11672
Differential Revision: https://secure.phabricator.com/D16578
Summary:
Ref T11613. In D16503/T11598 I refined the setup flow to improve messaging for early-stage setup issues, but failed to fully untangle things.
We sometimes still try to access a cache which uses configuration before we build configuration, which causes an error.
Instead, store "are we in flight / has setup ever worked?" in a separate cache which doesn't use the cache namespace. This stops us from trying to read config before building config.
Test Plan:
Hit bad extension error with a fake extension, got a proper setup help page:
{F1812803}
Solved the error, reloaded, broke things again, got a "friendly" page:
{F1812805}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11613
Differential Revision: https://secure.phabricator.com/D16542
Summary: Fixes T11607.
Test Plan:
- Made a comment using `{key ...}`.
- Used `bin/mail show-outbound --id X --dump-html > test.html` to review HTML:
{F1805304}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11607
Differential Revision: https://secure.phabricator.com/D16523
Summary:
Fixes T11583.
- When users run `bin/storage upgrade` for the first time on a new install, we currently give them a prompt which feels rough and which they can only reasonably ever answer "yes" to.
- We generally use cautionary language ("found issues with schema") in this workflow. Adjustments are now routine, so use more neutral and progress-oriented language ("found adjustments to apply").
Test Plan:
- Ran `bin/storage upgrade --namespace kappa123`, got an adjustment using neutral language without prompting.
- Dropped a key, ran `bin/storage upgrade`, got normal workflow (but with more neutral language).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11583
Differential Revision: https://secure.phabricator.com/D16509
Summary: Fixes T11593. We ask for a list of values when searching for custom "link" fields, but don't handle it correctly when actually constructing a query.
Test Plan:
Added this custom field:
```
{
"mycompany.target-version": {
"name": "Target Version",
"type": "link",
"search": true
}
}
```
Set a task to "beta". Let daemons index it. Queried for:
```
constraints: {
"custom.mycompany.target-version": [
"beta"
]
}
```
Got just one result back.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11593
Differential Revision: https://secure.phabricator.com/D16508
Summary: Ref T11589. Provide a way for scripts to say "just continue if database config fails", and use it in `bin/config` and `bin/storage`.
Test Plan:
- Broke database config.
- Ran `bin/config`, worked fine.
- Ran `bin/storage`, got helpful "set up the database" message.
- Ran `bin/repository`, got fatal.
- Ran normal site with valid/invalid config, got proper feedback.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16502
Summary:
Ref T11589. This runs:
- preflight checks (critical checks: PHP version stuff, extensions);
- configuration;
- normal checks.
The PHP checks are split into critical ("bad version") and noncritical ("sub-optimal config").
I tidied up the extension checks slightly; we realistically depend on `cURL` nowadays.
Test Plan:
- Faked a preflight failure.
- Hit preflight check.
- Got expected error screen.
- Loaded normal pages.
- Hit a normal setup check.
- Used DarkConsole "Startup" tab to verify that preflight checks take <1ms to run (we run them on every page without caching, at least for now, but they only do trivial checks like PHP versions).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16500
Summary: Caught one of these while reviewing docs, grepped for the other one.
Test Plan: `grep`, reading
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16498
If the namespace is something like "test_example", we currently fail to renamespace the dump.
(Cowboy committing this since this is currently blocking a data export.)
Test Plan:
- Renamespaced a local dump, examined the output, saw 60 create / 60 use, reimported it.
- Will export in production.
Auditors: chad
Summary:
Fixes T11577. When we connect to a host and try to select a database which does not exist, we currently treat it as though the host wasn't reachable.
This isn't correct, and prevents storage from being initialized while already in cluster mode, since the "config" database won't exist yet the first time we connect.
Instead, distinguish between `AphrontSchemaQueryException` (thrown on connection if the requested database is not present) and other errors.
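A sketch of the distinction (error message illustrative):
```lang=php
// Sketch: only a schema exception means "host is up, database missing";
// anything else really is an unreachable host.
$database_missing = false;
try {
  queryfx($conn, 'SELECT 1');
} catch (AphrontSchemaQueryException $ex) {
  // The host is reachable but the database does not exist yet, so we
  // can proceed with storage initialization.
  $database_missing = true;
} catch (Exception $ex) {
  throw new PhutilProxyException(
    pht('Unable to connect to database host.'),
    $ex);
}
```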
Test Plan:
- Put Phabricator into cluster database mode (`cluster.databases = ...`).
- Swapped `storage.default-namespace` to force initialization of a new install.
- Ran `bin/storage upgrade`.
- Before patch: Immediate fatal about unreachability.
- After patch: Database initialized.
- Also ran initialization steps in traditional single-host mode (`cluster.databases` empty, `mysql.host` configured).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11577
Differential Revision: https://secure.phabricator.com/D16489
Summary:
Ref T10867 for original use case. This workflow provides a plausible way for administrators to stop the daemons when performing upgrades or maintenance, then bring those daemons back up without resulting in the failure of builds that were running at the time.
On our organization's phab install, builds are running 24/7. The majority of these builds last for at least several minutes, and contain build steps which fail if interrupted and then resumed, as happens when turning daemons on and off.
Rather than allowing these build steps to resume execution as normal, this workflow instructs active builds to restart their entire build process instead of just resuming whichever step they were on.
Test Plan:
Contrived a build plan which would fail if resumed partway through:
- lease a working copy
- command `touch restart_{build.id}`
- command `test -e restart_{build.id} && rm restart_{build.id} && sleep 60`
Followed the old procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/phd start`
- saw the builds fail
Followed the new procedure:
- run a few of these builds manually
- `./bin/phd stop`
- `./bin/harbormaster restart --active`
- `./bin/phd start`
- saw the builds pass
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Maniphest Tasks: T10867
Differential Revision: https://secure.phabricator.com/D16485
Summary: Ref T11132. Significantly cleans up the Config app: new layout, icons, spacing, etc. Some minor todos remain around re-designing "issues", mobile support, and maybe another pass at actual Group pages.
Test Plan: Visit and test every page in the config app, set new items, resolve setup issues, etc.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16468
Summary: Ref T11132. This gets rid of the red bar for admins and instead shows a new menu item next to notifications/chat if there are unresolved configuration issues. The menu goes away if there are no issues. May move this later into the bell icon, but I think this might be the right place to start, especially for NUX and updates. Maybe limit the number of items?
Test Plan:
Tested with some, lots, and no config issues.
{F1790156}
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16461
Summary: Previously, the chatbot docs instructed users to get certificates for the conduit API and put the cert in a `conduit.cert` config key. In order to get the chatbot to work, I needed to instead get an API key and put it in the `conduit.token` config entry.
Test Plan: Doc fix. Tried the new documented way and it worked.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D16443