Summary: Ref T12450. These are like 95% my fault, but Elastic appears to spell the name "Elasticsearch" consistently in their branding.
Test Plan: `grep ElasticSearch`
Reviewers: chad, 20after4
Maniphest Tasks: T12450
Differential Revision: https://secure.phabricator.com/D17601
Summary:
[ ] Write an "Upgrading: ..." guidance task with narrow instructions for installs that are upgrading.
[ ] Do we need to add an indexing activity (T11932) for installs with ElasticSearch?
[ ] We should more clearly detail exactly which versions of ElasticSearch are supported (for example, is ElasticSearch <2 no longer supported?). From T9893 it seems like we may //only// have supported ElasticSearch <2 before, so are the two regions of support completely non-overlapping, meaning all existing ElasticSearch users will need to upgrade?
[ ] Documentation should provide stronger guidance toward MySQL and away from Elastic for the vast majority of installs, because we've historically seen users choosing Elastic when they aren't actually trying to solve any specific problem.
[ ] When users search for fulltext results in Maniphest and hit too many documents, the current behavior is approximately silent failure (see T12443). D17384 has also lowered the ceiling for ElasticSearch, although previous changes lowered it for MySQL search. We should not fail silently, and ideally should build toward T12003.
[ ] D17384 added a new "keywords" field, but MySQL does not search it (I think?). The behavior should be as consistent across MySQL and Elastic as we can make it. Likely cleaner is giving "Project" objects a body, with "slugs" and "description" separated by newlines?
[ ] `PhabricatorSearchEngineTestCase` is now pointless and only detects local misconfigurations.
[ ] It would be nice to build a practical test suite instead, where we put specific documents into the index and then search for them. The upstream test could run against MySQL, and some `bin/search test` could run against a configured engine like ElasticSearch. This would make it easier to make sure that behavior was as uniform as possible across engine implementations.
[ ] Does every assigned task now match "user" in ElasticSearch?
[x] `PhabricatorElasticFulltextStorageEngine` has a `json_encode()` which should be `phutil_json_encode()`.
[ ] `PhabricatorSearchService` throws an untranslated exception.
[ ] When a search cluster is down, we probably don't degrade with much grace (unhandled exception)?
[ ] I haven't run `bin/search init`, but `bin/search index` doesn't warn me that I may want to. This might be worth adding. The UI does warn me.
[ ] `bin/search init` warns me that the index is "incorrect". It might be more clear to distinguish between "missing" and "incorrect", since it's more comforting to users to see "everything is as we expect, doing normal first-time setup now" than "something is wrong, fixing it".
[ ] The CLI message `Initializing search service "ElasticSearch"` does not end with a period, which is inconsistent with other UI messages.
[ ] It might be nice to let `bin/search` commands like `init` and `index` select a specific service (or even service + host) to act on, as `bin/storage --ref ...` now does. You can generally get the result you want by fiddling with config.
[ ] When a service isn't writable, `bin/search init` reports "Search cluster has no hosts for role "write".". This is accurate but does not provide guidance: it might be more useful to the user to explain "This service is not writable, so we're skipping index check for it.".
[x] Even with write off for MySQL, `bin/search index --type task --trace` still updates MySQL, I think? I may be misreading the trace output. But this behavior doesn't make sense if it is the actual behavior, and it seems like `reindexAbstractDocument()` uses "all services", not "writable services", and the MySQL engine doesn't make sure it's writable before indexing.
[x] Searching for `user` fails to find the task "Grant users tokens when a mention is created", suggesting that stemming is not working.
[x] Searching for `users` finds that task, but fails to find a task containing "per user per month" in a comment, also suggesting that stemming is not working.
[x] Searching for `maniphest` fails to find the task "maniphest.query elephant", suggesting that tokenization in ElasticSearch is not as good as the MySQL tokenization for these words (see D17330).
[x] The "index incorrect" warning UI uses inconsistent title case.
[x] The "index incorrect" warning UI could format the command to be run more cleanly (with addCommand(), I think).
refs T12450
Test Plan:
* Stared blankly at the code.
* Disabled 'write' role on mysql fulltext service.
* Edited a task, ran search indexer, verified that the mysql index wasn't being updated.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin
Maniphest Tasks: T12450
Differential Revision: https://secure.phabricator.com/D17564
Summary:
The goal is to make fulltext search back-ends more extensible, configurable, and robust.
When this is finished, it will be possible to have multiple search storage back-ends and
potentially multiple instances of each.
Individual instances can be configured with roles such as 'read' and 'write', which control
which hosts receive writes to the index and which hosts respond to queries.
These two roles make it possible to have any combination of:
* read-only
* write-only
* read-write
* disabled
The 'roles' mechanism is extensible, so new roles can be added should the need arise in the future.
In addition to supporting multiple Elasticsearch and MySQL search instances, this refactors
the connection health monitoring infrastructure out of PhabricatorDatabaseHealthRecord and
uses the same system to monitor the health of Elasticsearch nodes. This will
allow Wikimedia's Phabricator to be redundant across data centers (MySQL already is;
Elasticsearch should be as well).
The real-world use case I have in mind here is writing to two indexes (two Elasticsearch clusters
in different data centers) but reading from only one. We would then toggle the 'read' property when
we want to migrate to the other data center (and when we migrate from Elasticsearch 2.x to 5.x).
Hopefully this is useful in the upstream as well.
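As a sketch of what that data-center migration use case might look like in `cluster.search` (hostnames, ports, and version numbers are placeholders; the key structure follows the example in the test plan below):
```lang=json
"cluster.search": [
  {
    "type": "elasticsearch",
    "hosts": [
      { "host": "es1.dc1.example.com", "roles": { "read": true, "write": true } }
    ],
    "port": 9200,
    "protocol": "http",
    "path": "/phabricator",
    "version": 2
  },
  {
    "type": "elasticsearch",
    "hosts": [
      { "host": "es1.dc2.example.com", "roles": { "read": false, "write": true } }
    ],
    "port": 9200,
    "protocol": "http",
    "path": "/phabricator",
    "version": 5
  }
]
```
Both clusters receive writes, but only the first serves reads; cutting over to the other data center (or to the newer Elasticsearch version) would just mean flipping the `read` flags.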
Remaining TODO:
* test cases
* documentation
Test Plan:
(WARNING) This will most likely require the Elasticsearch index to be deleted and re-created due to schema changes.
Tested with Elasticsearch versions 2.4 and 5.2 using the following config:
```lang=json
"cluster.search": [
  {
    "type": "elasticsearch",
    "hosts": [
      {
        "host": "localhost",
        "roles": { "read": true, "write": true }
      }
    ],
    "port": 9200,
    "protocol": "http",
    "path": "/phabricator",
    "version": 5
  },
  {
    "type": "mysql",
    "roles": { "write": true }
  }
]
```
Also deployed the same changes to Wikimedia's production Phabricator instance without any issues whatsoever.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Tags: #elasticsearch, #clusters, #wikimedia
Differential Revision: https://secure.phabricator.com/D17384
Summary:
Fixes T12306. Currently, we warn about daemons not running even if they're in normal "alive" states, particularly "waiting to restart after a failure".
This check was made more strict in D12088, back when we tried to version check running daemons. Since we implemented auto-restart-after-config-change we don't do this anymore, so it should be fine to make this more lax again.
Test Plan:
- Faked an exception for all tasks.
- Before patch: reloading the daemon setup error sometimes raised a false positive ("waiting" daemon detected as dead).
- After patch: daemon setup error no longer triggers.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T12306
Differential Revision: https://secure.phabricator.com/D17397
Summary: Ref T9640. On 7.0 we had signal handling issues so we can never support it, but async signals should resolve them on 7.1 or newer.
Test Plan: On PHP 7.1, got through the setup warning.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9640
Differential Revision: https://secure.phabricator.com/D17197
Summary: Ref T9640. This option was removed in PHP7, so there's no reason to warn about it.
Test Plan: No longer saw a setup warning on PHP7.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9640
Differential Revision: https://secure.phabricator.com/D17196
Summary:
- Fixes T11995. This got moved but I missed renaming this callsite.
- Fixes T11993. If you have valid credentials, but haven't run `storage upgrade` yet, we can hit this exception during setup. Just ignore it instead.
Test Plan:
- Saved global settings, no more fatal.
- Changed `storage-namespace` to junk, loaded web UI with valid database credentials.
{F2106358}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11993, T11995
Differential Revision: https://secure.phabricator.com/D17024
Summary: Fixes T11544. Attempt to detect if we're on a tiny, burstable-CPU AWS instance and complain.
Test Plan:
- Completely faked this locally.
- Hit the URI on an EC2 instance to check that it's correct (got back "m3.large", since that was the instance class).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11544
Differential Revision: https://secure.phabricator.com/D17014
Summary:
Ref T11553. With some regularity, users make various configuration mistakes which we can detect by making a request to ourselves.
I use a magical header to make this request because we want to test everything else (parameters, path).
- Fixes T4854, probably. Tries to detect mod_pagespeed by looking for a header. This is a documentation-based "fix"; I didn't actually install mod_pagespeed or formally test this.
- Fixes T6866. We now test for parameters (e.g., user somehow lost "QSA").
- Ref T6709. We now test that stuff is decoded exactly once (e.g., user somehow lost "B").
- Fixes T4921. We now test that Authorization survives the request.
- Fixes T2226. Adds a setup check to determine whether gzip is enabled on the web server, and attempts to enable it at the PHP level.
- Fixes `<space space newline newline space><?php` in `preamble.php`.
Test Plan: Tested all of these setup warnings, although mostly by faking them.
Reviewers: joshuaspence, chad
Reviewed By: chad
Subscribers: Korvin
Maniphest Tasks: T4854, T4921, T6709, T6866, T11553, T2226
Differential Revision: https://secure.phabricator.com/D12622
Summary:
Ref T11922. After updating to HEAD of `master`, you need to manually rebuild the index. We don't do this during `bin/storage upgrade` because it can take a very long time (`secure.phabricator.com` took roughly an hour) and can happen while Phabricator is running.
However, if we don't warn users about this they'll just get a broken index unless they go read the changelog (or file an issue, then we tell them to go read the changelog).
This adds a very simple table for notes to administrators so we can write a "you need to go rebuild the index" note, then adds one.
Administrators clear the note by completing the activity and running `bin/config done reindex`. This isn't automatic because there are various strategies you can use to approach the issue, which I'll discuss in greater detail in the linked documentation.
Also, fix an issue where `bin/storage upgrade --apply <patch>` could try to re-mark an already-applied patch as applied.
Test Plan:
- Ran storage upgrades.
- Got instructions to rebuild search index.
- Cleared instructions with `bin/config done reindex`.
Reviewers: chad
Reviewed By: chad
Subscribers: avivey
Maniphest Tasks: T11922
Differential Revision: https://secure.phabricator.com/D16965
Summary:
Ref T11741. This makes everything work if we switch to InnoDB, but never actually switches yet.
Since the default minimum word length (3) and stopword list (36 common English words) in InnoDB are generally pretty reasonable, I just didn't add any setup advice for them. I figure we're better off with simpler setup until we identify some real problem that the builtin stopwords create.
Test Plan: Swapped the `false` to `true`, ran `storage adjust`, got InnoDB fulltext indexes, searched for stuff, got default "AND" behavior.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11741
Differential Revision: https://secure.phabricator.com/D16942
Summary:
Ref T11741. Fixes T10642. Parse and compile user queries with a consistent ruleset, then submit queries to the backend using whatever ruleset MySQL is configured with.
This means that `ft_boolean_syntax` no longer needs to be configured (we'll just do the right thing in all cases).
This should improve behavior with RDS immediately (T10642), and allow us to improve behavior with InnoDB in the future (T11741).
Test Plan:
- Ran various queries in the UI, saw the expected results.
- Ran bad queries, got useful errors.
- Searched threads in Conpherence.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10642, T11741
Differential Revision: https://secure.phabricator.com/D16939
Summary:
Ref T11044. This was old Facebook cruft for reading configuration from SMC (and maybe doing some other questionable things). See D183.
(See also D175 for discussion of this from 2011.)
In modern Phabricator, you can subclass `SiteConfig` to provide dynamic configuration, and we do so in the Phacility cluster. This lets you change any config, and change it in response to requests (e.g., for instancing), and is generally more powerful than this mechanism was.
This configuration provider theoretically let you roll your own replication or partitioning, but in practice I believe no one ever did, and no one ever could have anyway without more support in the upstream (for migrations, read-after-write, etc).
Test Plan:
- Grepped for removed option.
- Browsed around with clustering off.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16911
Summary:
Ref T11044. One popular tool in a modern operations environment is Puppet. The primary purpose of this tool is to randomly revert hosts to older or different configurations.
Introducing an element of chaotic unpredictability into operations trains staff to be on high alert at all times, rather than lulled into complacency by predictability or consistency.
When Puppet reverts a Phabricator host's configuration to an older version, we might start writing data to a lot of crazy places where it shouldn't go. This will create a big sticky mess that is virtually impossible to undo, mostly because we'll get two files with ID 123 or two tasks with ID 456 or whatever else and good luck with that.
Instead, after changing the partition layout, require `bin/storage partition` to be run. This writes a copy of the config everywhere.
Then, when we start serving web requests, make sure every database has the exact same config. This will foil Puppet by refusing to run requests on hosts it has reverted.
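For context, a partitioned layout is described in `cluster.databases` by assigning applications to masters. A rough sketch (hostnames are placeholders, and the exact shape of the `partition` key follows the clustering documentation rather than anything specific to this change):
```lang=json
"cluster.databases": [
  {
    "host": "db001.example.com",
    "role": "master",
    "partition": ["default"]
  },
  {
    "host": "db002.example.com",
    "role": "master",
    "partition": ["maniphest", "differential"]
  }
]
```
`bin/storage partition` writes a copy of this layout to every database, and the startup check refuses to serve requests from a host whose stored copy does not match.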
Test Plan:
- Changed partition configuration.
- Ran Phabricator.
- FOILED!
- Ran `bin/storage partition` to sync config.
- Things worked again.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16910
Summary:
Ref T11044. Fixes T10931. This option has essentially never been useful for anything, and we've picked the best implementation for a long time (MySQLi if available, MySQL if not).
I am not aware of any reason to ever set this manually. If someone comes up with some bizarre but legitimate use case that I haven't thought of, we can modularize it.
Test Plan: Browsed around. Grepped for `mysql.implementation`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10931, T11044
Differential Revision: https://secure.phabricator.com/D16909
Summary:
Fixes T10759. Fixes T11817. This runs all the general sanity/configuration checks on all the active servers.
None of these warnings are very important, and this doesn't change any logical stuff.
Depends on D16904.
Test Plan: Painstakingly triggered each warning, verified that they rendered correctly and that messages told me which host was affected.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759, T11817
Differential Revision: https://secure.phabricator.com/D16905
Summary:
Ref T10759. Check master/replica status during startup.
After D16903, this also means that we check this status after a database comes back online after being unreachable.
If a master is replicating, fatal (since this can do a million kinds of bad things).
If a replica is not replicating, warn (this just means the replica is behind so some data is at risk).
Also: if your masters were actually configured properly (mine weren't until this change detected it), we would throw away patches as we applied them, so they would only apply to the //first// master. Instead, properly apply all migration patches to all masters.
Test Plan:
- Started Phabricator with a replicating master, got a fatal.
- Stopped replication on a replica, got a warning.
- With two non-replicating masters, upgraded storage.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16904
Summary:
Ref T10759. We may "discover" the presence of a fatal setup error later, after starting Phabricator.
This can happen in a few ways, but most are unlikely. The one I'm immediately concerned about is:
- Phabricator starts up during a disaster with some databases unreachable.
- We start with warnings (unreachable databases are generally not fatal, since it's OK for some subset of hosts to be down in replicated/partitioned setups).
- The unreachable databases later recover and become accessible again.
- When we run checks against them, we discover that they are misconfigured.
Currently, "fatal" setup issues are not truly fatal if we're "in flight" -- we've survived setup checks at least once in the past. This is bad in the scenario above.
Especially with partitioning, it could lead to mangled data in a disaster scenario where operations staff makes a small configuration mistake while trying to get things running again.
Instead, if we "discover" a fatal error while already "in flight", reset the whole setup process as though the webserver had just restarted. Don't serve requests again until we can make it through setup without hitting fatals.
Test Plan:
- Started Phabricator with multiple masters, one of which was down and broken.
- Got a warning about the bad master.
- Revived the master.
- Before: Phabricator detects the fatal, but keeps serving requests.
- After: Phabricator detects the fatal, resets the webserver, and stops serving requests until the fatal is resolved.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16903
Summary:
Ref T10759. Currently, these checks run only against configured masters. Instead, check every host.
These checks also sort of cheat through restart during a recovery, when some hosts will be unreachable: they test for "disaster" by seeing if no masters are reachable, and just skip all the checks in that case.
This is bad for at least two reasons:
- After recent changes, it is possible that //some// masters are dead but it's still OK to start. For example, "slowvote" may have no master, but everything else is reachable. We can safely run without slowvote.
- It's possible to start during a disaster and miss important setup checks completely, since we skip them, get a clean bill of health, and never re-test them.
Instead:
- Test each host individually.
- Fundamental problems (lack of InnoDB, bad schema) are fatal on any host.
- If we can't connect, raise it as a //warning// to make sure we check it later. If you start during a disaster, we still want to make sure that schemata are up to date if you later recover a host.
In particular, I'm going to add these checks soon:
- Fatal if a "master" is replicating.
- Fatal if a "replica" is not replicating.
- Fatal if a database partition config differs from web partition config.
- When we let a database off with a warning because it's down, and later upgrade it to a fatal because we discover it is broken after it comes up again, fatal everything. Currently, we keep running if we "discover" the presence of new fatals after surviving setup checks for the first time.
Test Plan:
- Configured with multiple masters, intentionally broke one (simulating a disaster where one master is lost), saw Phabricator still startup.
- Tested individual setup checks by intentionally breaking them.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10759
Differential Revision: https://secure.phabricator.com/D16902
Summary:
Ref T11044. This moves toward partitioned application databases:
- You can define multiple masters.
- Convert all the easily-convertible code to become multi-master aware.
This doesn't convert most of `bin/storage` or "Config > Database (Stuff)" yet, as both are quite involved. They still work for now, but only operate on the first master instead of all masters.
Test Plan: Configured multiple masters, browsed around, ran `bin/storage` commands, ran `bin/storage --host ...`.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11044
Differential Revision: https://secure.phabricator.com/D16115
Summary:
Ref T11672. At low loads, this causes us to use more connections, which is pushing some installs over the default limits.
Rather than trying to walk users through changing `max_connections`, `open_files_limit`, `fs.file-max`, `ulimit`, etc., just put things back for now. After T11044 we should have headroom to use persistent connections within the default limits on all reasonable systems.
Test Plan: Loaded Phabricator, poked around.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11672
Differential Revision: https://secure.phabricator.com/D16591
Summary:
Fixes T11683. Likely as a result of the persistent connections change, more users are seeing MySQL connection limit errors.
The persistent connections change means we use //fewer// connections at the high end, but I'm guessing PHP is keeping some more connections around in the pool, so while high-traffic hosts use fewer connections, low-traffic hosts now use more.
Raise an explicit setup warning about this. Users should be adjusting it anyway; there's no value in leaving it at the extremely low default, and connections are basically free until you run out of outbound ports.
Test Plan: {F1844630}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11683
Differential Revision: https://secure.phabricator.com/D16586
Summary:
The commit that added checks for the old homepage options (now in
Dashboards), rP9d9a47e9cf, added them to the Auth section, where they
would present:
This option has been migrated to the "Auth" application. Your old
configuration is still in effect, but now stored in "Auth" instead of
configuration. Going forward, you can manage authentication from the
web UI.
Remove them from the moved-to-Auth list, and coalesce the multiple
definitions of the help text into one.
Test Plan:
- set maniphest.priorities.unbreak-now to something
- observe the setup issue reported
- hope it tells you the right thing
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley, chad
Differential Revision: https://secure.phabricator.com/D16576
Summary:
Fixes T11627.
Beyond being complex, I have no real reason to believe these checks even work (and they don't test repositories, file storage, logfiles, etc).
Test Plan:
Faked the error:
{F1813433}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11627
Differential Revision: https://secure.phabricator.com/D16544
Summary:
Ref T11613. In D16503/T11598 I refined the setup flow to improve messaging for early-stage setup issues, but failed to fully untangle things.
We sometimes still try to access a cache which uses configuration before we build configuration, which causes an error.
Instead, store "are we in flight / has setup ever worked?" in a separate cache which doesn't use the cache namespace. This stops us from trying to read config before building config.
Test Plan:
Hit bad extension error with a fake extension, got a proper setup help page:
{F1812803}
Solved the error, reloaded, broke things again, got a "friendly" page:
{F1812805}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11613
Differential Revision: https://secure.phabricator.com/D16542
Summary:
Ref T11589. When we hit a fatal setup issue (essentially always a connection failure) //after// we've already survived them on at least one request, we can be pretty sure a server went down and that the problem is not a setup/configuration issue.
In this case, show a friendlier error page instead of the fairly detailed technical one.
Test Plan:
- Broke MySQL config.
- Restarted Apache.
- Got the "admin/setup" error page:
{F1803268}
- Fixed the MySQL config.
- Loaded any page, to put us "in flight".
- Broke MySQL config.
- Loaded any page.
- Got the friendly "in flight" error page:
{F1803271}
If you want to design this better, the easiest way to get to it is:
- Set `mysql.port` to `9999` in `conf/local/local.json`.
- Reload any page while already running (don't restart).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16503
Summary:
Ref T11589. Previously, when we failed to load database configuration we just continued anyway, in order to get to setup checks so we could raise a better error.
There was a small chance that this could lead to pages running in a broken state, where ONLY that connection failed and everything else worked. This was accidentally fixed by narrowing the exceptions we continue on in D16489.
However, this "fix" meant that users no longer got helpful setup instructions. Instead:
- Keep throwing these exceptions: it's bad to continue if we've failed to connect to the database.
- However, catch them and turn them into setup errors.
- Share all the setup code so these errors and setup check errors work the same way.
Test Plan:
- Intentionally broke `mysql.host` and `mysql.pass`.
- Loaded pages.
- Got good setup errors.
- Hit normal setup errors too.
- Put everything back.
- Swapped into cluster mode.
- Intentionally broke cluster mode, saw failover to readonly.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16501
Summary:
Ref T11589. This runs:
- preflight checks (critical checks: PHP version stuff, extensions);
- configuration;
- normal checks.
The PHP checks are split into critical ("bad version") and noncritical ("sub-optimal config").
I tidied up the extension checks slightly; we realistically depend on `cURL` nowadays.
Test Plan:
- Faked a preflight failure.
- Hit preflight check.
- Got expected error screen.
- Loaded normal pages.
- Hit a normal setup check.
- Used DarkConsole "Startup" tab to verify that preflight checks take <1ms to run (we run them on every page without caching, at least for now, but they only do trivial checks like PHP versions).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16500
Summary:
Ref T11589. Currently, initialization order is a bit tangled: we load configuration from the database, then later test if we can connect to the database.
Instead, I'm going to do: preflight checks ("PHP Version OK?", "Extensions installed?"), then configuration, then normal setup checks.
To prepare for this, flag core checks as "preflight" and add a setup panel to visually confirm that I didn't miss anything.
Test Plan: {F1803210}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11589
Differential Revision: https://secure.phabricator.com/D16499
Summary: Ref T11132, significantly cleans up the Config app, new layout, icons, spacing, etc. Some minor todos around re-designing "issues", mobile support, and maybe another pass at actual Group pages.
Test Plan: Visit and test every page in the config app, set new items, resolve setup issues, etc.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16468
Summary:
Fixes T9235. When the stars align, PHP 5.6 or newer emits a deprecation warning about "always_populate_raw_post_data" on startup; the warning occurs too early for us to intercept and can break responses by adding garbage to the output.
These settings appear to be sufficient:
```
always_populate_raw_post_data = 1
display_errors = 1
display_startup_errors = 1
error_reporting = -1
```
Then make a request with an unusual content type:
```
$ curl -X POST -H "Content-Type: application/json" -d "{foo: bar}" http://phabricator.example.com/
```
This triggers the warning:
```
<br />
<b>Deprecated</b>: Automatically populating $HTTP_RAW_POST_DATA is deprecated and will be removed in a future version. To avoid this warning set 'always_populate_raw_post_data' to '-1' in php.ini and use the php://input stream instead. in <b>Unknown</b> on line <b>0</b><br />
<br />
...
```
To avoid this, just instruct administrators to set this value to "-1", which completely disables the feature and silences the warning.
Test Plan:
- Reproduced this issue by following the instructions above.
- Triggered the setup issue locally and read all the captivating prose:
{F1786911}
- Made the configuration change it directed me to, saw the setup issue resolve.
Reviewers: jcox
Reviewed By: jcox
Maniphest Tasks: T9235
Differential Revision: https://secure.phabricator.com/D16454
Summary: Fixes T8850. Previously, if a user's preamble script mangled `$_SERVER['REMOTE_ADDR']` or somehow set it to `null`, the user would get errors when performing certain actions. Now those errors shouldn't occur, and instead the user will be warned that there is a setup issue related to their preamble script.
Test Plan: Create a preamble script that contains `$_SERVER['REMOTE_ADDR'] = null;` then navigate to /config/issue/. There should be a warning there about `REMOTE_ADDR` not being available.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, yelirekim, epriestley
Maniphest Tasks: T8850
Differential Revision: https://secure.phabricator.com/D16450
Summary: Fixes T11437. Provides a normal form for configuring this, instead of weird "look up the PHID and adjust things in the database" stuff.
Test Plan:
{F1753651}
{F1753652}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11437
Differential Revision: https://secure.phabricator.com/D16377
Summary: Ref T9640. Fixes T9888. Decline to support PHP 7 until the async signal handling issue in T11270 is resolved.
Test Plan: Faked local version, got helpful error message.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9640, T9888
Differential Revision: https://secure.phabricator.com/D16231
Summary: Ref T11098. There is no reason to maintain these as separate values now that they can be configured in global settings.
Test Plan:
- Hit and read setup issue.
- Fiddled with settings.
- I'll vet this more thoroughly in the next diff since I need to fix an issue with global defaults in mail and can explicitly test this at the same time.
Reviewers: chad
Reviewed By: chad
Subscribers: eadler
Maniphest Tasks: T11098
Differential Revision: https://secure.phabricator.com/D16117
Summary:
Ref T10939. I'm not //totally// opposed to the existence of this element, but I think it's the kind of thing that would never make it upstream today. I think this should just be a T418 custom sort of thing in the long run, not a mainline upstream feature.
Overall, I think this thing is nearly useless and just adds visual clutter. My dashboard is about 100% red. This also sort of teaches users that it's fine to let revisions sit for a couple of days, which isn't what I'd like the UI to teach. Finally, removing it helps the UI feel a little less cluttered after the visually busy changes in D15926.
Test Plan: Grepped for removed config. Viewed revision list.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10939
Differential Revision: https://secure.phabricator.com/D15927
Summary: Ref T10694. This setting no longer has any effect: we always show a limited amount of context now.
Test Plan: `grep`
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10694
Differential Revision: https://secure.phabricator.com/D15886
Summary:
Fixes T10876. Currently, we can end up with a setup warning banner sticking on each web device, since the state is stored in local cache.
Instead:
- When we actually run the setup checks, save the current state in the database.
- Before we show a cached banner, make sure the database still says the checks are a problem.
This could lead to some inconsistencies if setup checks legitimately pass on some hosts but not on others. For example, if you have `git` installed on one machine but not on another, we may raise a setup warning ("No Git Binary!") about it on one host only.
For now, assume users have their operational environments in some sort of reasonable shape and can install the same stuff everywhere. In the future, we could split the issues into "global" and "per-host" issues if we run into problems with this.
Test Plan:
This is somewhat tricky to test locally since you really need multiple webservers to test it properly, but I:
- Created some setup issues, saw banner.
- Ignored/cleared them, saw banner go away.
- Verified database cache writes were occurring properly.
Then I sort of faked it like this:
- Created a setup issue.
- Manually set the database cache value to `[]` ("no issues").
- Reloaded page.
- No more banner.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10876
Differential Revision: https://secure.phabricator.com/D15802
Summary:
Ref T10832. In practice, `git --version` is not a useful test for this issue:
- Vendors like Debian have backported the patch into custom versions like `0.0.0.1-debian-lots-of-patches.3232`.
- Vendors like Ubuntu distribute multiple different versions which report the same string from `git --version`, some of which are patched and some of which are not.
In other cases, we can perform an empirical test for the vulnerability. Here, we can not, because we can't write a 2GB path in a reasonable amount of time.
Since vendors (other than Apple) //generally// seem to be on top of this and any warning we try to raise based on `git --version` will frequently be incorrect, don't raise this warning.
I'll note this in the changelog instead.
Test Plan: Looked at setup issues, no more warning for vulnerable git version.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10832
Differential Revision: https://secure.phabricator.com/D15756
Summary: Ref T10832. Raise a setup warning for out-of-date versions of `git`.
Test Plan: {F1224632}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10832
Differential Revision: https://secure.phabricator.com/D15745
Summary:
Fixes T9385. This was accidentally mangled a bit a long time ago by D12797, which was a 1,000-file change which got almost everything right.
Simplify the message and fix all the `%s` conversions and how they map to parameters.
Test Plan: {F1220400}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9385
Differential Revision: https://secure.phabricator.com/D15722
Summary:
Fixes T10697. This finishes bringing the rest of the config up to cluster power levels.
Phabricator is now given an arbitrarily long list of notification servers.
Each Aphlict server is given an arbitrarily long list of ports to run services on.
Users are free to make them meet in the middle by proxying whatever they want to whatever else they want.
This should also accommodate clustering fairly easily in the future.
Also rewrote the status UI and changed a million other things. 🐗
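For illustration, the "arbitrarily long list of notification servers" ends up in config roughly like this (a sketch only, with placeholder hosts and ports; the key names are my reading of the notification setup docs):
```lang=json
"notification.servers": [
  {
    "type": "client",
    "host": "phabricator.example.com",
    "port": 22280,
    "protocol": "https"
  },
  {
    "type": "admin",
    "host": "127.0.0.1",
    "port": 22281,
    "protocol": "http"
  }
]
```
As I understand it, "client" entries are what browsers connect to (possibly through a proxy), while "admin" entries are what Phabricator itself uses to publish notifications and check status.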
Test Plan:
{F1217864}
{F1217865}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15703
Summary: Ref T10697. Mostly straightforward. Also allow the server to have multiple logs and log options in the future (e.g., different verbosities or separate admin/client logs or whatever). No specific plans for this, but the default log is pretty noisy today.
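This presumably lands in the Aphlict server's JSON config as a "logs" list, roughly like the sketch below (a sketch only: the paths are placeholders, I'm assuming a `path` key per entry as in the current notification documentation, and any per-log options would be additional keys on each entry):
```lang=json
"logs": [
  { "path": "/var/log/aphlict/aphlict.log" },
  { "path": "/var/log/aphlict/aphlict-admin.log" }
]
```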
Test Plan: Set up a couple of logs, started server, saw it log to them.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15702
Summary: Ref T10697. This isn't everything but starts generalizing options and moving us toward a cluster-ready state of affairs.
Test Plan: Started server in various configurations, hit most (all?) of the error cases with bad configs, sent test notifications.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15701
Summary:
Ref T4571. Ref T10759. Ref T10758. This isn't complete, but gets most of the job done:
- When `cluster.databases` is set up, most things ignore `mysql.host` now.
- You can `bin/storage upgrade` and stuff works.
- You can browse around in the web UI and stuff works.
There's still a lot of weird tricky stuff to navigate, and this has no real advantages over configuring a single server yet (no automatic failover, etc).
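As a rough sketch, the master + replica pair from the test plan below would be described to Phabricator something like this (hostnames are placeholders):
```lang=json
"cluster.databases": [
  {
    "host": "db-master.example.com",
    "role": "master"
  },
  {
    "host": "db-replica.example.com",
    "role": "replica"
  }
]
```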
Test Plan:
- Configured `cluster.databases` to point at my `t1.micro` hosts in EC2 (master + replica).
- Ran `bin/storage upgrade`, got a new install setup on them properly.
- Survived setup warnings, browsed around.
- Switched back to local config, ran `bin/storage upgrade`, browsed around, went through setup checks.
- Intentionally broke config (bad hosts, no masters) and things seemed to react reasonably well.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4571, T10758, T10759
Differential Revision: https://secure.phabricator.com/D15668
Summary:
Ref T4571. This adds a new option; configuring it lets you upgrade a one-host setup to a multi-host setup.
Doing this currently does nothing. I wrote a lot of words about what it is //supposed// to do in the future, though.
Test Plan:
- Tried to configure the option in all the possible bad ways, got errors.
- Read documentation.
Reviewers: chad
Reviewed By: chad
Subscribers: eadler
Maniphest Tasks: T4571
Differential Revision: https://secure.phabricator.com/D15663