<?php

/**
 * Handle request startup, before loading the environment or libraries. This
 * class bootstraps the request state up to the point where we can enter
 * Phabricator code.
 *
 * NOTE: This class MUST NOT have any dependencies. It runs before libraries
 * load.
 *
 * Rate Limiting
 * =============
 *
 * Phabricator limits the rate at which clients can request pages, and issues
 * HTTP 429 "Too Many Requests" responses if clients request too many pages too
 * quickly. Although this is not a complete defense against high-volume attacks,
 * it can protect an install against aggressive crawlers, security scanners,
 * and some types of malicious activity.
 *
 * To perform rate limiting, each page increments a score counter for the
 * requesting user's IP. The page can give the IP more points for an expensive
 * request, or fewer for an authenticated request.
 *
 * Score counters are kept in buckets, and writes move to a new bucket every
 * minute. After a few minutes (defined by @{method:getRateLimitBucketCount}),
 * the oldest bucket is discarded. This provides a simple mechanism for keeping
 * track of scores without needing to store, access, or read very much data.
 *
 * Users are allowed to accumulate up to 1000 points per minute, averaged across
 * all of the tracked buckets. With five one-minute buckets, for example, a
 * client would be limited after accumulating roughly 5,000 points across them.
 *
 * @task info Accessing Request Information
 * @task hook Startup Hooks
 * @task apocalypse In Case Of Apocalypse
 * @task validation Validation
 * @task ratelimit Rate Limiting
 * @task phases Startup Phase Timers
 */
final class PhabricatorStartup {

  private static $startTime;
  private static $debugTimeLimit;
  private static $accessLog;
  private static $capturingOutput;
  private static $rawInput;
  private static $oldMemoryLimit;
  private static $phases;

  private static $limits = array();


/* -( Accessing Request Information )-------------------------------------- */


  /**
   * @task info
   */
  public static function getStartTime() {
    return self::$startTime;
  }


  /**
   * @task info
   */
  public static function getMicrosecondsSinceStart() {
    // This is the same as "phutil_microseconds_since()", but we may not have
    // loaded libphutil yet.
    return (int)(1000000 * (microtime(true) - self::getStartTime()));
  }


  /**
   * @task info
   */
  public static function setAccessLog($access_log) {
    self::$accessLog = $access_log;
  }


  /**
   * @task info
   */
  public static function getRawInput() {
    if (self::$rawInput === null) {
      $stream = new AphrontRequestStream();

      if (isset($_SERVER['HTTP_CONTENT_ENCODING'])) {
        $encoding = trim($_SERVER['HTTP_CONTENT_ENCODING']);
        $stream->setEncoding($encoding);
      }

      $input = '';
      do {
        $bytes = $stream->readData();
        if ($bytes === null) {
          break;
        }
        $input .= $bytes;
      } while (true);

      self::$rawInput = $input;
    }

    return self::$rawInput;
  }


/* -( Startup Hooks )------------------------------------------------------ */


  /**
   * @param float Request start time, from `microtime(true)`.
   * @task hook
   */
  public static function didStartup($start_time) {
    self::$startTime = $start_time;

    self::$phases = array();

    self::$accessLog = null;

    static $registered;
    if (!$registered) {
      // NOTE: This protects us against multiple calls to didStartup() in the
      // same request, but also against repeated requests to the same
      // interpreter state, which we may implement in the future.
      register_shutdown_function(array(__CLASS__, 'didShutdown'));
      $registered = true;
    }

    self::setupPHP();
    self::verifyPHP();

    // If we've made it this far, the environment isn't completely broken so
    // we can switch over to relying on our own exception recovery mechanisms.
    ini_set('display_errors', 0);

    self::connectRateLimits();

    self::normalizeInput();

    self::verifyRewriteRules();

    self::detectPostMaxSizeTriggered();

    self::beginOutputCapture();
  }
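

  // As a rough sketch (the exact preamble and file paths vary by install and
  // are not defined in this file), a webroot entry script is expected to
  // record the request start time and then hand control to this class along
  // these lines:
  //
  //   $start_time = microtime(true);
  //   require_once 'support/startup/PhabricatorStartup.php';
  //   PhabricatorStartup::didStartup($start_time);
  //   PhabricatorStartup::loadCoreLibraries();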


  /**
   * @task hook
   */
  public static function didShutdown() {
    // Disconnect any active rate limits before we shut down. If we don't do
    // this, requests which exit early will lock a slot in any active
    // connection limits, and won't count for rate limits.
    self::disconnectRateLimits(array());

    $event = error_get_last();

    if (!$event) {
      return;
    }

    switch ($event['type']) {
      case E_ERROR:
      case E_PARSE:
      case E_COMPILE_ERROR:
        break;
      default:
        return;
    }

    $msg = ">>> UNRECOVERABLE FATAL ERROR <<<\n\n";
    if ($event) {
      // Even though we should be emitting this as text-plain, escape things
      // just to be sure since we can't really be sure what the program state
      // is when we get here.
      $msg .= htmlspecialchars(
        $event['message']."\n\n".$event['file'].':'.$event['line'],
        ENT_QUOTES,
        'UTF-8');
    }

    // flip dem tables
    $msg .= "\n\n\n";
    $msg .= "\xe2\x94\xbb\xe2\x94\x81\xe2\x94\xbb\x20\xef\xb8\xb5\x20\xc2\xaf".
            "\x5c\x5f\x28\xe3\x83\x84\x29\x5f\x2f\xc2\xaf\x20\xef\xb8\xb5\x20".
            "\xe2\x94\xbb\xe2\x94\x81\xe2\x94\xbb";

    self::didFatal($msg);
  }


  public static function loadCoreLibraries() {
    $phabricator_root = dirname(dirname(dirname(__FILE__)));
    $libraries_root = dirname($phabricator_root);

    $root = null;
    if (!empty($_SERVER['PHUTIL_LIBRARY_ROOT'])) {
      $root = $_SERVER['PHUTIL_LIBRARY_ROOT'];
    }

    ini_set(
      'include_path',
      $libraries_root.PATH_SEPARATOR.ini_get('include_path'));

    @include_once $root.'libphutil/src/__phutil_library_init__.php';
    if (!@constant('__LIBPHUTIL__')) {
      self::didFatal(
        "Unable to load libphutil. Put libphutil/ next to phabricator/, or ".
        "update your PHP 'include_path' to include the parent directory of ".
        "libphutil/.");
    }

    phutil_load_library('arcanist/src');

    // Load Phabricator itself using the absolute path, so we never end up
    // doing anything surprising (loading index.php and libraries from
    // different directories).
    phutil_load_library($phabricator_root.'/src');
  }


/* -( Output Capture )----------------------------------------------------- */


  public static function beginOutputCapture() {
    if (self::$capturingOutput) {
      self::didFatal('Already capturing output!');
    }
    self::$capturingOutput = true;
    ob_start();
  }


  public static function endOutputCapture() {
    if (!self::$capturingOutput) {
      return null;
    }
    self::$capturingOutput = false;
    return ob_get_clean();
  }
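

  // These two methods are meant to be used as a pair: didStartup() begins
  // capture, and a caller which later wants to build a clean response drains
  // anything written in between (didFatal() does this before emitting its
  // plain text error). A minimal sketch of the pairing:
  //
  //   PhabricatorStartup::beginOutputCapture();
  //   // ...code which may echo stray output...
  //   $stray_output = PhabricatorStartup::endOutputCapture();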


/* -( Debug Time Limit )--------------------------------------------------- */


  /**
   * Set a time limit (in seconds) for the current script. After time expires,
   * the script fatals.
   *
   * This works like `max_execution_time`, but prints out a useful stack trace
   * when the time limit expires. This is primarily intended to make it easier
   * to debug pages which hang by allowing extraction of a stack trace: set a
   * short debug limit, then use the trace to figure out what's happening.
   *
   * The limit is implemented with a tick function, so enabling it implies
   * some accounting overhead.
   *
   * @param int Time limit in seconds.
   * @return void
   */
  public static function setDebugTimeLimit($limit) {
    self::$debugTimeLimit = $limit;

    static $initialized;
    if (!$initialized) {
      declare(ticks=1);
      register_tick_function(array(__CLASS__, 'onDebugTick'));
    }
  }
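

  // For instance, a preamble or debugging configuration might enable this
  // with a small value so a hung page dies with a trace instead of being
  // killed silently by the webserver (a sketch; the "debug.time-limit"
  // configuration option is the usual way to reach this):
  //
  //   PhabricatorStartup::setDebugTimeLimit(10);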


  /**
   * Callback tick function used by @{method:setDebugTimeLimit}.
   *
   * Fatals with a useful stack trace after the time limit expires.
   *
   * @return void
   */
  public static function onDebugTick() {
    $limit = self::$debugTimeLimit;
    if (!$limit) {
      return;
    }

    $elapsed = (microtime(true) - self::getStartTime());
    if ($elapsed > $limit) {
      $frames = array();
      foreach (debug_backtrace() as $frame) {
        $file = isset($frame['file']) ? $frame['file'] : '-';
        $file = basename($file);

        $line = isset($frame['line']) ? $frame['line'] : '-';
        $class = isset($frame['class']) ? $frame['class'].'->' : null;
        $func = isset($frame['function']) ? $frame['function'].'()' : '?';

        $frames[] = "{$file}:{$line} {$class}{$func}";
      }

      self::didFatal(
        "Request aborted by debug time limit after {$limit} seconds.\n\n".
        "STACK TRACE\n".
        implode("\n", $frames));
    }
  }


/* -( In Case of Apocalypse )---------------------------------------------- */


  /**
   * Fatal the request completely in response to an exception, sending a plain
   * text message to the client. Calls @{method:didFatal} internally.
   *
   * @param string Brief description of the exception context, like
   *   `"Rendering Exception"`.
   * @param Throwable The exception itself.
   * @param bool True if it's okay to show the exception's stack trace
   *   to the user. The trace will always be logged.
   * @return exit This method **does not return**.
   *
   * @task apocalypse
   */
  public static function didEncounterFatalException(
    $note,
    $ex,
    $show_trace) {

    $message = '['.$note.'/'.get_class($ex).'] '.$ex->getMessage();

    $full_message = $message;
    $full_message .= "\n\n";
    $full_message .= $ex->getTraceAsString();

    if ($show_trace) {
      $message = $full_message;
    }

    self::didFatal($message, $full_message);
  }


  /**
   * Fatal the request completely, sending a plain text message to the client.
   *
   * @param string Plain text message to send to the client.
   * @param string Plain text message to send to the error log. If not
   *   provided, the client message is used. You can pass a more detailed
   *   message here (e.g., with stack traces) to avoid showing it to users.
   * @return exit This method **does not return**.
   *
   * @task apocalypse
   */
  public static function didFatal($message, $log_message = null) {
    if ($log_message === null) {
      $log_message = $message;
    }

    self::endOutputCapture();
    $access_log = self::$accessLog;
    if ($access_log) {
      // We may end up here before the access log is initialized, e.g. from
      // verifyPHP().
      $access_log->setData(
        array(
          'c' => 500,
        ));
      $access_log->write();
    }

    header(
      'Content-Type: text/plain; charset=utf-8',
      $replace = true,
      $http_error = 500);

    error_log($log_message);
    echo $message."\n";

    exit(1);
  }


/* -( Validation )--------------------------------------------------------- */


  /**
   * @task validation
   */
  private static function setupPHP() {
    error_reporting(E_ALL | E_STRICT);
    self::$oldMemoryLimit = ini_get('memory_limit');
    ini_set('memory_limit', -1);

    // If we have libxml, disable the incredibly dangerous entity loader.
    if (function_exists('libxml_disable_entity_loader')) {
      libxml_disable_entity_loader(true);
    }

    // See T13060. If the locale for this process (the parent process) is not
    // a UTF-8 locale we can encounter problems when launching subprocesses
    // which receive UTF-8 parameters in their command line argument list.
    @setlocale(LC_ALL, 'en_US.UTF-8');
  }


  /**
   * @task validation
   */
  public static function getOldMemoryLimit() {
    return self::$oldMemoryLimit;
  }


  /**
   * @task validation
   */
  private static function normalizeInput() {
    // Replace superglobals with unfiltered versions, disrespect php.ini (we
    // filter ourselves).

    // NOTE: We don't filter INPUT_SERVER because we don't want to overwrite
    // changes made in "preamble.php".

    // NOTE: We don't filter INPUT_POST because we may be constructing it
    // lazily if "enable_post_data_reading" is disabled.

    $filter = array(
      INPUT_GET,
      INPUT_ENV,
      INPUT_COOKIE,
    );
    foreach ($filter as $type) {
      $filtered = filter_input_array($type, FILTER_UNSAFE_RAW);
      if (!is_array($filtered)) {
        continue;
      }
      switch ($type) {
        case INPUT_GET:
          $_GET = array_merge($_GET, $filtered);
          break;
        case INPUT_COOKIE:
          $_COOKIE = array_merge($_COOKIE, $filtered);
          break;
        case INPUT_ENV:
          $env = array_merge($_ENV, $filtered);
          $_ENV = self::filterEnvSuperglobal($env);
          break;
      }
    }

    self::rebuildRequest();
  }


  /**
   * @task validation
   */
  public static function rebuildRequest() {
    // Rebuild $_REQUEST, respecting order declared in ".ini" files.
    $order = ini_get('request_order');

    if (!$order) {
      $order = ini_get('variables_order');
    }

    if (!$order) {
      // $_REQUEST will be empty, so leave it alone.
      return;
    }

    $_REQUEST = array();
    for ($ii = 0; $ii < strlen($order); $ii++) {
      switch ($order[$ii]) {
        case 'G':
          $_REQUEST = array_merge($_REQUEST, $_GET);
          break;
        case 'P':
          $_REQUEST = array_merge($_REQUEST, $_POST);
          break;
        case 'C':
          $_REQUEST = array_merge($_REQUEST, $_COOKIE);
          break;
        default:
          // $_ENV and $_SERVER never go into $_REQUEST.
          break;
      }
    }
  }
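

  // Worked example of the merge order above: with request_order = "GP" (a
  // common php.ini default), $_GET is merged first and $_POST second, so a
  // POST value wins over a GET value with the same name:
  //
  //   // php.ini: request_order = "GP"
  //   // ?id=1 in the query string, id=2 in the POST body
  //   // After rebuildRequest(): $_REQUEST['id'] === '2'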


  /**
   * Adjust `$_ENV` before execution.
   *
   * Adjustments here primarily impact the environment as seen by subprocesses.
   * The environment is forwarded explicitly by @{class:ExecFuture}.
   *
   * @param map<string, wild> Input `$_ENV`.
   * @return map<string, string> Suitable `$_ENV`.
   * @task validation
   */
  private static function filterEnvSuperglobal(array $env) {

    // In some configurations, we may get "argc" and "argv" set in $_ENV.
    // These are not real environmental variables, and "argv" may have an array
    // value which can not be forwarded to subprocesses. Remove these from the
    // environment if they are present.
    unset($env['argc']);
    unset($env['argv']);

    return $env;
  }


  /**
   * @task validation
   */
  private static function verifyPHP() {
    $required_version = '5.2.3';
    if (version_compare(PHP_VERSION, $required_version) < 0) {
      self::didFatal(
        "You are running PHP version '".PHP_VERSION."', which is older than ".
        "the minimum version, '{$required_version}'. Update to at least ".
        "'{$required_version}'.");
    }

    if (@get_magic_quotes_gpc()) {
      self::didFatal(
        "Your server is configured with PHP 'magic_quotes_gpc' enabled. This ".
        "feature is 'highly discouraged' by PHP's developers and you must ".
        "disable it to run Phabricator. Consult the PHP manual for ".
        "instructions.");
    }

    if (extension_loaded('apc')) {
      $apc_version = phpversion('apc');
      $known_bad = array(
        '3.1.14' => true,
        '3.1.15' => true,
        '3.1.15-dev' => true,
      );
      if (isset($known_bad[$apc_version])) {
        self::didFatal(
          "You have APC {$apc_version} installed. This version of APC is ".
          "known to be bad, and does not work with Phabricator (it will ".
          "cause Phabricator to fatal unrecoverably with nonsense errors). ".
          "Downgrade to version 3.1.13.");
      }
    }

    if (isset($_SERVER['HTTP_PROXY'])) {
      self::didFatal(
        'This HTTP request included a "Proxy:" header, poisoning the '.
        'environment (CVE-2016-5385 / httpoxy). Declining to process this '.
        'request. For details, see: https://phurl.io/u/httpoxy');
    }
  }


  /**
   * @task validation
   */
  private static function verifyRewriteRules() {
    if (isset($_REQUEST['__path__']) && strlen($_REQUEST['__path__'])) {
      return;
    }

    if (php_sapi_name() == 'cli-server') {
      // Compatibility with PHP 5.4+ built-in web server.
      $url = parse_url($_SERVER['REQUEST_URI']);
      $_REQUEST['__path__'] = $url['path'];
      return;
    }

    if (!isset($_REQUEST['__path__'])) {
      self::didFatal(
        "Request parameter '__path__' is not set. Your rewrite rules ".
        "are not configured correctly.");
    }

    if (!strlen($_REQUEST['__path__'])) {
      self::didFatal(
        "Request parameter '__path__' is set, but empty. Your rewrite rules ".
        "are not configured correctly. The '__path__' should always ".
        "begin with a '/'.");
    }
  }


  /**
   * Detect if this request has had its POST data stripped by exceeding the
   * 'post_max_size' PHP configuration limit.
   *
   * PHP has a setting called 'post_max_size'. If a POST request arrives with
   * a body larger than the limit, PHP doesn't generate $_POST but processes
   * the request anyway, and provides no formal way to detect that this
   * happened.
   *
   * We can still read the entire body out of `php://input`. However, according
   * to the documentation the stream isn't available for "multipart/form-data"
   * (on nginx + php-fpm it appears that it is available, though, at least) so
   * any attempt to generate $_POST would be fragile.
   *
   * @task validation
   */
  private static function detectPostMaxSizeTriggered() {
    // If this wasn't a POST, we're fine.
    if ($_SERVER['REQUEST_METHOD'] != 'POST') {
      return;
    }

    // If "enable_post_data_reading" is off, we won't have $_POST and this
    // condition is effectively impossible.
    if (!ini_get('enable_post_data_reading')) {
      return;
    }

    // If there's POST data, clearly we're in good shape.
    if ($_POST) {
      return;
    }

    // For HTML5 drag-and-drop file uploads, Safari submits the data as
    // "application/x-www-form-urlencoded". For most files this generates
    // something in POST because most files decode to some nonempty (albeit
    // meaningless) value. However, some files (particularly small images)
    // don't decode to anything. If we know this is a drag-and-drop upload,
    // we can skip this check.
    if (isset($_REQUEST['__upload__'])) {
      return;
    }

    // PHP generates $_POST only for two content types. This routing happens
    // in `main/php_content_types.c` in PHP. Normally, all forms use one of
    // these content types, but some requests may not -- for example, Firefox
    // submits files sent over HTML5 XMLHTTPRequest APIs with the Content-Type
    // of the file itself. If we don't have a recognized content type, we
    // don't need $_POST.
    //
    // NOTE: We use strncmp() because the actual content type may be something
    // like "multipart/form-data; boundary=...".
    //
    // NOTE: Chrome sometimes omits this header, see some discussion in T1762
    // and http://code.google.com/p/chromium/issues/detail?id=6800
    $content_type = isset($_SERVER['CONTENT_TYPE'])
      ? $_SERVER['CONTENT_TYPE']
      : '';

    $parsed_types = array(
      'application/x-www-form-urlencoded',
      'multipart/form-data',
    );

    $is_parsed_type = false;
    foreach ($parsed_types as $parsed_type) {
      if (strncmp($content_type, $parsed_type, strlen($parsed_type)) === 0) {
        $is_parsed_type = true;
        break;
      }
    }

    if (!$is_parsed_type) {
      return;
    }

    // Check for 'Content-Length'. If there's no data, we don't expect $_POST
    // to exist.
    $length = (int)$_SERVER['CONTENT_LENGTH'];
    if (!$length) {
      return;
    }

    // Time to fatal: we know this was a POST with data that should have been
    // populated into $_POST, but it wasn't.

    $config = ini_get('post_max_size');
    self::didFatal(
      "As received by the server, this request had a nonzero content length ".
      "but no POST data.\n\n".
      "Normally, this indicates that it exceeds the 'post_max_size' setting ".
      "in the PHP configuration on the server. Increase the 'post_max_size' ".
      "setting or reduce the size of the request.\n\n".
      "Request size according to 'Content-Length' was '{$length}', ".
      "'post_max_size' is set to '{$config}'.");
  }
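

  // Concretely (the numbers here are hypothetical): with post_max_size = 8M,
  // a 10 MB form POST reaches PHP with a nonzero Content-Length but an empty
  // $_POST, and the checks above fatal with a message pointing at
  // 'post_max_size' instead of failing later in a more confusing way.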


/* -( Rate Limiting )------------------------------------------------------ */


  /**
   * Add a new client limit.
   *
   * @param PhabricatorClientLimit New limit.
   * @return PhabricatorClientLimit The limit.
   */
  public static function addRateLimit(PhabricatorClientLimit $limit) {
    self::$limits[] = $limit;
    return $limit;
  }
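

  // Limits are registered before the request runs, typically from a preamble
  // script. A minimal sketch (assuming the PhabricatorClientRateLimit subclass
  // and its setters, which are defined elsewhere, not in this file):
  //
  //   $limit = new PhabricatorClientRateLimit();
  //   $limit->setLimit(1000);
  //   PhabricatorStartup::addRateLimit($limit);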


  /**
   * Apply configured rate limits.
   *
   * If any limit is exceeded, this method terminates the request.
   *
   * @return void
   * @task ratelimit
   */
  private static function connectRateLimits() {
    $limits = self::$limits;

    $reason = null;
    $connected = array();
    foreach ($limits as $limit) {
      $reason = $limit->didConnect();
      $connected[] = $limit;
      if ($reason !== null) {
        break;
      }
    }

    // If we're killing the request here, disconnect any limits that we
    // connected to try to keep the accounting straight.
    if ($reason !== null) {
      foreach ($connected as $limit) {
        $limit->didDisconnect(array());
      }

      self::didRateLimit($reason);
    }
  }


  /**
   * Tear down rate limiting and allow limits to score the request.
   *
   * @param map<string, wild> Additional, freeform request state.
   * @return void
   * @task ratelimit
   */
  public static function disconnectRateLimits(array $request_state) {
    $limits = self::$limits;

    // Remove all limits before disconnecting them so this works properly if
    // it runs twice. (We run this automatically as a shutdown handler.)
    self::$limits = array();

    foreach ($limits as $limit) {
      $limit->didDisconnect($request_state);
    }
  }


  /**
   * Emit an HTTP 429 "Too Many Requests" response (indicating that the user
   * has exceeded application rate limits) and exit.
   *
   * @return exit This method **does not return**.
   * @task ratelimit
   */
  private static function didRateLimit($reason) {
    header(
      'Content-Type: text/plain; charset=utf-8',
      $replace = true,
      $http_error = 429);

    echo $reason;

    exit(1);
  }


/* -( Startup Timers )----------------------------------------------------- */


  /**
   * Record the beginning of a new startup phase.
   *
   * For phases which occur before @{class:PhabricatorStartup} loads, save the
   * time and record it with @{method:recordStartupPhase} after the class is
   * available.
   *
   * @param string Phase name.
   * @task phases
   */
  public static function beginStartupPhase($phase) {
    self::recordStartupPhase($phase, microtime(true));
  }


  /**
   * Record the start time of a previously executed startup phase.
   *
   * For startup phases which occur after @{class:PhabricatorStartup} loads,
   * use @{method:beginStartupPhase} instead. This method can be used to
   * record a time before the class loads, then hand it over once the class
   * becomes available.
   *
   * @param string Phase name.
   * @param float Phase start time, from `microtime(true)`.
   * @task phases
   */
  public static function recordStartupPhase($phase, $time) {
    self::$phases[$phase] = $time;
  }
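

  // A sketch of the intended split between the two methods above (phase names
  // here are illustrative, not fixed by this class):
  //
  //   // Very early, before this class has been loaded:
  //   $preamble_start = microtime(true);
  //
  //   // Later, once the class is available:
  //   PhabricatorStartup::recordStartupPhase('preamble', $preamble_start);
  //   PhabricatorStartup::beginStartupPhase('startup');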


  /**
   * Get information about startup phase timings.
   *
   * Sometimes, performance problems can occur before we start the profiler.
   * Since the profiler can't examine these phases, it isn't useful in
   * understanding their performance costs.
   *
   * Instead, the startup process marks when it enters various phases using
   * @{method:beginStartupPhase}. A later call to this method can retrieve this
   * information, which can be examined to gain greater insight into where
   * time was spent. The output is still crude, but better than nothing.
   *
   * @task phases
   */
  public static function getPhases() {
    return self::$phases;
  }

}