<?php

abstract class AphrontResponse extends Phobject {

  private $request;
  private $cacheable = false;
  private $canCDN;
  private $responseCode = 200;
  private $lastModified = null;

  protected $frameable;

  public function setRequest($request) {
    $this->request = $request;
    return $this;
  }

  public function getRequest() {
    return $this->request;
  }


/* -( Content )------------------------------------------------------------ */


  public function getContentIterator() {
    return array($this->buildResponseString());
  }

  public function buildResponseString() {
    throw new PhutilMethodNotImplementedException();
  }
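
  // Illustrative sketch only: a hypothetical minimal subclass showing how
  // buildResponseString() is meant to be implemented. The class name and
  // Content-Type below are invented for illustration; real subclasses in
  // the codebase are more elaborate.
  //
  //   final class ExamplePlainTextResponse extends AphrontResponse {
  //
  //     private $content;
  //
  //     public function setContent($content) {
  //       $this->content = $content;
  //       return $this;
  //     }
  //
  //     public function buildResponseString() {
  //       return $this->content;
  //     }
  //
  //     public function getHeaders() {
  //       $headers = parent::getHeaders();
  //       $headers[] = array('Content-Type', 'text/plain; charset=UTF-8');
  //       return $headers;
  //     }
  //
  //   }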


/* -( Metadata )----------------------------------------------------------- */


  public function getHeaders() {
    $headers = array();

    if (!$this->frameable) {
      $headers[] = array('X-Frame-Options', 'Deny');
    }
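
    // (Subclasses that must be embeddable, for example pages meant to be
    // rendered inside an iframe, call setFrameable(true); that skips the
    // "X-Frame-Options: Deny" header above.)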

    if ($this->getRequest() && $this->getRequest()->isHTTPS()) {
      $hsts_key = 'security.strict-transport-security';
      $use_hsts = PhabricatorEnv::getEnvConfig($hsts_key);
      if ($use_hsts) {
        $duration = phutil_units('365 days in seconds');
      } else {
        // If HSTS has been disabled, tell browsers to turn it off. This may
        // not be effective because we can only disable it over a valid HTTPS
        // connection, but it best represents the configured intent.
        $duration = 0;
      }

      $headers[] = array(
        'Strict-Transport-Security',
        "max-age={$duration}; includeSubdomains; preload",
      );
    }

    return $headers;
  }
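
  // For illustration: with `security.strict-transport-security` enabled, an
  // HTTPS response from getHeaders() above carries a header like
  //
  //   Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
  //
  // (31536000 seconds is 365 days). With the option disabled, the same
  // header is sent with "max-age=0", asking browsers that previously saw the
  // policy to drop it.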

  public function setCacheDurationInSeconds($duration) {
    $this->cacheable = $duration;
    return $this;
  }

  public function setCanCDN($can_cdn) {
    $this->canCDN = $can_cdn;
    return $this;
  }

  public function setLastModified($epoch_timestamp) {
    $this->lastModified = $epoch_timestamp;
    return $this;
  }

  public function setHTTPResponseCode($code) {
    $this->responseCode = $code;
    return $this;
  }

  public function getHTTPResponseCode() {
    return $this->responseCode;
  }
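
  // Illustrative sketch only (the subclass and the file object are
  // assumptions, not taken from this file): a caller preparing a publicly
  // cacheable response might configure it like
  //
  //   $response = id(new AphrontFileResponse())
  //     ->setCacheDurationInSeconds(phutil_units('30 days in seconds'))
  //     ->setCanCDN(true)
  //     ->setLastModified($file->getDateModified());
  //
  // getCacheHeaders() later in this class then emits
  // "Cache-Control: max-age=2592000, public" plus matching "Expires" and
  // "Last-Modified" headers.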

  public function getHTTPResponseMessage() {
    switch ($this->getHTTPResponseCode()) {
      case 100: return 'Continue';
      case 101: return 'Switching Protocols';
      case 200: return 'OK';
      case 201: return 'Created';
      case 202: return 'Accepted';
      case 203: return 'Non-Authoritative Information';
      case 204: return 'No Content';
      case 205: return 'Reset Content';
      case 206: return 'Partial Content';
      case 300: return 'Multiple Choices';
      case 301: return 'Moved Permanently';
      case 302: return 'Found';
      case 303: return 'See Other';
      case 304: return 'Not Modified';
      case 305: return 'Use Proxy';
      case 306: return 'Switch Proxy';
      case 307: return 'Temporary Redirect';
      case 400: return 'Bad Request';
      case 401: return 'Unauthorized';
      case 402: return 'Payment Required';
      case 403: return 'Forbidden';
      case 404: return 'Not Found';
      case 405: return 'Method Not Allowed';
      case 406: return 'Not Acceptable';
      case 407: return 'Proxy Authentication Required';
      case 408: return 'Request Timeout';
      case 409: return 'Conflict';
      case 410: return 'Gone';
      case 411: return 'Length Required';
      case 412: return 'Precondition Failed';
      case 413: return 'Request Entity Too Large';
      case 414: return 'Request-URI Too Long';
      case 415: return 'Unsupported Media Type';
      case 416: return 'Requested Range Not Satisfiable';
      case 417: return 'Expectation Failed';
      case 418: return "I'm a teapot";
      case 426: return 'Upgrade Required';
      case 500: return 'Internal Server Error';
      case 501: return 'Not Implemented';
      case 502: return 'Bad Gateway';
      case 503: return 'Service Unavailable';
      case 504: return 'Gateway Timeout';
      case 505: return 'HTTP Version Not Supported';
      default: return '';
    }
  }

  public function setFrameable($frameable) {
    $this->frameable = $frameable;
    return $this;
  }

  public static function processValueForJSONEncoding(&$value, $key) {
    if ($value instanceof PhutilSafeHTMLProducerInterface) {
      // This renders the producer down to PhutilSafeHTML, which will then
      // be simplified into a string below.
      $value = hsprintf('%s', $value);
    }

    if ($value instanceof PhutilSafeHTML) {
      // TODO: Javelin supports implicit conversion of '__html' objects to
      // JX.HTML, but only for Ajax responses, not behaviors. Just leave
      // things as they are for now (where behaviors treat responses as HTML
      // or plain text at their discretion).
      $value = $value->getHTMLContent();
    }
  }

  public static function encodeJSONForHTTPResponse(array $object) {

    array_walk_recursive(
      $object,
      array(__CLASS__, 'processValueForJSONEncoding'));

    $response = json_encode($object);

    // Prevent content sniffing attacks by encoding "<" and ">", so browsers
    // won't try to execute the document as HTML even if they ignore
    // Content-Type and X-Content-Type-Options. See T865.
    $response = str_replace(
      array('<', '>'),
      array('\u003c', '\u003e'),
      $response);

    return $response;
  }
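
  // For illustration (payload invented): encoding a value that contains
  // markup-looking text escapes the angle brackets so the body cannot be
  // sniffed as HTML:
  //
  //   AphrontResponse::encodeJSONForHTTPResponse(
  //     array('title' => '<b>hi</b>'));
  //   // => {"title":"\u003cb\u003ehi\u003c\/b\u003e"}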

  protected function addJSONShield($json_response) {
    // Add a shield to prevent "JSON Hijacking" attacks where an attacker
    // requests a JSON response using a normal <script /> tag and then uses
    // Object.prototype.__defineSetter__() or similar to read response data.
    // This prefix causes the browser to loop infinitely instead of handing
    // over sensitive data.

    $shield = 'for (;;);';

    $response = $shield.$json_response;

    return $response;
  }
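
  // Illustrative sketch only (the subclass context and $this->content are
  // assumed, not part of this class): an Ajax/JSON response subclass would
  // typically combine the two helpers above, and the JavaScript client is
  // expected to strip the "for (;;);" prefix before parsing:
  //
  //   public function buildResponseString() {
  //     $payload = self::encodeJSONForHTTPResponse($this->content);
  //     return $this->addJSONShield($payload);
  //   }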

  public function getCacheHeaders() {
    $headers = array();
    if ($this->cacheable) {
      $cache_control = array();
      $cache_control[] = sprintf('max-age=%d', $this->cacheable);

      if ($this->canCDN) {
        $cache_control[] = 'public';
      } else {
        $cache_control[] = 'private';
      }

      $headers[] = array(
        'Cache-Control',
        implode(', ', $cache_control),
      );

      $headers[] = array(
        'Expires',
        $this->formatEpochTimestampForHTTPHeader(time() + $this->cacheable),
      );
    } else {
      $headers[] = array(
        'Cache-Control',
        'no-store',
      );
      $headers[] = array(
        'Expires',
        'Sat, 01 Jan 2000 00:00:00 GMT',
      );
    }

    if ($this->lastModified) {
      $headers[] = array(
        'Last-Modified',
        $this->formatEpochTimestampForHTTPHeader($this->lastModified),
      );
    }

    // IE has a feature where it may override an explicit Content-Type
    // declaration by inferring a content type. This can be a security risk
    // and we always explicitly transmit the correct Content-Type header, so
    // prevent IE from using inferred content types. This only offers
    // protection on recent versions of IE; IE6/7 and Opera currently ignore
    // this header.
    $headers[] = array('X-Content-Type-Options', 'nosniff');

    return $headers;
  }
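
  // For illustration: with a 30-day cache duration and setCanCDN(true),
  // getCacheHeaders() yields roughly
  //
  //   Cache-Control: max-age=2592000, public
  //   Expires: (now + 30 days, formatted as "D, d M Y H:i:s GMT")
  //   X-Content-Type-Options: nosniff
  //
  // With no cache duration set, it yields
  //
  //   Cache-Control: no-store
  //   Expires: Sat, 01 Jan 2000 00:00:00 GMT
  //   X-Content-Type-Options: nosniff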

  private function formatEpochTimestampForHTTPHeader($epoch_timestamp) {
    return gmdate('D, d M Y H:i:s', $epoch_timestamp).' GMT';
  }
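
  // For example, formatEpochTimestampForHTTPHeader(0) returns
  // "Thu, 01 Jan 1970 00:00:00 GMT".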

  public function didCompleteWrite($aborted) {
    return;
  }

}