Summary: Depends on D20256. Ref T13249. See PHI1115. This primarily makes `bin/policy unlock --owner epriestley T123` work. This is important for "Edit Locked" tasks, since changing the edit policy doesn't really do anything.
Test Plan: Hard-locked a task as "alice", reassigned it to myself with `bin/policy unlock --owner epriestley`.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13249
Differential Revision: https://secure.phabricator.com/D20257
Summary:
See PHI1115. Ref T13249. Currently, you can `bin/policy unlock` objects which have become inaccessible through some sort of policy mistake.
This script uses a very blunt mechanism to perform unlocks: just manually calling `setXPolicy()` and then trying to `save()` the object. Improve things a bit:
- More surgical: allow selection of which policies you want to adjust with "--view", "--edit", and "--owner" (potentially important for some objects like Herald rules which don't have policies, and "edit-locked" tasks which basically ignore the edit policy).
- More flexible: Instead of unlocking into "All Users" (which could be bad for stuff like Passphrase credentials, since you create a short window where anyone can access them), take a username as a parameter and set the policy to "just that user". Normally, you'd run this as `bin/policy unlock --view myself --edit myself` or similar, now.
- More modular: We can't do "owner" transactions in a generic way, but lay the groundwork for letting applications support providing an owner reassignment mechanism.
- More modern: Use transactions, not raw `set()` + `save()`.
This previously had some hard-coded logic around unlocking applications. I've removed it, and the new generic code doesn't actually work for applications. It probably should be made to work at some point, but I believe it's exceptionally difficult to lock yourself out of applications, and you can unlock them with `bin/config set phabricator.application-settings ...` anyway, so I'm not too worried about this. It's also hard to figure out the PHID of an application, and no one has ever asked about this, so I'd guess the real-world use of `bin/policy unlock` to unlock applications is approximately zero.
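As a rough sketch of the "more modern" transaction-based approach (the general shape only, not the verbatim implementation; `$content_source` and `$viewer` are placeholders, and the edit policy and owner follow the same pattern with different transaction types):

```
$xactions = array();

// Unlock the view policy down to a single named user.
$xactions[] = $object->getApplicationTransactionTemplate()
  ->setTransactionType(PhabricatorTransactions::TYPE_VIEW_POLICY)
  ->setNewValue($user->getPHID());

$editor = $object->getApplicationTransactionEditor()
  ->setActor($viewer)
  ->setContentSource($content_source)
  ->setContinueOnNoEffect(true)
  ->setContinueOnMissingFields(true);

$editor->applyTransactions($object, $xactions);
```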
Test Plan:
- Used `bin/policy unlock` to unlock some objects, saw sensible transactions.
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13249
Differential Revision: https://secure.phabricator.com/D20256
Summary:
Ref T13217. This method is slightly tricky:
- We can't safely return a string: return an array instead.
- It no longer makes sense to accept glue. All callers use `', '` as glue anyway, so hard-code that.
Then convert all callsites.
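As a before/after sketch of what a callsite change looks like (the real method and call sites differ slightly; `$pieces` stands in for the rendered handle links):

```
// Before: unsafe string return, with caller-supplied glue.
return implode($glue, $pieces);

// After: array return, with the ', ' glue hard-coded and interleaved.
return array_interleave(', ', $pieces);
```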
Test Plan: Browsed around, saw fewer "unsafe" errors in error log.
Reviewers: amckinley
Reviewed By: amckinley
Subscribers: yelirekim, PHID-OPKG-gm6ozazyms6q6i22gyam
Maniphest Tasks: T13217
Differential Revision: https://secure.phabricator.com/D19784
Summary:
Ref T13151. See PHI647. This allows us to link to any object by PHID, without disclosing information in the monogram (like `#fire-steve`).
This capability is relevant when building "secure mail", to provide a link to the object regardless of whether the monogram discloses information or not.
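The lookup is roughly this shape (a sketch, not the exact controller code; `$name` is the raw path component, which may be a monogram or a PHID, and `$viewer` is the request viewer):

```
// Resolve the name or PHID to an object the viewer can actually see.
$objects = id(new PhabricatorObjectQuery())
  ->setViewer($viewer)
  ->withNames(array($name))
  ->execute();

if (!$objects) {
  return new Aphront404Response();
}

// Redirect to the object's canonical URI via its handle.
$object = head($objects);
$handles = $viewer->loadHandles(array($object->getPHID()));

return id(new AphrontRedirectResponse())
  ->setURI($handles[$object->getPHID()]->getURI());
```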
Test Plan: Visited `/object/D123/` (redirect), `/object/xyz/` (404), `/object/PHID-DREV-.../` (redirect).
Reviewers: amckinley
Reviewed By: amckinley
Maniphest Tasks: T13151
Differential Revision: https://secure.phabricator.com/D19487
Summary: Ref T13103. Make favicons customizable, and perform dynamic compositing to add marker to indicate things like "unread messages".
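The compositing idea, sketched with GD (an illustration of the technique only, not the real favicon pipeline; the path variables are placeholders):

```
// Load the base favicon and the small marker overlay.
$icon = imagecreatefrompng($base_favicon_path);
$dot = imagecreatefrompng($marker_overlay_path);

// Composite the marker into the bottom-right corner of the icon.
imagecopy(
  $icon,
  $dot,
  imagesx($icon) - imagesx($dot),
  imagesy($icon) - imagesy($dot),
  0,
  0,
  imagesx($dot),
  imagesy($dot));

imagepng($icon, $composited_output_path);
```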
Test Plan: Viewed favicons in Safari, Firefox and Chrome. With unread messages, saw pink dot composited into icon.
Maniphest Tasks: T13103
Differential Revision: https://secure.phabricator.com/D19209
Summary:
Fixes T12946. `bin/remove destroy` does not remove working copies: doing so is more dangerous than usual, and we can't do it in the general (clustered) case.
Print a notification message after destroying a repository.
Test Plan:
- Destroyed a repository, got a hint about the working copy.
- Destroyed a task, things worked normally.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T12946
Differential Revision: https://secure.phabricator.com/D18313
Summary: Ref T4245. We disallow `/diffusion/` in robots.txt already because indexers tend to get lost blaming every line of every file throughout history, but didn't update the list for the `/source/` alias. Update it.
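The effective change to the generated file, sketched as the lines the controller emits (variable name illustrative):

```
$out = array();
$out[] = 'User-Agent: *';

// Disallow both the canonical and aliased source browsing prefixes.
$out[] = 'Disallow: /diffusion/';
$out[] = 'Disallow: /source/';
```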
Test Plan: Visited `/robots.txt` locally.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4245
Differential Revision: https://secure.phabricator.com/D17637
Summary:
Ref T11954. This is kind of complex and I'm not sure I want to actually land it, but it gives us a fairly good improvement for clustered repositories so I'm leaning toward moving forward.
When we make (or receive) clustered repository requests, we must first load a bunch of stuff out of Almanac to figure out where to send the request (or if we can handle the request ourselves).
This involves several round trip queries into Almanac (service, device, interfaces, bindings, properties) and generally is fairly slow/expensive. The actual data we get out of it is just a list of URIs.
Caching this would be very easy, except that invalidating the cache is difficult, since editing any binding, property, interface, or device may invalidate the cache for indirectly connected services and repositories.
To address this, introduce `PhabricatorCacheEngine`, which is an extensible engine like `PhabricatorDestructionEngine` for propagating cache updates. It has two modes:
- Discover linked objects (that is: find related objects which may need to have caches invalidated).
- Invalidate caches (that is: nuke any caches which need to be nuked).
Both modes are extensible, so third-party code can build repository-dependent caches or whatever. This may be overkill, but even if Almanac is the only thing we use it for, it feels like a fairly clean solution to the problem.
With `CacheEngine`, make any edit to Almanac stuff propagate up to the Service, and then from the Service to any linked Repositories.
Once we hit repositories, invalidate their caches when Almanac changes.
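A rough sketch of what an extension with these two modes looks like (class and method names approximate the new API; the traversal bodies are elided):

```
final class ExampleRepositoryClusterCacheExtension
  extends PhabricatorCacheEngineExtension {

  public function getExtensionKey() {
    return 'example.repository.cluster';
  }

  // Discovery phase: given edited objects (say, Almanac services), find
  // indirectly linked objects (repositories) whose caches may be stale.
  public function discoverLinkedObjects(
    PhabricatorCacheEngine $engine,
    array $objects) {
    $linked = array();
    // ...walk service -> bound repositories and collect them...
    return $linked;
  }

  // Invalidation phase: nuke the cached routing URIs for those objects.
  public function deleteCaches(
    PhabricatorCacheEngine $engine,
    array $objects) {
    // ...drop each repository's Almanac URI cache...
  }
}
```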
Test Plan:
- Observed a 20-30ms performance improvement with `ab -n 100`.
- (The main page making Conduit calls also gets a performance improvement, although that's a little trickier to measure directly.)
- Added debugging code to the cache engine stuff to observe the linking and invalidation phases.
- Made invalidation throw; verified that editing properties, bindings, etc, properly invalidates the cache of any indirectly linked repositories.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T11954
Differential Revision: https://secure.phabricator.com/D17000
Summary:
See D16734.
- Add ".ico" files to the Celerity map.
- Add a formal route for "/favicon.ico".
- Remove instructions to configure `/rsrc/` and `/favicon.ico` rewrite rules.
Long ago, we served resources directly via `/rsrc/` in at least some cases. As we added more features, this stopped working more and more often (for example, Apache can never serve CSS this way, because it doesn't know how to post-process `{$variables}`).
In modern code (until this change), only `/favicon.ico` is still expected to be served this way.
Instead, serve it with an explicit route via controller (this allows different Sites to have different favicons, for example).
Remove the instructions suggesting the old rewrite rules be configured. It's OK if they're still in place -- they won't break anything, so we don't need to rush to get users to delete them.
We should keep "webroot/favicon.ico" in place for now, since it needs to be there for users with the old rewrite rule.
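The route itself is just an ordinary entry in the relevant route map, roughly (controller name illustrative):

```
public function getRoutes() {
  return array(
    // Serve the favicon through a controller instead of relying on a
    // webserver rewrite rule.
    '/favicon.ico' => 'PhabricatorFaviconController',
  );
}
```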
Test Plan:
- Ran celerity map.
- Loaded `/favicon.ico`, got resource via route.
- Used `celerity_generate_resource_uri()` to get paths to other icons, loaded them, got icons.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D16737
Summary: Ref T11132. Significantly cleans up the Config app: new layout, icons, spacing, etc. Some minor TODOs remain around re-designing "issues", mobile support, and maybe another pass at actual Group pages.
Test Plan: Visit and test every page in the config app, set new items, resolve setup issues, etc.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: PHID-OPKG-gm6ozazyms6q6i22gyam, Korvin
Maniphest Tasks: T11132
Differential Revision: https://secure.phabricator.com/D16468
Summary: Switches over to new property UI boxes, splits core and apps into separate pages, and moves Versions into "All Settings". I think there are some docs I likely need to update here as well.
Test Plan: Click on each item in the sidebar, see new headers.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D16429
Summary: Wow! Real value here.
Test Plan: No more red underlines.
Reviewers: chad
Reviewed By: chad
Differential Revision: https://secure.phabricator.com/D15941
Summary:
Ref T4571. If we fail to connect to the master, automatically try to degrade into a temporary read-only mode ("UNREACHABLE") for the remainder of the request, if possible.
If the request was something like "load the homepage", that'll work fine. If it was something like "submit a comment", there's nothing we can do and we just have to fail.
Detecting this condition imposes a performance penalty: every request checks the connection and gives the database a long time to respond, since we don't want to drop writes unless we have to. So the degraded mode works, but it's really slow, and may perpetuate the problem if the root issue is load-related.
This lays the groundwork for improving this case by degrading further into a "SEVERED" mode which will persist across requests. In the future, if several requests in a short period of time fail, we'll sever the database host and refuse to try to connect to it for a little while, connecting directly to replicas instead (basically, we're "health checking" the master, like a load balancer would health check a web application server). This will give us a better (much faster) degraded mode in a major service disruption, and reduce load on the master if the root cause is load-related, giving it a better chance of recovering on its own.
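Conceptually, the degraded path looks something like this (constant and helper names are approximate; `connectToMaster()` / `connectToReplica()` are placeholders for the real connection methods):

```
try {
  $conn = $this->connectToMaster();
} catch (Exception $ex) {
  // The master did not respond in time: degrade into temporary
  // read-only mode for the rest of this request and use a replica.
  PhabricatorEnv::setReadOnly(true, PhabricatorEnv::READONLY_UNREACHABLE);
  $conn = $this->connectToReplica();
}
```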
Test Plan:
- Disabled master in config by changing the host/username, got degraded automatically to UNREACHABLE mode immediately.
- Faked full SEVERED mode, requests hit replicas and put me in the mode properly.
- Made stuff work, hit some good pages.
- Hit some non-cluster pages.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4571
Differential Revision: https://secure.phabricator.com/D15674
Summary: Ref T4571. If `cluster.databases` is configured but only has replicas, implicitly drop to read-only mode and send writes to a replica.
Test Plan:
- Disabled the `master`, saw Phabricator automatically degrade into read-only mode against replicas.
- (Also tested: explicit read-only mode, non-cluster mode, properly configured cluster mode).
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4571
Differential Revision: https://secure.phabricator.com/D15672
Summary:
Ref T4571. Allows users to click the "read-only mode" notification to get more information about why an install is in read-only mode.
Installs can be in this mode for several reasons (explicit administrative action, no masters defined, no masters reachable), and it's useful to be able to tell the difference.
Test Plan: {F1212930}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T4571
Differential Revision: https://secure.phabricator.com/D15671
Summary:
Ref T10262. This removes one-time tokens and makes file data responses always-cacheable (for 30 days).
The URI will stop working once any attached object changes its view policy, or the file view policy itself changes.
Files with `canCDN` (totally public data like profile images, CSS, JS, etc) use "cache-control: public" so they can be CDN'd.
Files without `canCDN` use "cache-control: private" so they won't be cached by the CDN. They could still be cached by a misbehaving local cache, but if you don't want your users seeing one another's secret files you should configure your local network properly.
Our "Cache-Control" headers were also from 1999 or something, update them to be more modern/sane. I can't find any evidence that any browser has done the wrong thing with this simpler ruleset in the last ~10 years.
Test Plan:
- Configured alternate file domain.
- Viewed site: stuff worked.
- Accessed a file on primary domain, got redirected to alternate domain.
- Verified proper cache headers for `canCDN` (public) and non-`canCDN` (private) files.
- Uploaded a file to a task, edited task policy, verified it scrambled the old URI.
- Reloaded task, new URI generated transparently.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10262
Differential Revision: https://secure.phabricator.com/D15642
Summary: Ref T9979. Convert all DestructionEngine behaviors to extensions.
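Each behavior now lives in a small extension class, roughly of this shape (the flag cleanup is shown as an example; exact class names may differ slightly):

```
final class PhabricatorFlagsDestructionEngineExtension
  extends PhabricatorDestructionEngineExtension {

  public function getExtensionName() {
    return pht('Flags');
  }

  public function destroyObject(
    PhabricatorDestructionEngine $engine,
    $object) {

    // Remove any flags which point at the object being destroyed.
    $flags = id(new PhabricatorFlagQuery())
      ->setViewer($engine->getViewer())
      ->withObjectPHIDs(array($object->getPHID()))
      ->execute();

    foreach ($flags as $flag) {
      $flag->delete();
    }
  }
}
```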
Test Plan:
{F1033244}
Destroyed an object, verifying:
- Herald transcripts were destroyed;
- edges were destroyed;
- flags were destroyed;
- tokens were destroyed;
- transactions were destroyed;
- worker tasks were cancelled.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9979
Differential Revision: https://secure.phabricator.com/D14832
Summary:
Ref T9979. The general shape of "engine" code feels pretty good, and I plan to move indexing to be more in line with other modern engines, with the ultimate goal of supporting subprojects (T10010) and several intermediate goals.
Before moving indexing, clean up Destruction, since some of the new indexes will need destruction hooks and destruction currently has a lot of `instanceof` stuff that should be easy to fix by applying more modern approaches.
Test Plan:
- Used `bin/remove destroy` to destroy an Almanac device.
- Verified that properties for the device were destroyed.
- Viewed module panel in UI.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9979
Differential Revision: https://secure.phabricator.com/D14831
Summary: As suggested in D14461.
Test Plan: Used `./bin/remove destroy` on an Almanac service with properties attached, saw entries removed from the `phabricator_almanac.almanac_property` table.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin
Differential Revision: https://secure.phabricator.com/D14490
Summary:
Fixes T9494. This:
- Removes all the random GC.x.y.z config.
- Puts it all in one place that's locked and which you use `bin/garbage set-policy ...` to adjust.
- Makes every TTL-based GC configurable.
- Simplifies the code in the actual GCs.
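After this change, a TTL-based collector looks roughly like this (class name is an example; method names approximate the collector API):

```
final class ExampleLogGarbageCollector
  extends PhabricatorGarbageCollector {

  const COLLECTORCONST = 'example.logs';

  public function getCollectorName() {
    return pht('Example Logs');
  }

  public function getDefaultRetentionPolicy() {
    // Keep rows for 90 days unless `bin/garbage set-policy` overrides it.
    return phutil_units('90 days in seconds');
  }

  protected function collectGarbage() {
    // Delete a batch of rows older than the configured retention policy,
    // returning true if there may be more garbage to collect.
    return false;
  }
}
```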
Test Plan:
- Ran `bin/garbage collect` to collect some garbage, until it stopped collecting.
- Ran `bin/garbage set-policy ...` to shorten policy. Saw change in web UI. Ran `bin/garbage collect` again and saw it collect more garbage.
- Set policy to indefinite and saw it not collect garbage.
- Set policy to default and saw it reflected in web UI / `collect`.
- Ran `bin/phd debug trigger` and saw all GCs fire with reasonable looking queries.
- Read new docs.
{F857928}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T9494
Differential Revision: https://secure.phabricator.com/D14219
Summary:
I stumbled across this TODO and was worried that there was a glaring hole in MFA that I'd somehow forgotten about, but the TODO is just out of date.
These actions are rate limited properly by `PhabricatorAuthTryFactorAction`, which permits a maximum of 10 actions per hour.
- Remove the TODO.
- Add `bin/auth unlimit` to make it easier to reset rate limits if someone needs to do that for whatever reason.
Test Plan:
- Tried to brute force through MFA.
- Got rate limited properly after 10 failures.
- Reset rate limit with `bin/auth unlimit`.
- Saw the expected number of actions clear.
{F805288}
Reviewers: chad
Reviewed By: chad
Subscribers: joshuaspence
Differential Revision: https://secure.phabricator.com/D14105
Summary: All classes should extend from some other class. See D13275 for some explanation.
Test Plan: `arc unit`
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: epriestley, Korvin
Differential Revision: https://secure.phabricator.com/D13283
Summary: When destroying an object, also remove all associated flags and tokens.
Test Plan: Awarded a Maniphest task with a token and flagged it as well. Destroyed it with `./bin/remove destroy` and saw flag and token removed from the database.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D13112
Summary:
Make sure we're 100% clear that this is really, truly not recommended.
Also improve the text itself, and show the objects which are being destroyed more clearly.
Test Plan: Removed objects with `bin/remove destroy`.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D12952
Summary: Fixes T8237. This table is unusual and doesn't have an `id` column, so `delete()` doesn't actually know how to delete records and fails.
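The workaround is just an explicit `DELETE`, roughly of this shape (the column name here is illustrative):

```
queryfx(
  $notification->establishConnection('w'),
  'DELETE FROM %T WHERE primaryObjectPHID = %s',
  $notification->getTableName(),
  $object->getPHID());
```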
Test Plan:
- Used `bin/remove destroy` to destroy an object with notifications, as per T8237.
- Applied patch.
- Used `bin/remove destroy` to get a clean delete.
- Verified related notifications vanished from notification menu.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T8237
Differential Revision: https://secure.phabricator.com/D12907
Summary:
Fixes T6956. Before this change, we called `PhabricatorUser::getOmnipotentUser()` in the various delete methods to query the data. Now, we use `$engine->getViewer()`, since it's always a good thing to have fewer calls to `PhabricatorUser::getOmnipotentUser()` scattered around the codebase.
I used the "codemod" tool to audit the existing calls to `PhabricatorDestructionEngine` (all of them), so ostensibly this gets all the spots. If I missed something, though, it's still going to work, so this change is very low risk.
Test Plan: ./bin/remove destroy P1; visit P1 and get a 404
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin, epriestley
Maniphest Tasks: T6956
Differential Revision: https://secure.phabricator.com/D12866
Summary: Ref T6822.
Test Plan: `grep`. This method is only called from within `PhutilArgumentWorkflow::__construct`.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Maniphest Tasks: T6822
Differential Revision: https://secure.phabricator.com/D11415
Summary:
Ref T1191. Now that the whole database is covered, we don't need to do as much work to build expected schemata. Doing them database-by-database was helpful in converting, but is just redundant work now.
Instead of requiring every application to build its Lisk objects, just build all Lisk objects.
I removed `harbormaster.lisk_counter` because it is unused.
It would be nice to autogenerate edge schemata, too, but that's a little trickier.
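The "just build all Lisk objects" step is roughly this (loader usage sketched; the schema-building body is elided):

```
// Instantiate every concrete Lisk object in the codebase...
$daos = id(new PhutilSymbolLoader())
  ->setAncestorClass('PhabricatorLiskDAO')
  ->loadObjects();

// ...and ask each one for its expected table schema.
foreach ($daos as $dao) {
  // Add the object's expected columns and keys to the schema map here.
}
```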
Test Plan: Database setup issues are all green.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley, hach-que
Maniphest Tasks: T1191
Differential Revision: https://secure.phabricator.com/D10620
Summary:
Ref T1191.
- Adds definitions for missing keys and keys with wrong uniqueness. Generally, I defined these before fixing the key query to actually pull all keys and support uniqueness.
- Moves "key uniqueness" to note severity; this is fixable (probably?) and there are no remaining issues.
- Moves "Missing Key" to note severity; missing keys are fixable and all remaining missing keys are really missing (either missing edge keys, or missing PHID keys):
{F210089}
- Moves "Surplus Key" to note seveirty; surplus keys are fixable all remaining surplus keys are really surplus (duplicate key in Harbormaster, key on unused column in Worker):
{F210090}
Test Plan:
- Vetted missing/surplus/unique messages.
- 146 issues remaining.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T1191
Differential Revision: https://secure.phabricator.com/D10590
Summary: Ref T1191. Nothing very notable here.
Test Plan: Saw more blue in web UI.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D10522
Summary: Ref T5915. Make `bin/remove destroy` a bit more thorough, since Herald transcripts can have field information in them.
Test Plan: Used `bin/remove destroy` to nuke revisions, saw their transcripts vanish too.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T5915
Differential Revision: https://secure.phabricator.com/D10306
Summary: Provide an implementation for the `getName` method rather than automagically determining the application name.
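The change for each application is just an explicit override, for example (sketch):

```
public function getName() {
  return pht('Differential');
}
```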
Test Plan: Saw reasonable application names in the launcher.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: epriestley, Korvin
Differential Revision: https://secure.phabricator.com/D10027
Summary: Ref T5655. Some discussion in D9839. Generally speaking, `Phabricator{$name}Application` is clearer than `PhabricatorApplication{$name}`.
Test Plan:
# Pinned and uninstalled some applications.
# Applied patch and performed migrations.
# Verified that the pinned applications were still pinned and that the uninstalled applications were still uninstalled.
# Performed a sanity check on the database contents.
Reviewers: btrahan, epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: hach-que, epriestley, Korvin
Maniphest Tasks: T5655
Differential Revision: https://secure.phabricator.com/D9982
Summary: Ref T5655. The `PhabricatorDestructibleInterface` interface is misspelled as `PhabricatorDestructableInterface`. Fix the spelling mistake.
Test Plan: `grep`. Seeing as this interface is fairly recent, I don't expect that this would cause any widespread breakages.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: epriestley, Korvin
Maniphest Tasks: T5655
Differential Revision: https://secure.phabricator.com/D9988
Summary:
Ref T5245. These were a bad idea.
We no longer need actors for edge edits either, so remove those. Generally, edges have fit into the policy model as pure/low-level infrastructure, and they do not have any policy or capability information in and of themselves.
Test Plan: `grep`
Reviewers: chad, btrahan, joshuaspence
Reviewed By: joshuaspence
Subscribers: epriestley
Maniphest Tasks: T5245
Differential Revision: https://secure.phabricator.com/D9840
Summary: Ref T5179. Ref T4045. Ref T832. We can now write non-utf8 hunks into the database, so try to do more reasonable things with them in the UI.
Test Plan: (See screenshots...)
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T832, T4045, T5179
Differential Revision: https://secure.phabricator.com/D9294
Summary:
Ref T4749. Ref T3265. Ref T4909. Several goals here:
- Move user destruction to the CLI to limit the power of rogue admins.
- Start consolidating all "destroy named object" scripts into a single UI, to make it easier to know how to destroy things.
- Structure object destruction so we can do a better and more automatic job of cleaning up transactions, edges, search indexes, etc.
- Log when we destroy objects so there's a record if data goes missing.
Test Plan: Used `bin/remove destroy` to destroy several users.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T3265, T4749, T4909
Differential Revision: https://secure.phabricator.com/D8940
Summary: Ref T4398. Prevent users from brute forcing multi-factor auth by rate limiting attempts. This slightly refines the rate limiting to allow callers to check for a rate limit without adding points, and gives users credit for successfully completing an auth workflow.
Test Plan: Tried to enter hisec with bad credentials 11 times in a row, got rate limited.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T4398
Differential Revision: https://secure.phabricator.com/D8911
Summary:
Ref T4371. Ref T4699. Fixes T3994.
Currently, we're very conservative about sending errors back to users. A concern I had about this was that mistakes could lead to email loops, massive amounts of email spam, etc. Because of this, I was pretty hesitant about replying to email with more email when I wrote this stuff.
However, this was a long time ago. We now have Message-ID deduplication, "X-Phabricator-Sent-This-Mail", generally better mail infrastructure, and rate limiting. Together, these mechanisms should reasonably prevent anything crazy (primarily, infinite email loops) from happening.
Thus:
- When we hit any processing error after receiving a mail, try to send the author a reply with details about what went wrong. These are limited to 6 per hour per address.
- Rewrite most of the errors to be more detailed and informative.
- Rewrite most of the errors in a user-facing voice ("You sent this mail..." instead of "This mail was sent...").
- Remove the redundant, less sophisticated code which does something similar in Differential.
Test Plan:
- Using `scripts/mail/mail_receiver.php`, artificially received a pile of mail.
- Hit a bunch of different errors.
- Saw reasonable error mail get sent to me.
- Saw other reasonable error mail get rate limited.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T3994, T4371, T4699
Differential Revision: https://secure.phabricator.com/D8692
Summary: I accidentally made these exceptionally ugly recently.
Test Plan: {F137411}
Reviewers: btrahan, chad
Reviewed By: chad
Subscribers: epriestley, chad
Differential Revision: https://secure.phabricator.com/D8684
Summary:
This adds a system which basically keeps a record of recent actions, who took them, and how many "points" they were worth, like:
epriestley email.add 1 1233989813
epriestley email.add 1 1234298239
epriestley email.add 1 1238293981
We can use this to rate-limit actions by examining how many actions the user has taken in the past hour (i.e., their total score) and comparing that to an allowed limit.
One major thing I want to use this for is to limit the amount of error email we'll send to an email address. A big concern I have with sending more error email is that we'll end up in loops. We have some protections against this in headers already, but hard-limiting the system so it won't send more than a few errors to a particular address per hour should provide a reasonable secondary layer of protection.
This use case (where the "actor" needs to be an email address) is why the table uses strings + hashes instead of PHIDs. For external users, it might be appropriate to rate limit by cookies or IPs, too.
To prove it works, I rate limited adding email addresses. This is a very, very low-risk security thing where a user with an account can enumerate addresses (by checking if they get an error) and sort of spam/annoy people (by adding their address over and over again). Limiting them to 6 actions / hour should satisfy all real users while preventing these behaviors.
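At a call site, the check looks roughly like this (`$actor` and `$action` are placeholders for a concrete actor string and a `PhabricatorSystemAction` subclass with an hourly threshold):

```
// Record one point for this actor, or throw a rate-limit exception if
// their total score for the past hour already exceeds the action's limit.
PhabricatorSystemActionEngine::willTakeAction(
  array($actor),
  $action,
  1);
```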
Test Plan:
This dialog is uggos but I'll fix that in a sec:
{F137406}
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D8683
Summary:
Fixes T4610. Open to suggestions, etc., if there's anything I'm missing.
Also:
- Moves these "system" endpoints into a real application.
- Makes `isUnlisted()` work a little more consistently.
Test Plan: Accessed `/robots.txt`, `/status/` and `/debug/`.
Reviewers: chad, btrahan
Reviewed By: btrahan
Subscribers: aran, epriestley
Maniphest Tasks: T4610
Differential Revision: https://secure.phabricator.com/D8532