mirror of https://we.phorge.it/source/phorge.git synced 2024-11-15 03:12:41 +01:00
Commit graph

45 commits

Author SHA1 Message Date
Austin McKinley
e8069dfe31 Remove dead symlink
Summary: Fixes https://discourse.phabricator-community.org/t/broken-symbolic-link/2284

Test Plan: Mk. 1 eyeball

Reviewers: epriestley

Reviewed By: epriestley

Subscribers: Korvin

Differential Revision: https://secure.phabricator.com/D19963
2019-01-10 16:27:45 -08:00
epriestley
533e4e13b3 Add a bin/herald test ... for doing test runs via the CLI
Summary: Ref T13216. See D19666. It's currently tricky to profile Herald test runs since you have to submit a form and repeating them is a bit of a mess. Provide a simple CLI wrapper so we can use `--xprofile`. This is also maybe nice-to-have if we're ever debugging anything here.
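
A rough sketch of the run this enables, pairing the new workflow with the `--xprofile` flag mentioned above (every argument value below is a placeholder, not something specified by this commit):

```
bin/herald test --object <object-monogram> --type <content-type> --xprofile <profile-file>
```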

Test Plan: Ran `bin/herald test --object ... --type ...` and got a sensible looking transcript in the UI.

Reviewers: amckinley

Reviewed By: amckinley

Maniphest Tasks: T13216

Differential Revision: https://secure.phabricator.com/D19806
2018-11-15 15:48:52 -08:00
epriestley
44f0664d2c Add a "lock log" for debugging where locks are being held
Summary: Depends on D19173. Ref T13096. Adds an optional, disabled-by-default lock log to make it easier to figure out what is acquiring and holding locks.
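
As a loose sketch of how the flags from the test plan below fit together (the lock name is a placeholder; per the test plan, the daemons only pick up the setting after a restart):

```
bin/lock log --enable       # start recording which locks get acquired and held
bin/phd restart             # restart daemons so they honor the new setting
bin/lock log --name <lock>  # the --name flag from the test plan; lock name is a placeholder
bin/lock log --disable      # turn the log back off
```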

Test Plan: Ran `bin/lock log --enable`, `--disable`, `--name`, etc. Saw sensible-looking output with log enabled and daemons restarted. Saw no additional output with log disabled and daemons restarted.

Maniphest Tasks: T13096

Differential Revision: https://secure.phabricator.com/D19174
2018-03-05 17:55:34 -08:00
epriestley
0470125d9e Add skeleton code for webhooks
Summary: Ref T11330. Adds general support for webhooks. This is still rough and missing a lot of pieces -- and not yet useful for anything -- but can make HTTP requests.

Test Plan: Used `bin/webhook call ...` to complete requests to a test endpoint.

Maniphest Tasks: T11330

Differential Revision: https://secure.phabricator.com/D19045
2018-02-09 13:55:04 -08:00
epriestley
956c4058e6 Add a bin/conduit call support binary
Summary:
Ref T13060. See PHI343. Triaging this bug required figuring out where in the pipeline UTF8 was being dropped, and bisecting the pipeline required making calls to Conduit.

Currently, there's no easy way to debug/inspect arbitrary Conduit calls, especially when they are `diffusion.*` calls which route to a different host (even if you have a real session and use the web console for these, you just see an HTTP service call to the target host in DarkConsole).

Add a `bin/conduit` utility to make this kind of debugging easier, with an eye toward the Phacility production cluster (or other similar clusters) specifically.

Test Plan:
  - Ran `echo '{}' | bin/conduit call --method conduit.ping --input -` and similar.
  - Used a similar approach to successfully diagnose the UTF8 issue in T13060.

Reviewers: amckinley

Reviewed By: amckinley

Maniphest Tasks: T13060

Differential Revision: https://secure.phabricator.com/D18987
2018-02-05 12:20:49 -08:00
epriestley
3038d564a6 Allow bulk edits to be made silently if you have CLI access
Summary:
Fixes T13042. This hooks up the new "silent" mode from D18882 and makes it actually work.

The UI (where we tell you to go run some command and then reload the page) is pretty clumsy, but should solve some problems for now and can be cleaned up eventually. The actual mechanics (timeline aggregation, Herald interaction, etc.) are on firmer ground.

Test Plan:
  - Made a normal bulk edit, got mail and feed stories.
  - Made a silent bulk edit, no mail and no feed.
  - Saw "Silent Edit" marker in timeline for silent edits:

{F5386245}

Reviewers: amckinley

Reviewed By: amckinley

Maniphest Tasks: T13042

Differential Revision: https://secure.phabricator.com/D18883
2018-01-19 13:24:54 -08:00
Chad Little
3a868940c7 Add a profileimage generation workflow for the cli
Summary: Ref T10319. This adds a basic means of generating default profile images for users. You can generate them for everyone, a group of users, or force updates. This only generates images and stores them in files. It does not assign them to users.

Test Plan:
`bin/people profileimage --all` to generate all images.
`bin/people profileimage --users chad` to generate a user.
`bin/people profileimage --all --force` to force rebuilding all images.

{F3662810}

Reviewers: epriestley

Reviewed By: epriestley

Subscribers: Korvin

Maniphest Tasks: T10319

Differential Revision: https://secure.phabricator.com/D17464
2017-03-04 15:43:13 -08:00
epriestley
1e9a462baa Remove most of the legacy hunk code
Summary: Ref T8475. This gets rid of most of the old "legacy hunk" code. I'll nuke the rest (and drop the old table) once we're more sure that we're in the clear.

Test Plan: Browsed Differential.

Reviewers: chad

Reviewed By: chad

Maniphest Tasks: T8475

Differential Revision: https://secure.phabricator.com/D17040
2016-12-13 14:34:36 -08:00
epriestley
6e6ae36dcf Add a skeleton for Calendar notifications
Summary:
Ref T7931. I'm going to do this separate from existing infrastructure because:

  - events start at different times for different users;
  - I like the idea of being able to batch stuff (send one email about several upcoming events);
  - triggering on ghost/recurring events is a real complicated mess.

This puts a skeleton in place that finds all the events we need to notify about and writes some silly example bodies to stdout, marking that we notified users so they don't get notified again.

Test Plan:
Ran `bin/calendar notify`, got a "great" notification in the command output.

{F1891625}

Reviewers: chad

Reviewed By: chad

Maniphest Tasks: T7931

Differential Revision: https://secure.phabricator.com/D16783
2016-11-01 10:41:15 -07:00
epriestley
2a3c3b2b98 Provide bin/nuance import and ngram indexes for sources
Summary:
Ref T10537. More infrastructure:

  - Put a `bin/nuance` in place with `bin/nuance import`. This has no useful behavior yet.
  - Allow sources to be searched by substring. This supports `bin/nuance import --source whatever` so you don't have to dig up PHIDs.

Test Plan:
  - Applied migrations.
  - Ran `bin/nuance import --source ...` (no meaningful effect, but works fine).
  - Searched for sources by substring in the UI.

Reviewers: chad

Reviewed By: chad

Maniphest Tasks: T10537

Differential Revision: https://secure.phabricator.com/D15436
2016-03-08 10:30:24 -08:00
epriestley
0dd947cced Move diff extraction from commits to a separate test with a CLI command
Summary:
Ref T9319. When we discover a commit, we sometimes update the corresponding revision with a "this is the actual committed change" diff and send out a link to the changes between review and commit.

This is currently very difficult to test, because it only happens the first time and you have to either go set up a bunch of objects or add a bunch of special casing to the parser to hit the workflow.

I'm making some changes to how it pulls file content. To make those changes easier to test, first start extracting this stuff so the code can be run with `bin/differential extract ...` instead of needing to do a bunch of more complicated setup steps.

Test Plan:
  - Ran `bin/differential extract ...` to extract diffs from commits.
  - Forced my way through the daemon workflow by faking out a bunch of flags, got a clean extract + attach + update. After this patch, this should rarely be necessary.

Reviewers: chad

Reviewed By: chad

Maniphest Tasks: T9319

Differential Revision: https://secure.phabricator.com/D14967
2016-01-08 09:22:37 -08:00
epriestley
9c798e5cca Provide bin/garbage for interacting with garbage collection
Summary:
Fixes T9494. This:

  - Removes all the random GC.x.y.z config.
  - Puts it all in one place that's locked and which you use `bin/garbage set-policy ...` to adjust.
  - Makes every TTL-based GC configurable.
  - Simplifies the code in the actual GCs.

Test Plan:
  - Ran `bin/garbage collect` to collect some garbage, until it stopped collecting.
  - Ran `bin/garbage set-policy ...` to shorten policy. Saw change in web UI. Ran `bin/garbage collect` again and saw it collect more garbage.
  - Set policy to indefinite and saw it not collect garbage.
  - Set policy to default and saw it reflected in web UI / `collect`.
  - Ran `bin/phd debug trigger` and saw all GCs fire with reasonable looking queries.
  - Read new docs.

{F857928}

Reviewers: chad

Reviewed By: chad

Maniphest Tasks: T9494

Differential Revision: https://secure.phabricator.com/D14219
2015-10-02 09:17:24 -07:00
epriestley
d804598f17 Add some of a billing daemon skeleton
Summary:
Ref T6881. This adds the worker, and a script to make it easier to test. It doesn't actually invoice anything.

I'm intentionally allowing the script to double-bill since it makes testing way easier (by letting you bill the same period over and over again), and provides a tool for recovery if billing screws up.

(This diff isn't very interesting, just trying to avoid a 5K-line diff at the end.)

Test Plan: Used `bin/phortune invoice ...` to get the worker to print out some date ranges which it would theoretically invoice.

Reviewers: btrahan

Reviewed By: btrahan

Subscribers: epriestley

Maniphest Tasks: T6881

Differential Revision: https://secure.phabricator.com/D11577
2015-01-30 11:29:05 -08:00
epriestley
934df0e735 Add bin/trigger, for testing event triggers
Summary:
Ref T6881. This makes it easier to fire a trigger and make sure it works properly. You can use the `--now` flag to travel through time, and test scheduling conditions with `--last` and `--next`. It will tell you when the trigger would reschedule.

Better than waiting 24 hours to see if things work.
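
A hedged sketch of a dry run (only the `--now`, `--last`, and `--next` flags come from this summary; the workflow name and every value are placeholders):

```
bin/trigger <workflow> --now <timestamp> --last <epoch> --next <epoch>
```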

Test Plan: Fired some backups, got useful output which made me think my code probably works correctly.

Reviewers: btrahan

Reviewed By: btrahan

Subscribers: epriestley

Maniphest Tasks: T6881

Differential Revision: https://secure.phabricator.com/D11438
2015-01-20 11:31:32 -08:00
epriestley
7e1c312183 Add bin/worker flood, for flooding the task queue with work
Summary: Ref T6615. Ref T3554. We need better tooling around the queue eventually, so start here.

Test Plan: Added 100K+ tasks locally with `bin/worker flood`. Executed some of them with `bin/phd debug taskmaster` (we already have a TestWorker, used in unit tests).
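
A minimal sketch of that loop, using only the two commands named here:

```
bin/worker flood          # keep inserting no-op test tasks into the queue
bin/phd debug taskmaster  # run a taskmaster in the foreground to execute some of them
```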

Reviewers: btrahan

Reviewed By: btrahan

Subscribers: epriestley

Maniphest Tasks: T3554, T6615

Differential Revision: https://secure.phabricator.com/D10894
2014-11-24 11:10:15 -08:00
James Rhodes
8fbebce501 Implement storage of a host ID and a public key for authorizing Conduit between servers
Summary:
Ref T4209.  This creates storage for public keys against authorized hosts, such that servers can be authorized to make Conduit calls as the omnipotent user.

Servers are registered into this system by running the following command once:

```
bin/almanac register
```

NOTE: This doesn't implement authorization between servers, just the storage of public keys.

Placing this against Almanac seemed like the most sensible place, since I'm imagining in future that the `register` command will accept more information (like the hostname of the server so it can be found in the service directory).

Test Plan: Ran `bin/almanac register` and saw the host (and public key information) appear in the database.

Reviewers: #blessed_reviewers, epriestley

Reviewed By: #blessed_reviewers, epriestley

Subscribers: epriestley, Korvin

Maniphest Tasks: T4209

Differential Revision: https://secure.phabricator.com/D10400
2014-10-03 22:52:41 +10:00
epriestley
5b1262c98b Add a bin/hunks script to manage migrations of hunk data
Summary:
Ref T4045. Ref T5179. While we'll eventually need to force a migration, we can let installs (particularly large installs) do an online migration for now. This moves hunks to the new storage format one at a time.

(Note that nothing writes to the new store yet, so this is the only way to populate it.)

WARNING: Installs, don't run this yet! It won't compress the data. Wait until it can also do compression.

Test Plan: Added a `break;` after migrating one row and moved a few rows over. Spot checked them in the database and viewed the affected diffs.

Reviewers: btrahan

Reviewed By: btrahan

Subscribers: epriestley

Maniphest Tasks: T4045, T5179

Differential Revision: https://secure.phabricator.com/D9291
2014-06-03 18:01:23 -07:00
Bob Trahan
e96c363eef Add SMS support
Summary:
Provides a working SMS implementation with support for Twilio.

This version doesn't really retry if we get any gruff at all. Future versions should retry.

Test Plan: used bin/sms to send messages and look at them.

Reviewers: chad, epriestley

Reviewed By: epriestley

Subscribers: aurelijus, epriestley, Korvin

Maniphest Tasks: T920

Differential Revision: https://secure.phabricator.com/D8930
2014-05-09 12:47:21 -07:00
epriestley
2022a70e16 Implement bin/remove, for structured destruction of objects
Summary:
Ref T4749. Ref T3265. Ref T4909. Several goals here:

  - Move user destruction to the CLI to limit the power of rogue admins.
  - Start consolidating all "destroy named object" scripts into a single UI, to make it easier to know how to destroy things.
  - Structure object destruction so we can do a better and more automatic job of cleaning up transactions, edges, search indexes, etc.
  - Log when we destroy objects so there's a record if data goes missing.

Test Plan: Used `bin/remove destroy` to destroy several users.
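
Roughly, the CLI shape this adds (the object identifier is a placeholder; the test plan above only confirms that users were destroyed this way):

```
bin/remove destroy <object>   # e.g. a user account; the destruction is logged
```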

Reviewers: btrahan

Reviewed By: btrahan

Subscribers: epriestley

Maniphest Tasks: T3265, T4749, T4909

Differential Revision: https://secure.phabricator.com/D8940
2014-05-01 18:23:31 -07:00
epriestley
0726411cb4 Write a very basic string extractor
Summary: Ref T1139. This has some issues and glitches, but is a reasonable initial attempt that gets some of the big pieces in. We have about 5,200 strings in Phabricator.

Test Plan: {F108261}

Reviewers: btrahan

Reviewed By: btrahan

CC: aran, chad

Maniphest Tasks: T1139

Differential Revision: https://secure.phabricator.com/D8138
2014-02-05 11:02:41 -08:00
epriestley
906ac21e54 Begin construction of bin/celerity map
Summary:
Ref T4222. Moves us toward a more modern Celerity CLI, and moves map discovery into the classtree. This is a little bit bulky (and means you can't ship completely standalone celerity maps) but has the advantage of being completely standard, and we could subclass maps into an auto-discovering map later if we have a need for it.

This doesn't affect the existing Celerity stuff. I'm going to build the new stuff in parallel, and then swap us over at the end.

Test Plan: Ran `bin/celerity map`, got reasonable-looking output.

Reviewers: btrahan, hach-que

Reviewed By: hach-que

CC: aran

Maniphest Tasks: T4222

Differential Revision: https://secure.phabricator.com/D7864
2013-12-31 18:02:41 -08:00
epriestley
aad6b57c36 Add bin/harbormaster to make builds easier to debug
Summary:
Ref T1049. Adds `bin/harbormaster` and `bin/harbormaster build` for applying plans from the console. Since this gets `--trace`, it's much easier to debug what's going on.

This doesn't work properly with some of the Drydock steps yet, I need to look at those. I think `setRunAllTasksInProcess` probably obsoletes some of the mechanisms. It might also not work with "Wait for Builds" but I didn't check.

Test Plan: Used `bin/harbormaster` to run a bunch of builds. Ran builds from web UI.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T1049

Differential Revision: https://secure.phabricator.com/D7825
2013-12-26 10:40:52 -08:00
epriestley
618b5cbbc4 Install pre-commit hooks in Git repositories
Summary:
Ref T4189. T4189 describes most of the intent here:

  - When updating hosted repositories, sync a pre-commit hook into them instead of doing a `git fetch`.
  - The hook calls into Phabricator. The acting Phabricator user is sent via PHABRICATOR_USER in the environment. The active repository is sent via CLI.
  - The hook doesn't do anything useful yet; it just verifies basic parameters, does a little parsing, and exits 0 to allow the commit (see the sketch below).
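
Roughly, the installed hook ends up shaped like this (the hook binary path and repository identifier are placeholders, not taken from this commit):

```
#!/bin/sh
# Hook shim synced into a hosted repository: the acting user arrives via the
# PHABRICATOR_USER environment variable (exported by the SSH/HTTP frontend),
# and the repository is passed to Phabricator on the command line.
exec /path/to/phabricator/bin/commit-hook REPOSITORY_IDENTIFIER
```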

Test Plan:
  - Performed Git pushes and pulls over SSH and HTTP.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T4189

Differential Revision: https://secure.phabricator.com/D7682
2013-12-02 15:45:36 -08:00
epriestley
ff8b48979e Simplify Repository remote and local command construction
Summary:
This cleans up some garbage:

  - We were specifying environmental variables with `X=y git ...`, but now have `setEnv()` on both `ExecFuture` and `PhutilExecPassthru`. Use `setEnv()`.
  - We were specifying the working directory with `(cd %s && git ...)`, but now have `setCWD()` on both `ExecFuture` and `PhutilExecPassthru`. Use `setCWD()`.
  - We were specifying the Git credentials with `ssh-agent -c (ssh-add ... && git ...)`. We can do this more cleanly with `GIT_SSH`. Use `GIT_SSH`.
  - Since we have to write a script for `GIT_SSH` anyway, use the same script for Subversion and Mercurial.

This fixes two specific issues:

  - Previously, we were not able to set `-o StrictHostKeyChecking=no` on Git commands, so the first time you cloned a git repo the daemons would generally prompt you to add `github.com` or whatever to `known_hosts`. Since this was non-interactive, things would mysteriously hang, in effect. With `GIT_SSH`, we can specify the flag, reducing the number of ways things can go wrong (see the sketch after this list).
  - This adds `LANG=C`, which probably (?) forces the language to English for all commands. Apparently you need to install special language packs or something, so I don't know that this actually works, but at least two users with non-English languages have claimed it does (see <https://github.com/facebook/arcanist/pull/114> for a similar issue in Arcanist).
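
A minimal sketch of the `GIT_SSH` wrapper idea (paths and the credential key are placeholders; the script this change actually writes may differ):

```
cat > /tmp/phabricator-git-ssh <<'EOF'
#!/bin/sh
# Force the options we previously could not pass through plain `git ...`.
exec ssh -o StrictHostKeyChecking=no -i /path/to/credential-key "$@"
EOF
chmod +x /tmp/phabricator-git-ssh

GIT_SSH=/tmp/phabricator-git-ssh git fetch origin
```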

At some point in the future I might want to combine the Arcanist code for command execution with the Phabricator code for command execution (they share some stuff like LANG and HGPLAIN). However, credential management is kind of messy, so I'm adopting a "wait and see" approach for now. I expect to split this at least somewhat in the future, for Drydock/Automerge if nothing else.

Also I'm not sure if we use the passthru stuff at all anymore, I may just be able to delete that. I'll check in a future diff.

Test Plan: Browsed and pulled Git, Subversion and Mercurial repositories.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T2230

Differential Revision: https://secure.phabricator.com/D7600
2013-11-20 10:41:35 -08:00
epriestley
888b3839e7 Prepare to route VCS connections through SSH
Summary:
Fixes T2229. This sets the stage for a patch similar to D7417, but for SSH. In particular, SSH 6.2 introduced an `AuthorizedKeysCommand` directive, which lets us do this in a mostly-reasonable way without needing users to patch sshd (if they have a recent enough version, at least).

The way the `AuthorizedKeysCommand` works is that it gets run and produces an `authorized_keys`-style file fragment. This isn't ideal, because we have to dump every key into the result, but should be fine for most installs. The earlier patch against `sshd` passes the public key itself, which allows the script to just look up the key. We might use this eventually, since it can scale much better, so I haven't removed it.

Generally, auth is split into two scripts now which mostly do the same thing:

  - `ssh-auth` is the AuthorizedKeysCommand auth, which takes nothing and dumps the whole keyfile.
  - `ssh-auth-key` is the slightly cleaner and more scalable (but patch-dependent) version, which takes the public key and dumps only matching options.

I also reworked the argument parsing to be a bit more sane.
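
A rough sketch of pointing a test copy of `sshd_config` at the hook (stock OpenSSH 6.2+ directive names; the hook path, command user, and port are placeholders, and the template this change ships may differ):

```
cat >> /path/to/sshd_config.phabricator <<'EOF'
AuthorizedKeysCommand /usr/libexec/openssh/phabricator-ssh-hook.sh
AuthorizedKeysCommandUser root
Port 2222
EOF
```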

Test Plan:
This is somewhat-intentionally a bit obtuse since I don't really want anyone using it yet, but basically:

  - Copy `phabricator-ssh-hook.sh` to somewhere like `/usr/libexec/openssh/`, chown it `root` and chmod it `500`.
    - This script should probably also do a username check in the future.
  - Create a copy of `sshd_config` and fix the paths/etc. Point the KeyScript at your copy of the hook.
  - Start a copy of sshd (6.2 or newer) with `-f <your config file>` and maybe `-d -d -d` to foreground and debug.
  - Run `ssh -p 2222 localhost` or similar.

Specifically, I did this setup and then ran a bunch of commands like:

  - `ssh host` (denied, no command)
  - `ssh host ls` (denied, not supported)
  - `echo '{}' | ssh host conduit conduit.ping` (works)

Reviewers: btrahan

Reviewed By: btrahan

CC: hach-que, aran

Maniphest Tasks: T2229, T2230

Differential Revision: https://secure.phabricator.com/D7419
2013-10-29 15:32:40 -07:00
epriestley
e2ed527353 Add a very simple bin/policy script for CLI policy administration
Summary:
Ref T603. I want to provide at least a basic CLI tool for fixing policy problems, since there are various ways users can lock themselves out of objects right now. Although I imagine we'll solve most of them in the application eventually, having a workaround in the meantime will probably make support a lot easier.

This implements `bin/policy show <object>`, which shows an object's policy settings. In a future diff, I'll implement something like `bin/policy set --capability view --policy users <object>`, although maybe just `bin/policy unlock <object>` (which sets view and edit to "all users") would be better for now. Whichever way we go, it will be some blanket answer to people showing up in IRC having locked themselves out of objects which unblocks them while we work on preventing the issue in the first place.

Test Plan: See screenshot.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T603

Differential Revision: https://secure.phabricator.com/D7171
2013-09-29 09:06:41 -07:00
epriestley
86989c9f98 Provide a more flexible script for administrative management of audits
Summary: Fixes T3679. This comes up every so often and the old script is extremely broad (nuke everything in a repository). Provide a more surgical tool.

Test Plan: Ran a bunch of variations of the script and they all seemed to work OK.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran, staticshock

Maniphest Tasks: T3679

Differential Revision: https://secure.phabricator.com/D6678
2013-08-05 10:35:01 -07:00
epriestley
9e0a299b06 Launch daemons with a full Phabricator environment in the overseers
Summary:
Ref T1670. Prepare for the overseers to talk directly to the database instead of using Conduit. See T1670 for discussion.

This shouldn't impact anything, except it has a very small chance of destabilizing the overseers.

Test Plan:
Ran `phd launch`, `phd debug`, `phd start`.

Ran with `--trace-memory` and verified elevated but mostly steady memory usage (8MB / overseer). This climbed by 0.05KB / sec (4MB / day) but the source of the leaks seems to be the cURL calls we're making over Conduit so this will actually fix that. Disabling `--conduit-uri` reported steady memory usage. I wasn't able to identify anything leaking within code we control. This may be something like a dynamic but capped buffer in cURL, since we haven't seen any issues in the wild.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T1670

Differential Revision: https://secure.phabricator.com/D6534
2013-07-23 12:09:45 -07:00
epriestley
42c0f060d5 Push feed publishing deeper into the task queue
Summary:
Ref T2852. I want to model Asana integration as a response to feed events. Currently, we queue one feed event for each HTTP hook.

Instead, always queue one feed event and then have it queue any necessary followup events (now, http hooks; soon, asana).

Add a script to make it easy to reproducibly fire feed event publishing.

Test Plan:
Republished a feed event and verified it hit configured HTTP hooks correctly.

  $ ./bin/feed republish 5765774156541908292 --trace
  >>> [2] <connect> phabricator2_feed
  <<< [2] <connect> 1,660 us
  >>> [3] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
  <<< [3] <query> 595 us
  >>> [4] <connect> phabricator2_differential
  <<< [4] <connect> 760 us
  >>> [5] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [5] <query> 478 us
  >>> [6] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [6] <query> 449 us
  >>> [7] <connect> phabricator2_user
  <<< [7] <connect> 1,062 us
  >>> [8] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
  <<< [8] <query> 540 us
  >>> [9] <connect> phabricator2_file
  <<< [9] <connect> 951 us
  >>> [10] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
  <<< [10] <query> 498 us
  >>> [11] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
  <<< [11] <query> 507 us
  Republishing story...
  >>> [12] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
  <<< [12] <query> 685 us
  >>> [13] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [13] <query> 489 us
  >>> [14] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [14] <query> 512 us
  >>> [15] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
  <<< [15] <query> 601 us
  >>> [16] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
  <<< [16] <query> 405 us
  >>> [17] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
  <<< [17] <query> 551 us
  >>> [18] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
  <<< [18] <query> 507 us
  >>> [19] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [19] <query> 428 us
  >>> [20] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
  <<< [20] <query> 419 us
  >>> [21] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
  <<< [21] <query> 591 us
  >>> [22] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
  <<< [22] <query> 406 us
  >>> [23] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
  <<< [23] <query> 593 us
  >>> [24] <http> http://127.0.0.1/derp/
  <<< [24] <http> 746,157 us
  [2013-06-24 20:23:26] EXCEPTION: (HTTPFutureResponseStatusHTTP) [HTTP/500] Internal Server Error

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T2852

Differential Revision: https://secure.phabricator.com/D6291
2013-06-25 16:29:47 -07:00
epriestley
278905543e Add very basic bin/auth tool
Summary: Ref T1536. This script basically exists to restore access if/when users shoot themselves in the foot by disabling all auth providers and can no longer log in.

Test Plan: {F46411}

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T1536

Differential Revision: https://secure.phabricator.com/D6205
2013-06-17 10:55:05 -07:00
epriestley
e01ceaa07f Provide 'bin/cache', for managing caches
Summary:
See <https://github.com/facebook/phabricator/issues/323>. We have a very old cache management script which doesn't purge all the modern caches (and does purge some caches which are no longer in use). Update it so it purges all the modern caches (remarkup, general, changeset), no longer purges outdated caches, and is easier to use.

Also delete a lot of "this script has moved" scripts from the last few rounds of similar cleanup, I believe all of these have been in master for at least several months, which should be enough time for users to get used to the new stuff.

Test Plan: Ran `bin/cache` with various arguments. Verified caches were purged.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Differential Revision: https://secure.phabricator.com/D5978
2013-05-20 10:16:35 -07:00
deedydas
4fb1a9e00f First Diff of Test Data Generator
Summary: Progress to fix T2903

Test Plan: Ran './bin/lipsum help' and 'generate' workflows.

Reviewers: epriestley

Reviewed By: epriestley

CC: aran, Korvin

Maniphest Tasks: T2903

Differential Revision: https://secure.phabricator.com/D5672
2013-04-12 14:07:16 -07:00
epriestley
e80c59cbc6 Introduce basic bin/mail with a resend workflow
Summary:
Fixes T2458. Ref T2843. @tido's email from T2843 has exhausted its retries and failed, but we want to try it again with the patch from D5464 to capture the actual error. This sort of thing has come up a few times in debugging, too.

Also fixed some stuff that came up while debugging this.

Test Plan:
  - Ran command with no args.
  - Ran resend with no args.
  - Ran resend with bad IDs.
  - Ran resend with already-queued messages, got "already queued" error.
  - Ran resend with already-sent message, got requeue.

Reviewers: btrahan, tido

Reviewed By: tido

CC: aran

Maniphest Tasks: T2458, T2843

Differential Revision: https://secure.phabricator.com/D5493
2013-03-30 15:53:49 -07:00
epriestley
4adf55919c Port Diviner Core to Phabricator
Summary:
This implements most/all of the difficult parts of Diviner on top of Phabricator instead of as standalone components. See T988. In particular, here are the things I want to fix:

**Performance** The Diviner parser works in two stages. The first stage breaks source files into "Atoms". The second stage renders atoms into a display format (e.g., HTML). Diviner currently has a good caching story on the first step of the pipeline, but zero caching in the second step. This means it's very slow, even for a fairly small project like Phabricator. We must re-render every piece of documentation every time, instead of only changed documentation. Most of this diff concerns itself with addressing this problem. There's a fairly large explanatory comment about it, but the trickiest part is that when an atom changes, other atoms (defined in other places) may also change -- for example, if `class B extends A`, editing A should dirty B, even if B is in an entirely different file. We perform analysis in two stages to propagate these changes: first detecting direct changes, then detecting indirect changes. This isn't completely implemented -- we need to propagate 'extends' through more levels -- but I believe it's structurally correct and good enough until we actually document classes.

**Inheritance** Diviner currently has a very weak story on inheritance. I want to inherit a lot more metas/docs. If an interface documents a method, we should just pull that documentation in to every implementation by default (implementations can still override it if they want). It can be shown in grey or something, but it should be desirable and correct to omit documentation of a method implementation when you are implementing a parent. Similarly, I want to pull in inherited methods and @tasks and such. This diff sets up for that, by formalizing "extends" relationships between atoms.

**Overspecialization** Diviner currently specializes atoms (FileAtom, FunctionAtom, ClassAtom, etc.). This is pretty much not useful, because Atomizers (which produce the atoms) need to be highly specialized, and Renderers/Publishers (which consume the atoms) also need to be highly specialized. Nothing interesting actually lives in the atom specializations, and we don't benefit from having them -- it just costs us generality in storage/caches for them. In the new code, I've used a single Atom class to represent any type of atom.

**URIs** We have fairly hideous URIs right now, which are very cumbersome. For in-app doc links, I want to provide nice URIs ("/h/notifications" or similar) which are stable redirects, and probably add remarkup for it: !{notifications} or similar. This diff isn't related to that since it's too premature.

**Search** Once we have a database generation target, we can index the documentation.

**Design** Chad has some nice mocks.

Test Plan: Ran `bin/diviner generate`, `bin/diviner generate --clean`. Saw appropriate graph propagation after edits. This diff doesn't do anything very useful yet.

Reviewers: btrahan, vrana

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T988

Differential Revision: https://secure.phabricator.com/D4340
2013-01-07 14:04:23 -08:00
epriestley
ba489f9d85 Add a local configuration source and a non-environmental ENV config source
Summary:
See discussion in T2221. Before we can move configuration to the database, we have a bootstrapping problem: we need database credentials to live //somewhere// if we can't guess them (and we can only really guess localhost / root / no password).

Some options for this are:

  - Have them live in ENV variables.
    - These are often somewhat unfamiliar to users.
    - Scripts would become a huge pain -- you'd have to dump a bunch of stuff into ENV.
    - Some environments have limited ability to set ENV vars.
    - SSH is also a pain.
  - Have them live in a normal config file.
    - This probably isn't really too awful, but:
    - Since we deploy/upgrade with git, we can't currently let them edit a file which already exists, or their working copy will become dirty.
    - So they have to copy or create a file, then edit it.
    - The biggest issue I have with this is that it will be difficult to give specific, easily-followed directions from Setup. The instructions need to be like "Copy template.conf.php to real.conf.php, then edit these keys: x, y, z". This isn't as easy to follow as "run script Y".
  - Have them live in an abnormal config file with script access (this diff).
    - I think this is a little better than a normal config file, because we can tell users 'run phabricator/bin/config set mysql.user phabricator' and such, which is easier to follow than editing a config file.

I think this is only a marginal improvement over a normal config file and am open to arguments against this approach, but I think it will be a little easier for users to deal with than a normal config file. In most cases they should only need to store three values in this file -- db user/host/pass -- since once we have those we can bootstrap everything else. Normal config files also aren't going away for more advanced users, we're just offering a simple alternative for most users.
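
For the common case above, bootstrapping boils down to writing the three database values into the local config (the `mysql.user` key is the one from the example above; the host and password keys are assumed by analogy, and all values are placeholders):

```
bin/config set mysql.host localhost
bin/config set mysql.user phabricator
bin/config set mysql.pass <password>
```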

This also adds an ENVIRONMENT file as an alternative to PHABRICATOR_ENV. This is just a simple way to specify the environment if you don't have convenient access to env vars.

Test Plan: Ran `config set x y`, verified writes. Wrote to ENVIRONMENT, ran `PHABRICATOR_ENV= ./bin/repository`.

Reviewers: btrahan, vrana, codeblock

Reviewed By: codeblock

CC: aran

Maniphest Tasks: T2221

Differential Revision: https://secure.phabricator.com/D4294
2012-12-30 06:16:15 -08:00
epriestley
f6b1964740 Improve Search architecture
Summary:
The search indexing API has several problems right now:

  - Always runs in-process.
    - It would be nice to push this into the task queue for performance. However, the API currently passes an object all the way through (and some indexers depend on preloaded object attributes), so it can't be dumped into the task queue at any stage since we can't serialize it.
    - Being able to use the task queue will also make rebuilding indexes faster.
    - Instead, make the API phid-oriented.
  - No uniform indexing API.
    - Each "Editor" currently calls SomeCustomIndexer::indexThing(). This won't work with AbstractTransactions. The API is also just weird.
    - Instead, provide a uniform API.
  - No uniform CLI.
    - We have `scripts/search/reindex_everything.php`, but it doesn't actually index everything. Each new document type needs to be separately added to it, leading to stuff like D3839. Third-party applications can't provide indexers.
    - Instead, let indexers expose documents for indexing.
  - Not application-oriented.
    - All the indexers live in search/ right now, which isn't the right organization in an application-oriented view of the world.
    - Instead, move indexers to applications and load them with SymbolLoader.

Test Plan:
  - `bin/search index`
    - Indexed one revision, one task.
    - Indexed `--type TASK`, `--type DREV`, etc., for all types.
    - Indexed `--all`.
  - Added the word "saboteur" to a revision, task, wiki page, and question and then searched for it.
    - Creating users is a pain; searched for a user after indexing.
    - Creating commits is a pain; searched for a commit after indexing.
    - Mocks aren't currently loadable in the result view, so their indexing is moot.

Reviewers: btrahan, vrana

Reviewed By: btrahan

CC: 20after4, aran

Maniphest Tasks: T1991, T2104

Differential Revision: https://secure.phabricator.com/D4261
2012-12-21 14:21:31 -08:00
epriestley
e78898970a Implement SSHD glue and Conduit SSH endpoint
Summary:
  - Build "sshd-auth" (for authentication) and "sshd-exec" (for command execution) binaries. These are callable by "sshd-vcs", located [[https://github.com/epriestley/sshd-vcs | in my account on GitHub]]. They are based on precursors [[https://github.com/epriestley/sshd-vcs-glue | here on GitHub]] which I deployed for TenXer about a year ago, so I have some confidence they at least basically work.
    - The problem this solves is that normally every user would need an account on a machine to connect to it, and/or their public keys would all need to be listed in `~/.authorized_keys`. This is a big pain in most installs. Software like Gitosis/Gitolite solve this problem by giving you an easy way to add public keys to `~/.authorized_keys`, but this is pretty gross.
    - Roughly, instead of looking in `~/.authorized_keys` when a user connects, the patched sshd instead runs `echo <public key> | sshd-auth`. The `sshd-auth` script looks up the public key and authorizes the matching user, if they exist. It also forces sshd to run `sshd-exec` instead of a normal shell.
    - `sshd-exec` receives the authenticated user and any command which was passed to ssh (like `git receive-pack`) and can route them appropriately.
    - Overall, this permits a single account to be set up on a server which all Phabricator users can connect to without any extra work, and which can safely execute commands and apply appropriate permissions, and disable users when they are disabled in Phabricator and all that stuff.
  - Build out "sshd-exec" to do more thorough checks and setup, and delegate command execution to Workflows (they now exist, and did not when I originally built this stuff).
  - Convert @btrahan's conduit API script into a workflow and slightly simplify it (ConduitCall did not exist at the time it was written).

The next steps here on the Repository side are to implement Workflows for Git, SVN and HG wire protocols. These will mostly just proxy the protocols, but also need to enforce permissions. So the approach will basically be:

  - Implement workflows for stuff like `git receive-pack`.
  - These workflows will implement enough of the underlying protocol to determine what resource the user is trying to access, and whether they want to read or write it.
  - They'll then do a permissions check, and kick the user out if they don't have permission to do whatever they are trying to do.
  - If the user does have permission, we just proxy the rest of the transaction.

Next steps on the Conduit side are more simple:

  - Make ConduitClient understand "ssh://" URLs.

Test Plan: Ran `sshd-exec --phabricator-ssh-user epriestley conduit differential.query`, etc. This will get a more comprehensive test once I set up sshd-vcs.

Reviewers: btrahan, vrana

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T603, T550

Differential Revision: https://secure.phabricator.com/D4229
2012-12-19 11:08:07 -08:00
epriestley
07dc943215 Modernize the drydock script
Summary: Add a bin/drydock symlink and break it into workflows. Nothing too special here.

Test Plan: Ran `bin/drydock wait-for-lease`, `bin/drydock lease`, `bin/drydock help`, etc.

Reviewers: btrahan

Reviewed By: btrahan

CC: aran

Maniphest Tasks: T2015

Differential Revision: https://secure.phabricator.com/D3867
2012-11-01 15:30:14 -07:00
epriestley
5d1bd51627 Add a script to migrate files between storage engines
Summary: Quora requested this (moving to S3) but it's also clearly a good idea.

Test Plan:
Ran with various valid/invalid options to test options. Error/sanity checking seemed OK.

Migrated individual local files.

Migrated all my local files back and forth between engines several times.

Uploaded some new files.

Reviewers: btrahan, vrana

Reviewed By: vrana

CC: aran

Maniphest Tasks: T1950

Differential Revision: https://secure.phabricator.com/D3808
2012-10-25 11:36:38 -07:00
epriestley
7c934e4176 Add a basic "fact" application
Summary:
Basic "Fact" application with some storage, part of a daemon, and a control binary.

= Goals =

The general idea is that we have various statistics we'd like to compute, like the frequency of image macros, reviewer responsiveness, task close rates, etc. Computing these on page load is expensive and messy. By building an ETL pipeline and running it in a daemon, we can precompute statistics and just pull them out of "stats" tables.

One way to do this is just to completely hard-code everything, e.g. have a daemon that runs every hour which issues a big-ass query and dumps results into a table per-fact or per fact-group. But this has a bunch of drawbacks: adding new stuff to the pipeline is a pain, various fact aggregators can't share much code, updates are slow and expensive, we can never build generic graphs on top of it, etc.

I'm hoping to build an ETL pipeline which is generic enough that we can use it for most things we're interested in without needing schema changes, and so that installs can use it also without needing schema changes, while still being specific enough that it's fast and we can build useful stuff on top of it. I'm not sure if this will actually work, but it would be cool if it does so I'm starting pretty generally and we'll see how far I get. I haven't built this exact sort of thing before so I might be way off.

I'm basing the whole thing on analyzing entire objects, not analyzing changes to objects. So each part of the pipeline is handed an object and told "analyze this", not handed a change. It pretty much deletes all the old data about that thing and then writes new data. I think this is simpler to implement and understand, and it protects us from all sorts of weird issues where we end up with some kind of garbage in the DB and have to wipe the whole thing.

= Facts =

The general idea is that we extract "facts" out of objects, and then the various view interfaces just report those facts. This change has one type of fact, a "raw fact", which is directly derived from an object. These facts are concrete and relate specifically to the object they are derived from. Some examples of such facts might be:

  D123 has 9 comments.
  D123 uses macro "psyduck" 15 times.
  D123 adds 35 lines.
  D123 has 5 files.
  D123 has 1 object.
  D123 has 1 object of type "DREV".
  D123 was created at epoch timestamp 89812351235.
  D123 was accepted by @alincoln at epoch timestamp 8397981839.

The fact storage looks like this:

  <factType, objectPHID, objectA, valueX, valueY, epoch>

Currently, we support one optional secondary key (like a user PHID or macro PHID), two optional integer values, and an optional timestamp. We might add more later. Each fact type can use these fields if it wants. Some facts use them, others don't. For instance, this diff adds a "N:*" fact, which is just the count of total objects in the system. These facts just look like:

  <"N:*", "PHID-xxxx-yyyy", ...>

...where all other fields are ignored. But some of the more complex facts might look like:

  <"DREV:accept", "PHID-DREV-xxxx", "PHID-USER-yyyy", ..., ..., nnnn> # User 'yyyy' accepted at epoch 'nnnn'.
  <"FILE:macro", "PHID-DREV-xxxx", "PHID-MACR-yyyy", 17, ..., ...> # Object 'xxxx' uses macro 'yyyy' 17 times.

Facts have no uniqueness constraints. For @vrana's reviewer responsiveness stuff, we can insert multiple rows for each reviewer, e.g.

  <"DREV:reviewed", "PHID-DREV-xxxx", "PHID-USER-yyyy", nnnn, ..., mmmm> # User 'yyyy' reviewed revision 'xxxx' after 'nnnn' seconds at 'mmmm'.

The second value (valueY) is mostly because we need it if we sample anything (valueX = observed value, valueY = sample rate) but there might be other uses. We might need to add "objectB" at some point too -- currently we can't represent a fact like "User X used macro Y on revision Z", so it would be impossible to compute macro use rates //for a specific user// based on this schema. I think we can start here though and see how far we get.

= Aggregated Facts =

These aren't implemented yet, but the idea is that we can then take the "raw facts" and compute derived/aggregated/rollup facts based on the raw fact table. For example, the "count" fact can be aggregated to arrive at a count of all objects in the system. This stuff will live in a separate table which does have uniqueness constraints, and come in the next diff.

We might need some kind of time series facts too, not sure about that. I think most of our use cases today are covered by raw facts + aggregated facts.

Test Plan: Ran `bin/fact` commands and verified they seemed to do reasonable things.

Reviewers: vrana, btrahan

Reviewed By: vrana

CC: aran, majak

Maniphest Tasks: T1562

Differential Revision: https://secure.phabricator.com/D3078
2012-07-27 13:34:21 -07:00
epriestley
13d96e6377 Introduce "bin/repository" for repository management
Summary:
Nothing new or exciting here yet, just moving the random scripts/repositories/ things to bin/repository. Also add `repository list`.

(Console stuff comes from D2841.)

Test Plan: Ran `repository list`, `repository pull`, `repository discover`, `repository discover --verbose`, `repository help`.

Reviewers: jungejason, vrana

Reviewed By: vrana

CC: aran

Differential Revision: https://secure.phabricator.com/D2849
2012-06-25 12:35:37 -07:00
epriestley
bad22cb303 Add a bin/aphlict wrapper to handle aphlict config / daemonization
Summary: Simple wrapper script to configure, launch, daemonize and restart the Aphlict server.

Test Plan: Ran "bin/aphlict", checked /notification/status/. Ran "bin/aphlict --foreground".

Reviewers: jungejason, vrana

Reviewed By: jungejason

CC: aran

Maniphest Tasks: T944

Differential Revision: https://secure.phabricator.com/D2784
2012-06-18 15:11:19 -07:00
epriestley
087cc0808a Make SQL patch management DAG-based and provide namespace support
Summary:
This addresses three issues with the current patch management system:

  # Two people developing at the same time often pick the same SQL patch number, and then have to go rename it. The system catches this, but it's silly.
  # Second/third-party developers can't use the same system to manage auxiliary storage they may want to add.
  # There's no way to build mock databases for unit tests that need to do reads.

To resolve these things, you can now name your patches whatever you want and conflicts are just merge conflicts, which are less of a pain to fix than filename conflicts.

Dependencies are now a DAG, with implicit dependencies created on the prior patch if no dependencies are specified. Developers can add new concrete subclasses of `PhabricatorSQLPatchList` to add storage management, and define the dependency branchpoint of their patches so they apply in the correct order (although, generally, they should not depend on the mainline patches, presumably).

The commands `storage upgrade --namespace test1234` and `storage destroy --namespace test1234` will allow unit tests to build and destroy MySQL storage.

A "quickstart" mode allows an upgrade from scratch in ~1200ms. Destruction takes about 200ms. These seem like fairly reasonable costs to actually use in tests. Building from scratch patch-by-patch takes about 6000ms.
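
The unit-test lifecycle described above, as invoked through the `bin/storage` wrapper (the namespace name is arbitrary):

```
bin/storage upgrade --namespace test1234   # build a throwaway set of databases
bin/storage destroy --namespace test1234   # tear the namespace down again
```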

Test Plan:
  - Created new databases from scratch with and without quickstart in a separate test namespace. Pointed the webapp at the test namespaces, browsed around, everything looked good.
  - Compared quickstart and no-quickstart dump states, they're identical except for mysqldump timestamps and a few similar things.
  - Upgraded a legacy database to the new storage format.
  - Destroyed / dumped storage.

Reviewers: edward, vrana, btrahan, jungejason

Reviewed By: btrahan

CC: aran, nh

Maniphest Tasks: T140, T345

Differential Revision: https://secure.phabricator.com/D2323
2012-04-30 07:54:00 -07:00
epriestley
477954a57e Improve CLI script for account creation and document account/reg setup process
Summary:
There was an old "create_user.php" script but it really was only useful for
creating agents. Provide a more user-friendly script for creating the first
account.

Depends on D278.

Test Plan:
Used 'accountadmin' to create and edit accounts. Read documentation.

Reviewed By: tuomaspelkonen
Reviewers: jungejason, tuomaspelkonen, aran
CC: ccheever, aran, tuomaspelkonen
Differential Revision: 279
2011-05-12 18:44:53 -07:00
epriestley
4893146815 Improve parser scalability, fix a bug or two, provide 'phd', the Phabricator
Daemon interface.
2011-03-13 14:27:03 -07:00