Summary:
Ref T6881. I tried to cheat here by not implementing this, but we need it for destroying triggers directly with `bin/remove destroy`, since that needs to load them by PHID.
So, cheat slightly less. Implement PolicyAware but not CursorPagedPolicyAware.
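A minimal sketch of the load this enables (the viewer setup is an assumption about the CLI context):

```php
// Load a trigger by PHID via the now-PolicyAware query; the omnipotent
// viewer stands in for the `bin/remove destroy` CLI context.
$trigger = id(new PhabricatorWorkerTriggerQuery())
  ->setViewer(PhabricatorUser::getOmnipotentUser())
  ->withPHIDs(array($trigger_phid))
  ->executeOne();
```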
Test Plan:
- Used `bin/remove destroy` to destroy a trigger by PHID.
- Browsed daemon console.
- Ran trigger daemon.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11445
Summary:
Ref T6881. This makes it easier to fire a trigger and make sure it works properly. You can use the `--now` flag to travel through time, and test scheduling conditions with `--last` and `--next`. It will tell you when the trigger would reschedule.
Better than waiting 24 hours to see if things work.
Test Plan: Fired some backups, got useful output which made me think my code probably works correctly.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11438
Summary: Ref T6881. This is useful to show a "Next backup: 2:30 AM" sort of thing without requiring callers to know how triggers work internally.
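A hedged sketch of the intended call site; the accessor name is an assumption, not confirmed by this diff:

```php
// Hypothetical usage: render the next firing time without the caller
// knowing anything about clocks or rescheduling.
$epoch = $trigger->getNextEventPrediction(); // assumed method name
if ($epoch !== null) {
  echo pht('Next backup: %s', phabricator_datetime($epoch, $viewer));
}
```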
Test Plan: Showed that kind of thing in Instances.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11437
Summary:
Ref T6881. By design, the EXECUTION order only selects triggers which have been scheduled (since it performs a JOIN). This is inconsistent with other queries, and problematic for withID/withPHID queries which may want to select an unscheduled trigger.
Switch to standard ID ordering by default.
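Roughly, callers which still want execution ordering must now opt in explicitly; a sketch:

```php
// The default is now standard ID ordering; execution-time ordering
// (which JOINs against scheduled events) is an explicit request.
$triggers = id(new PhabricatorWorkerTriggerQuery())
  ->setViewer($viewer)
  ->setOrder(PhabricatorWorkerTriggerQuery::ORDER_EXECUTION)
  ->execute();
```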
Test Plan:
- Instances console now finds unscheduled triggers.
- Verified that all existing queries specify an explicit order.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11436
Summary: Ref T6881. When stuff with triggers is destroyed, it should destroy the triggers.
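A hedged sketch of the pattern from the application side (`getTriggerPHID()` is a hypothetical accessor):

```php
// In the object's destruction hook, also destroy any trigger it owns.
public function destroyObjectPermanently(
  PhabricatorDestructionEngine $engine) {

  $triggers = id(new PhabricatorWorkerTriggerQuery())
    ->setViewer($engine->getViewer())
    ->withPHIDs(array($this->getTriggerPHID()))
    ->execute();
  foreach ($triggers as $trigger) {
    $engine->destroyObject($trigger);
  }
}
```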
Test Plan: Will test in Instances.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11435
Summary: Ref T6881. Add a standard "just queue a task" trigger action; I expect almost all application code to use this.
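A sketch of expected application usage (the worker class and data are hypothetical):

```php
// The generic action just schedules a normal worker task when the
// trigger fires, so applications don't need custom action classes.
$action = new PhabricatorScheduleTaskTriggerAction(
  array(
    'class' => 'InstanceBackupWorker', // hypothetical worker
    'data' => array(
      'instancePHID' => $instance_phid,
    ),
  ));
```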
Test Plan: Will test in Instances.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11429
Summary: Ref T6881. I just want to show trigger info in the instance management console.
Test Plan: Will test in Instances.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11428
Summary: Ref T6881. Before implementing subscriptions, I'm going to vet triggers by using them to do backups. Each instance will get a daily trigger for backups, which should give us a smaller-scale test to catch issues and limitations; since it fires more often, there are more opportunities for something to go wrong (and get caught early).
Test Plan: Added unit tests.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11427
Summary:
Ref T6881. Hopefully, this is the hard part.
This adds a new daemon (the "trigger" daemon) which processes triggers, schedules them, and then executes them at the scheduled time. The design is a little complicated, but has these goals:
- High resistance to race conditions: only the application writes to the trigger table; only the daemon writes to the event table. We won't lose events if someone saves a meeting at the same time as we're sending a reminder out for it.
- Execution guarantees: scheduled events are guaranteed to execute exactly once.
- Support for arbitrarily large queues: the daemon will make progress even if there are millions of triggers in queue. The cost to update the queue is proportional to the number of changes in it; the cost to process the queue is proportional to the number of events to execute.
- Relatively good observability: you can monitor the state of the trigger queue reasonably well from the web UI.
- Modular Infrastructure: this is a very low-level construct that Calendar, Phortune, etc., should be able to build on top of.
It doesn't have this stuff yet:
- Not very robust to bad actions: a misbehaving trigger can stop the queue fairly easily. This is OK for now, since we aren't planning to make it part of any other applications for a while. We still get execute-exactly-once, but execution might not happen for a long time (until someone goes and fixes the queue), even though we could theoretically continue executing other events.
- Doesn't start automatically: normal users don't need to run this thing yet so I'm not starting it by default.
- Not super well tested: I've vetted the basics but haven't run real workloads through this yet.
- No sophisticated tooling: I added some basic stuff but it's missing some pieces we'll have to build sooner or later, e.g. `bin/trigger cancel` or whatever.
- Intentionally not realtime: This design puts execution guarantees far above realtime concerns, and will not give you precise event execution at 1-second resolution. I think this is the correct goal to pursue architecturally, and certainly correct for subscriptions and meeting reminders. Events which execute after they have become irrelevant can simply decline to do anything (like a meeting reminder which executes after the meeting is over).
In general, the expectation for applications is (see the sketch after this list):
- When creating an object (like a calendar event) that needs to trigger a scheduled action, write a trigger (and save the PHID if you plan to update it later).
- The daemon will process the event and schedule the action efficiently, in a race-free way.
- If you want to move the action, update the trigger and the daemon will take care of it.
- Your action will eventually dump a task into the task queue, and the task daemons will actually perform it.
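Here's a sketch of that pattern, borrowing the action and clock classes from nearby diffs in this series (the meeting object and its fields are hypothetical):

```php
// On object creation: write a trigger and remember its PHID.
$trigger = id(new PhabricatorWorkerTrigger())
  ->setAction(
    new PhabricatorScheduleTaskTriggerAction(
      array(
        'class' => 'MeetingReminderWorker', // hypothetical worker
        'data' => array('meetingPHID' => $meeting->getPHID()),
      )))
  ->setClock(
    new PhabricatorOneTimeTriggerClock(
      array(
        'epoch' => $reminder_epoch,
      )))
  ->save();

// Keep the PHID so the trigger can be updated if the meeting moves.
$meeting->setTriggerPHID($trigger->getPHID())->save();
```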
Test Plan:
Using a test script like this:
```
<?php

require_once 'scripts/__init_script__.php';

$trigger = id(new PhabricatorWorkerTrigger())
  ->setAction(
    new PhabricatorLogTriggerAction(
      array(
        'message' => 'test',
      )))
  ->setClock(
    new PhabricatorMetronomicTriggerClock(
      array(
        'period' => 33,
      )))
  ->save();

var_dump($trigger);
```
...I queued triggers and ran the daemon:
- Verified triggers fire;
- verified triggers reschedule;
- verified trigger events show up in the web UI;
- tried different periods;
- added some triggers while the daemon was running;
- examined `phd debug` output for anything suspicious.
It seems to work in the trivial use case, at least.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11419
Summary:
Ref T6881. This will probably make more sense in a couple of diffs, but this is a class that implements scheduling/recurrence rules. Two rules are provided:
- Trigger an event at a specific time (e.g., a meeting reminder notification).
- Trigger an event on the Nth day of every month (e.g., a subscription bill).
At some point, we'll presumably add a rule for T2896 (maybe using the "RRULE" spec) so you can do stuff like "the second-to-last Thursday of every month", etc., but we don't need that for now.
(The "Nth day of every month, or move it back if no such day exists" rule doesn't seem to be expressible with the "RRULE" format, so implementing that wouldn't give us a superset of this. I think this rule is correct and desirable for this purpose, though.)
Test Plan: Added and executed unit tests.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6881
Differential Revision: https://secure.phabricator.com/D11403
Summary:
This is unusual, but if `getWorkerInstance()` throws we end up with an undefined `$worker` when recovering from the exception.
Instead, handle this case slightly more gracefully.
The easiest way to hit this is to schedule a task for a worker that doesn't exist (or remove an existing worker, which is what I did to hit it).
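The shape of the fix, as a sketch rather than the exact diff:

```php
$worker = null;
try {
  $worker = $task->getWorkerInstance();
  // ... execute the task ...
} catch (Exception $ex) {
  // If getWorkerInstance() itself threw (e.g., the worker class does
  // not exist), $worker is still null; recover without touching it.
  if ($worker !== null) {
    // worker-specific recovery, like custom retry delays
  }
}
```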
Test Plan: Saw a more graceful error recovery; ran some normal successful tasks out of the queue.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D11413
Summary: Ref T6822.
Test Plan: `grep`. This method is only called from within `PhutilArgumentWorkflow::__construct`.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Maniphest Tasks: T6822
Differential Revision: https://secure.phabricator.com/D11415
Summary: Ref T6822. This method is only called from `PhutilDaemon::execute()` and can be made `protected`.
Test Plan: See D11404.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Maniphest Tasks: T6822
Differential Revision: https://secure.phabricator.com/D11405
Summary:
Ref T2783. Currently, the repository edit page does some checks against the local system to look for binaries and files on disk. These checks don't make sense in a cluster environment.
Ideally, we could make a Conduit call to the host (e.g., add something like `diffusion.querysetupstatus`) to do these checks, but since they're pretty basic config things and cluster installs are advanced, it doesn't seem super worthwhile for now.
Test Plan: Saw fewer checks in a cluster repo.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T2783
Differential Revision: https://secure.phabricator.com/D11102
Summary: Ref T5402.
Test Plan:
- Queried archived tasks.
- Grepped for use sites and verified no other callsites are order-sensitive.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T5402
Differential Revision: https://secure.phabricator.com/D11089
Summary: Ref T5402. This more or less "fixes" it but there's probably some polish to do?
Test Plan:
Stopped and started daemons; error logs look good.
Ran `bin/storage upgrade`; noted that `adjust` added the appropriate indices for the active and archive task tables.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin, epriestley
Maniphest Tasks: T5402
Differential Revision: https://secure.phabricator.com/D11044
Summary: Ref T5402. This cleans up some code and sets us up to use this sort of data more easily later.
Test Plan: Viewed the daemon console from the web and the log of a specific archived daemon; both looked good. For other callsites, looked really, really carefully.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: Korvin, epriestley
Maniphest Tasks: T5402
Differential Revision: https://secure.phabricator.com/D11042
Summary:
Ref T6742. Root cause of the issue:
- Daemon was running on a machine with a very long host name, which produced a lease name which was longer than 64 characters.
- MySQL wasn't set in STRICT_ALL_TABLES.
- The daemon would `UPDATE .. SET leaseOwner = <very long string>` to lock a task, and MySQL would silently truncate.
- The daemon would then try to select the locked task, but fail, because there's no matching lease owner.
To resolve this, use only the first 32 characters of the hostname. See IRC for more discussion.
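A sketch of the fix; the exact lease-name format is an assumption:

```php
// Cap the hostname component so the full lease owner always fits in
// the 64-character leaseOwner column.
$hostname = substr(php_uname('n'), 0, 32);
$lease_owner = getmypid().':'.time().':'.$hostname;
```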
Test Plan: Will confirm with reporter.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T6742
Differential Revision: https://secure.phabricator.com/D10998
Summary:
Fixes T6702. Ref T3554. Currently, tasks can be cancelled, retried and freed from the web UI by any logged in user.
This isn't appreciably dangerous (I can't come up with a way that a user could do anything security-affecting), but I think I probably intended this to be admin-only; in any case, these actions should move to the CLI.
Move them to the CLI. Lay some groundwork for some future `bin/worker cancel --class SomeTaskClass`, but don't implement that yet.
Test Plan: Used `cancel`, `retry` and `free` from the CLI. Hit all the error/success states.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T3554, T6702
Differential Revision: https://secure.phabricator.com/D10939
Summary:
Ref T6615. Mixing ASC and DESC ordering on a multipart key makes it dramatically less effective (or perhaps totally ineffective).
Reverse the meaning of the `priority` column so it goes in the same direction as the `id` column (both ascending, lower values execute sooner).
Test Plan:
- Queued 1.2M tasks with `bin/worker flood`.
- Processed ~1 task/second with `bin/phd debug taskmaster` before patch.
- Applied patch, took ~5 seconds for ~1.2M rows.
- Processed ~100-200 tasks/second with `bin/phd debug taskmaster` after patch.
- "Next in Queue" query on daemon page dropped from 1.5s to <1ms.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: aklapper, 20after4, epriestley
Maniphest Tasks: T6615
Differential Revision: https://secure.phabricator.com/D10895
Summary: Ref T6615. Ref T3554. We need better tooling around the queue eventually, so start here.
Test Plan: Added 100K+ tasks locally with `bin/worker flood`. Executed some of them with `bin/phd debug taskmaster` (we already have a TestWorker, used in unit tests).
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T3554, T6615
Differential Revision: https://secure.phabricator.com/D10894
Summary:
Fixes an issue with T5336 / D9871. We did 99% of the work here but didn't actually turn on the priority sorting. The unit test passed by default, which didn't catch this.
- Fix the unit test (it failed).
- Fix the query (test now passes).
- Add a "Next in Queue" element to the UI to make this kind of thing easier to spot/understand.
Test Plan: Ran unit test. Viewed "Next in Queue". Queued some tasks, flushed the queue. Web UI tracked the state sensibly.
Reviewers: joshuaspence, btrahan
Reviewed By: btrahan
Subscribers: cburroughs, epriestley
Differential Revision: https://secure.phabricator.com/D10766
Summary:
Ref T1191. Now that the whole database is covered, we don't need to do as much work to build expected schemata. Doing them database-by-database was helpful in converting, but is just redundant work now.
Instead of requiring every application to build its Lisk objects, just build all Lisk objects.
I removed `harbormaster.lisk_counter` because it is unused.
It would be nice to autogenerate edge schemata, too, but that's a little trickier.
Test Plan: Database setup issues are all green.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley, hach-que
Maniphest Tasks: T1191
Differential Revision: https://secure.phabricator.com/D10620
Summary:
Ref T1191. When changing the column type of an AUTO_INCREMENT column, we currently may lose the autoincrement attribute.
Instead, support it. This is a bit messy because AUTO_INCREMENT columns interact with PRIMARY KEY columns (tables may only have one AUTO_INCREMENT column, and it must be a primary key). We need to migrate in more phases to avoid this issue.
Introduce new `auto` and `auto64` types to represent autoincrement IDs.
Test Plan:
- Saw autoincrement show up correctly in web UI.
- Fixed an autoincrement issue on the XHProf storage table with `bin/storage adjust` safely.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T1191
Differential Revision: https://secure.phabricator.com/D10607
Summary:
Ref T1191. Ref T6203. While generating expected schemata, I ran into these columns which seem to have sketchy nullability.
- Mark most of them for later resolution (T6203). They work fine today and don't need to block T1191. Changing them can break the application, so we can't autofix them.
- Forgive a couple of them that are sort-of reasonable or going to get wiped out.
Test Plan: Saw 94 remaining warnings.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: hach-que, epriestley
Maniphest Tasks: T1191, T6203
Differential Revision: https://secure.phabricator.com/D10593
Summary:
Ref T1191. Some notes here:
- Drops the old LDAP and OAuth info tables. These were migrated to the ExternalAccount table a very long time ago.
- Separates surplus/missing keys from other types of surplus/missing things. In the long run, my plan is to have only two notice levels:
- Error: something we can't fix (missing database, table, or column; overlong key).
- Warning: something we can fix (surplus anything, missing key, bad column type, bad key columns, bad uniqueness, bad collation or charset).
- For now, retaining three levels is helpful in generating all the expected schemata.
Test Plan:
- Saw ~200 issues resolve, leaving ~1,300.
- Grepped for removed tables.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T1191
Differential Revision: https://secure.phabricator.com/D10580
Summary: Ref T1191. Nothing very notable here.
Test Plan: Saw more blue in web UI.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D10522
Summary:
Fixes T5855. Adds a `--graceful N` flag to `phd stop` and `phd restart`.
`phd` will send SIGINT, wait `N` seconds, SIGTERM, wait 15 seconds, and SIGKILL. By default, `N` is 15.
Test Plan:
- Ran `bin/phd debug ...` and used `^C` to interrupt daemons. Saw graceful shutdown behavior, and abrupt termination on multiple `^C`.
- Ran `bin/phd start`, `bin/phd stop` and `bin/phd restart` with `--graceful` set to various things, notably `0`. Saw graceful shutdowns on the CLI and in the web UI. With `0`, abrupt shutdowns.
Reviewers: btrahan, hach-que
Reviewed By: hach-que
Subscribers: epriestley
Maniphest Tasks: T5855
Differential Revision: https://secure.phabricator.com/D10228
Summary: Fixes T5336. Currently, `PhabricatorWorkerLeaseQuery` is basically FIFO. It makes more sense for the queue to be a priority-queue, and to assign higher priorities to alerts (email and SMS).
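Roughly, callers queue alerts above the default priority; a sketch with an assumed option shape and a hypothetical worker:

```php
// Email/SMS tasks are leased before PRIORITY_DEFAULT work.
PhabricatorWorker::scheduleTask(
  'SomeAlertWorker', // hypothetical worker class
  $data,
  array(
    'priority' => PhabricatorWorker::PRIORITY_ALERTS,
  ));
```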
Test Plan: Created dummy tasks in the queue (with different priorities). Verified that the priority field was set correctly in the DB and that the priority was shown on the `/daemon/` page. Started a `PhabricatorTaskmasterDaemon` and verified that the higher priority tasks were executed before lower priority tasks.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: epriestley, Korvin
Maniphest Tasks: T5336
Differential Revision: https://secure.phabricator.com/D9871
Summary: I'm pretty sure that `@group` annotations are useless now... see D9855. Also fixed various other minor issues.
Test Plan: Eye-ball it.
Reviewers: #blessed_reviewers, epriestley, chad
Reviewed By: #blessed_reviewers, epriestley
Subscribers: epriestley, Korvin, hach-que
Differential Revision: https://secure.phabricator.com/D9859
Summary: Ran `arc lint --apply-patches --everything` over rP, mainly to change double quotes to single quotes where appropriate. These changes also validate that the `ArcanistXHPASTLinter::LINT_DOUBLE_QUOTE` rule is working as expected.
Test Plan: Eyeballed it.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: epriestley, Korvin, hach-que
Differential Revision: https://secure.phabricator.com/D9431
Summary: Ref D8930. My "send test" for SMS was failing before this patch, and now it works nicely.
Test Plan: Used new code in D8930 that calls `$this->queueTask()` to get some work done; the work was completed in-process.
Reviewers: epriestley
Reviewed By: epriestley
Subscribers: epriestley, Korvin
Differential Revision: https://secure.phabricator.com/D9018
Summary:
For Harbormaster tasks which want to poll or wait, this lets them say "try again a little later" without having to sleep and hold a queue slot.
This is basically the same as failing, except that we don't increment the failure counter. Instead, we just set the current lease to the correct length and then exit. The task will be retried after the lease expires.
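From a worker's perspective, the mechanism looks roughly like this (the polling check is hypothetical):

```php
protected function doWork() {
  if (!$this->isBuildFinished()) { // hypothetical polling check
    // Not a failure: set a short lease and exit; the queue retries
    // this task once the lease expires, without holding a slot.
    throw new PhabricatorWorkerYieldException(15);
  }
  // ... the wait is over; do the real work ...
}
```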
Test Plan: Using both `bin/harbormaster` and `phd debug taskmaster`, ran a lot of waiting tasks through the queue, faking them to either yield or not yield in a controlled manner. The queue responded as expected, yielding tasks appropriately and retrying them later.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D8792
Summary:
See discussion in D8773. Three small adjustments which should help prevent this kind of issue:
- When queueing followup tasks, hold them on the worker until we finish the task, then queue them only if the work was successful (see the sketch after this list).
- Increase the default lease time from 60 seconds to 2 hours. Although most tasks finish in far fewer than 60 seconds, the daemons are generally stable nowadays and these short leases don't serve much of a purpose. I think they also date from an era where lease expiry and failure were less clearly distinguished.
- Increase the default wait-after-failure from 60 seconds to 5 minutes. This largely dates from the MetaMTA era, where Facebook ran services with high failure rates and it was appropriate to repeatedly hammer them until things went through. In modern infrastructure, such failures are rare.
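A sketch of the first adjustment from inside a worker:

```php
protected function doWork() {
  // ... perform the primary work ...

  // The followup is now buffered on the worker, and only actually
  // inserted into the queue if doWork() returns without throwing.
  $this->queueTask('SomeFollowupWorker', array('objectPHID' => $phid));
}
```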
Test Plan:
- Verified that tasks queued properly after the main task was updated.
- Verified that leases default to 7200 seconds.
- Intentionally failed a task and verified default 300 second wait before retry.
- Removed all default leases shorter than 7200 seconds (there was only one).
- Checked all the wait before retry implementations for anything much shorter than 5 minutes (they all seem reasonable).
Reviewers: btrahan, sowedance
Reviewed By: sowedance
Subscribers: epriestley
Differential Revision: https://secure.phabricator.com/D8774
Summary:
There are quite a few tests in Arcanist, libphutil and Phabricator that do something similar to `$this->assertEqual(false, ...)` or `$this->assertEqual(true, ...)`.
This is unnecessarily verbose and it would be cleaner if we had `assertFalse` and `assertTrue` methods.
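For example (the assertion target is illustrative):

```php
// Before: verbose, and the expectation reads backwards.
$this->assertEqual(false, $parser->isValid());

// After: the intent is immediate.
$this->assertFalse($parser->isValid());
```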
Test Plan: I contemplated adding a unit test for the `getCallerInfo` method but wasn't sure if it was required / where it should live.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley
CC: Korvin, epriestley, aran
Differential Revision: https://secure.phabricator.com/D8460
Summary:
Ref T2015. Currently, Drydock has a `wait-for-lease` workflow which is invoked in the background by the `lease` workflow.
The goal of this mechanism is to allow `bin/drydock lease` to print out logs as the lease is acquired. However, this predates the `runAllTasksInProcess` flags, and they provide a simpler and more robust way (potentially with `--trace` and `PhutilConsole`) to do synchronous execution and debug logging.
Simplify this whole mechanism: just run everything in-process in `bin/drydock lease`, and do logging via `--trace`. We could thread a `PhutilConsole` through things too, but this seems good enough for now.
Also various cleanup/etc.
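Concretely, the workflow now amounts to something like this sketch:

```php
// Run queued tasks synchronously in this process, so `bin/drydock
// lease` surfaces logs directly; `--trace` adds debug detail.
PhabricatorWorker::setRunAllTasksInProcess(true);
```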
Test Plan: Ran `bin/drydock lease`. Ran `bin/harbormaster build X --plan Y`, for `Y` being a Drydock-dependent build plan.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2015
Differential Revision: https://secure.phabricator.com/D7835
Summary:
Fixes two issues:
- When rendering a task's details, we currently issue a policy-oblivious query. Instead, issue a policy-aware query.
- The formatting is a little bit weird, with the top half in a box and the bottom half with an older style. Make them consistent.
Test Plan: Looked at the detail pages for several tasks in queue.
Reviewers: btrahan, chad
Reviewed By: chad
CC: aran
Differential Revision: https://secure.phabricator.com/D7812
Summary:
Ref T1049. See discussion in D7745. We have some specific interest in this for D7745, but generally we want to consume tasks with expired leases in roughly FIFO order, just like we consume new tasks in roughly FIFO order. Currently, when we select an expired task we order them by `id`, but this is the original insert order, not lease expiration order. Instead, order by `leaseExpires`.
This query is actually much better than the old one was, since the WHERE clause is `leaseExpires < VALUE`.
Test Plan: Ran `EXPLAIN` on the query. Ran a taskmaster in debug mode and saw it lease new and expired tasks successfully.
Reviewers: hach-que, btrahan
Reviewed By: hach-que
CC: aran
Maniphest Tasks: T1049
Differential Revision: https://secure.phabricator.com/D7746
Summary: Fixes T3610. This got un-scoped at some point, and is only used in Drydock so it escaped notice.
Test Plan: Ran `bin/drydock lease --type host` without hitting an exception about incorrect query construction.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T3610
Differential Revision: https://secure.phabricator.com/D6552
Summary: Fixes T3557. One thing which made T3557 kind of a mess was the lack of information about progress through temporary failures. Add a column which records a task's last failure time, and surface it in the console.
Test Plan: {F51277}
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T3557
Differential Revision: https://secure.phabricator.com/D6550
Summary: Fixes T2569. This is the other common exception source which is ambiguous. List the task ID explicitly to make debugging easier.
Test Plan: {F51268}
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2569
Differential Revision: https://secure.phabricator.com/D6549
Summary: Ref T2852. When a Differential revision is linked to an Asana task, show the related task in Differential.
Test Plan: {F49234}
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2852
Differential Revision: https://secure.phabricator.com/D6387
Summary:
Ref T2852. Asana sync tasks currently have a standard retry/backoff schedule, but the defaults are quite aggressive (retry every 60s forever). Instead, retry at increasing intervals and stop retrying after a few tries.
- Retry at intervals and stop retrying after a few iterations.
- Modernize some interfaces.
- Add better information about retry behaviors to the web UI.
Test Plan: {F49194}
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2852
Differential Revision: https://secure.phabricator.com/D6381
Summary:
Ref T2852. Add a `log()` method to `PhabricatorWorker` to make debugging easier.
I renamed the similar Drydock-specific method.
Test Plan:
Used logging in a future revision:
```
...
<<< [36] <http> 211,704 us
Updating main task.
>>> [37] <http> https://app.asana.com/api/1.0/tasks/6153776820388
...
```
Reviewers: btrahan, chad
Reviewed By: chad
CC: aran
Maniphest Tasks: T2852
Differential Revision: https://secure.phabricator.com/D6296
Summary:
Ref T2852. I want to model Asana integration as a response to feed events. Currently, we queue one feed event for each HTTP hook.
Instead, always queue one feed event and then have it queue any necessary followup events (now, http hooks; soon, asana).
Add a script to make it easy to reproducibly fire feed event publishing.
Test Plan:
Republished a feed event and verified it hit configured HTTP hooks correctly.
```
$ ./bin/feed republish 5765774156541908292 --trace
>>> [2] <connect> phabricator2_feed
<<< [2] <connect> 1,660 us
>>> [3] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
<<< [3] <query> 595 us
>>> [4] <connect> phabricator2_differential
<<< [4] <connect> 760 us
>>> [5] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [5] <query> 478 us
>>> [6] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [6] <query> 449 us
>>> [7] <connect> phabricator2_user
<<< [7] <connect> 1,062 us
>>> [8] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
<<< [8] <query> 540 us
>>> [9] <connect> phabricator2_file
<<< [9] <connect> 951 us
>>> [10] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
<<< [10] <query> 498 us
>>> [11] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
<<< [11] <query> 507 us
Republishing story...
>>> [12] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
<<< [12] <query> 685 us
>>> [13] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [13] <query> 489 us
>>> [14] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [14] <query> 512 us
>>> [15] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
<<< [15] <query> 601 us
>>> [16] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
<<< [16] <query> 405 us
>>> [17] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
<<< [17] <query> 551 us
>>> [18] <query> SELECT story.* FROM `feed_storydata` story JOIN `feed_storyreference` ref ON ref.chronologicalKey = story.chronologicalKey WHERE (ref.chronologicalKey IN (5765774156541908292)) GROUP BY story.chronologicalKey ORDER BY story.chronologicalKey DESC
<<< [18] <query> 507 us
>>> [19] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [19] <query> 428 us
>>> [20] <query> SELECT * FROM `differential_revision` WHERE phid IN ('PHID-DREV-ywqmrj5zgkdloqh5p3c5')
<<< [20] <query> 419 us
>>> [21] <query> SELECT * FROM `user` WHERE phid in ('PHID-USER-lqiz3yd7wmk64ejugvov')
<<< [21] <query> 591 us
>>> [22] <query> SELECT * FROM `file` WHERE phid IN ('PHID-FILE-gq6dlsysvxbn3dgwvky7')
<<< [22] <query> 406 us
>>> [23] <query> SELECT * FROM `user_status` WHERE userPHID IN ('PHID-USER-lqiz3yd7wmk64ejugvov') AND UNIX_TIMESTAMP() BETWEEN dateFrom AND dateTo
<<< [23] <query> 593 us
>>> [24] <http> http://127.0.0.1/derp/
<<< [24] <http> 746,157 us
[2013-06-24 20:23:26] EXCEPTION: (HTTPFutureResponseStatusHTTP) [HTTP/500] Internal Server Error
```
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T2852
Differential Revision: https://secure.phabricator.com/D6291
Summary: MySQL is not able to use indexes when searching for tuples.
Test Plan:
Explained the query before and after, saw `key_len` 16 instead of 8.
Also saw time 0.0 s instead of 2.9 s (though that was probably just cache warm-up).
Reviewers: epriestley
Reviewed By: epriestley
CC: aran, Korvin
Differential Revision: https://secure.phabricator.com/D5580