Summary: Apparently I am not qualified to do basic math.
Test Plan: Unit test.
Reviewers: vrana
Reviewed By: vrana
CC: aran
Differential Revision: https://secure.phabricator.com/D3218
Summary: See discussion in D3078 for why I've separated this. Pretty sure it's not quite ready yet -- I want to build a couple of things on it so we have a better idea of what we need (autoincrement ID? <factType, objectA, epoch> primary key? objectB column? valueZ?) and don't need to do a ton of schema patches.
Test Plan: Applied patches, ran D3078.
Reviewers: vrana, btrahan, majak
Reviewed By: vrana
CC: aran
Maniphest Tasks: T1581, T1562
Differential Revision: https://secure.phabricator.com/D3088
Summary: I think this is simpler? Includes test cases.
Test Plan: Ran tests. Loaded /paste/.
Reviewers: vrana, nh
Reviewed By: vrana
CC: aran
Differential Revision: https://secure.phabricator.com/D3209
Summary: See title. Adds features needed for D3136.
Test Plan:
Observe sanity (or run D3136 in a sandbox
and observe that voting works).
Reviewers: epriestley
Reviewed By: epriestley
CC: gmarcotte, aran, Korvin
Differential Revision: https://secure.phabricator.com/D3205
Summary:
A few goals here:
- Slightly simplify the Query class tree -- it's now linear: `Query` -> `OffsetPagedQuery` (adds offset/limit) -> `PolicyQuery` (adds policy filtering) -> `CursorPagedPolicyQuery` (adds cursors).
- Allow us to move from non-policy queries to policy queries without any backward compatibility breaks, e.g. Conduit methods which accept 'offset'.
- Separate the client limit ("limit") from the data-fetch hint limit ("rawresultlimit") so we can make the heuristic smarter in the future if we want. Some discussion inline; see the sketch below.
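As a rough illustration of why the two limits are separate, here is a minimal sketch with hypothetical helpers (`fetchRawRows()`, `viewerCanSee()` and the 2x multiplier are assumptions, not the real Query API): the client-visible "limit" counts rows that survive policy filtering, while "rawresultlimit" only hints how much to overfetch per round trip.
  // Minimal sketch, not the real PolicyQuery implementation.
  function loadPolicyFilteredPage($viewer, $client_limit) {
    $results = array();
    $fetched = 0;
    // Overfetch, since policy filtering may discard some raw rows; the
    // multiplier is the heuristic we may want to make smarter later.
    $raw_result_limit = $client_limit * 2;
    do {
      $rows = fetchRawRows($fetched, $raw_result_limit);  // hypothetical
      $fetched += count($rows);
      foreach ($rows as $row) {
        if (viewerCanSee($viewer, $row)) {                // hypothetical
          $results[] = $row;
          if (count($results) == $client_limit) {
            return $results;
          }
        }
      }
    } while ($rows);
    return $results;
  }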
Test Plan: Expanded unit tests to cover offset behaviors.
Reviewers: vrana, btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T603
Differential Revision: https://secure.phabricator.com/D3192
Summary:
- Commit detail view
- List of projects
- "edit" action which takes the user to a simple form where they can only add / remove projects.
- Integrated the project relationship into the commit search indexer
- fixed a bug from D790; it seems you must select the column if you're going to join against it later. Without this change, searching for author or project fails 100% for me.
Test Plan: added and removed projects. verified appropriate projects showed up in detail and edit view. searched for commits by project and found the ones I was supposed to...!
Reviewers: epriestley
Reviewed By: epriestley
CC: aran, Korvin
Maniphest Tasks: T1614
Differential Revision: https://secure.phabricator.com/D3189
Summary:
- Store project members in edges.
- Migrate existing members to edge storage.
- Delete PhabricatorProjectAffiliation.
- I left the actual underlying data around just in case something goes wrong; we can delete it eventually.
Test Plan:
- Ran migration.
- Created a new project.
- Joined and left a project.
- Added and removed project members.
- Manually called PhabricatorOwnersOwner::loadAffiliatedUserPHIDs() to verify its behavior.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Maniphest Tasks: T603
Differential Revision: https://secure.phabricator.com/D3186
Summary:
I'm trying to make progress on the policy/visibility stuff since it's a blocker for Wikimedia.
First, I want to improve Projects so they can serve as policy groups (e.g., an object can have a visibility policy like "Visible to: members of project 'security'"). However, doing this without breaking anything or snowballing into a bigger change is a bit awkward because Projects are name-ordered and we have a Conduit API which does offset paging. Rather than breaking or rewriting this stuff, I want to just continue offset paging them for now.
So I'm going to make PhabricatorPolicyQuery extend PhabricatorOffsetPagedQuery, but can't currently since the `executeWithPager` methods would clash. These methods do different things anyway and are probably better with different names.
This also generally improves the names of these classes, since cursors are not necessarily IDs (in the feed case, they're "chronologicalKeys", for example). I did leave some of the internals as "ID" since calling them "Cursor"s (e.g., `setAfterCursor()`) seemed a little wrong -- it should maybe be `setAfterCursorPosition()`. These APIs have very limited use and can easily be made more consistent later.
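For context, a minimal sketch of the two paging styles this split keeps separate (plain hand-built SQL; the table and column names are illustrative, not the actual query classes):
  // Offset paging (OffsetPagedQuery-style): fine for name-ordered lists like
  // Projects, but the database must skip $offset rows on every page.
  function offsetPageSQL($offset, $limit) {
    return sprintf(
      'SELECT * FROM project ORDER BY name LIMIT %d OFFSET %d',
      $limit,
      $offset);
  }
  // Cursor paging: resume strictly after the last result the client saw, so
  // deep pages stay cheap. The cursor need not be an ID; feed pages on
  // chronologicalKeys, for example.
  function cursorPageSQL($after_key, $limit) {
    return sprintf(
      'SELECT * FROM feed WHERE chronologicalKey < %d '.
      'ORDER BY chronologicalKey DESC LIMIT %d',
      $after_key,
      $limit);
  }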
Test Plan: Browsed around various affected tools; any issues here should throw/fail in a loud/obvious way.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Maniphest Tasks: T603
Differential Revision: https://secure.phabricator.com/D3177
Summary:
See T1602.
This is just the minimal functional patch; the scripts will continue
working because of the `DEFAULT ''`.
Test Plan:
Can't fully test this until I get more code working, but
nothing broke horribly yet.
Reviewers: epriestley
Reviewed By: epriestley
CC: nh, aran, Korvin
Maniphest Tasks: T1602
Differential Revision: https://secure.phabricator.com/D3147
Summary: The log and PID directories should be separately configurable in the config file.
Test Plan: Start the daemons, and check if the pid and log files are stored in directories that were specified in the config file.
Reviewers: epriestley
CC: aran, Korvin
Differential Revision: https://secure.phabricator.com/D3149
Summary:
- Adds a new "Applications" application.
- Builds an application list via application config instead of via hard-coding, so we can move toward better concepts of installing/uninstalling applications, etc.
- Applications indicate that they need attention with notice counts and brief status messages rather than 50 giant tables of all sorts of app data.
I want to try replacing the home screen with this screen, pretty much. Not sure if this is totally crazy or not. What does everyone else think?
Test Plan: Will add screenshots.
Reviewers: btrahan, chad, vrana, alanh
Reviewed By: vrana
CC: aran, davidreuss, champo
Maniphest Tasks: T1569
Differential Revision: https://secure.phabricator.com/D3129
Summary:
To make it easier to monitor daemons, let's store their current state
(running, died, exited, or unknown) to the db. The purpose of this is to
provide more information on the daemon console about the status of daemons,
especially when they are running on multiple machines. This is mostly backend
work, with only a few frontend changes. (It is also dependent on a change
to libphutil.)
These changes will make dead or stuck daemons more obvious, and will allow
more work on the frontend to hide daemons (and logs) that have exited cleanly,
i.e. ones we don't care about any more.
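A minimal sketch of the status heuristic implied by the test plan below, with hypothetical helpers (`daemonIsStillRunning()` and the three-minute threshold are assumptions for illustration; the real logic lives in the daemon console code): a stale heartbeat on a remote host can only be reported as unknown, while a stale heartbeat on the local host can be verified and marked dead.
  function daemonDisplayStatus(array $daemon, $now) {
    $heartbeat_stale = ($now - $daemon['dateModified']) > (3 * 60);
    if (!$heartbeat_stale) {
      return $daemon['status'];      // running, exited, etc., as stored
    }
    if ($daemon['host'] != php_uname('n')) {
      return 'unknown';              // can't inspect a process on another box
    }
    return daemonIsStillRunning($daemon['pid']) ? 'running' : 'dead';
  }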
Test Plan:
- run db migration, check in db that all daemons were marked as exited
- start up a daemon, check in db that it is marked as running
- open web interface, check that daemon is listed as running
- after daemon has been running for a little bit, check in db that dateModified
is being updated (indicating daemon is properly sending heartbeat)
- kill -9 daemon (but don't run bin/phd yet), and check that db still shows it
as running
- edit daemon db entry to show it as being on a different host, and backdate
dateModified field by 3 minutes, and check the web ui to show that the status
is unknown.
- change db entry to have proper host, check in web ui that daemon status is
displayed as dead. Check db to see that the status was saved.
- run bin/phd stop, and see that the formerly dead daemon is now exited.
Reviewers: epriestley, vrana
Reviewed By: epriestley
CC: aran, Korvin
Differential Revision: https://secure.phabricator.com/D3126
Summary: You need to use -- to separate arguments for phd and the daemon.
Test Plan: Ran with the extra --.
Reviewers: epriestley, vrana
Reviewed By: vrana
CC: aran, Korvin
Differential Revision: https://secure.phabricator.com/D3106
Summary:
Some facts are aggregations of other facts. For example, we may compute how many times each macro is used in each object as a "raw fact":
  Dnnn uses macro "psyduck" 6 times.
But we want to present this data in aggregate form, e.g. "order macros by popularity". We can do this at runtime and it probably won't be too awful a query, but we can also aggregate it cheaply:
Macro "psyduck" is used 3920 times across all objects.
...and then do a query like "select macros ordered by usage".
"Aggregate" facts support facts like this. The aggregate facts I've implemented are:
- Count of all objects.
- Count of objects of type X.
- Last time facts were updated.
These clearly fit the "aggregate" facts template well. I'm not 100% sure macros do. We can use this table to answer a question like "What are the most popular macros, ordered by use?" We can also use it to answer a question like "What are the most popular macros in the last 6 months?", if we build a specific fact for that. But we can't use it to answer a question like "What are the most popular macros between times X and Y?". Maybe that's important; maybe not.
This seems like a good fit for at least some types of facts.
I'll de-magic the keys a bit in the next diff.
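A minimal sketch of the rollup for the macro example, treating raw facts as plain arrays rather than the real storage objects (the "FILE:macro" key and field meanings follow the schema described in D3078):
  // Roll per-object macro counts up into one total per macro, so "order
  // macros by popularity" becomes a cheap read of the aggregate table.
  function aggregateMacroCounts(array $raw_facts) {
    $totals = array();
    foreach ($raw_facts as $fact) {
      if ($fact['factType'] !== 'FILE:macro') {
        continue;
      }
      // objectA is the macro PHID; valueX is the per-object use count.
      $macro = $fact['objectA'];
      $count = isset($totals[$macro]) ? $totals[$macro] : 0;
      $totals[$macro] = $count + $fact['valueX'];
    }
    arsort($totals);  // most-used macros first
    return $totals;
  }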
Test Plan: Ran the engines and got some aggregated facts about other facts.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Maniphest Tasks: T1562
Differential Revision: https://secure.phabricator.com/D3089
Summary:
Basic "Fact" application with some storage, part of a daemon, and a control binary.
= Goals =
The general idea is that we have various statistics we'd like to compute, like the frequency of image macros, reviewer responsiveness, task close rates, etc. Computing these on page load is expensive and messy. By building an ETL pipeline and running it in a daemon, we can precompute statistics and just pull them out of "stats" tables.
One way to do this is just to completely hard-code everything, e.g. have a daemon that runs every hour which issues a big-ass query and dumps results into a table per-fact or per fact-group. But this has a bunch of drawbacks: adding new stuff to the pipeline is a pain, various fact aggregators can't share much code, updates are slow and expensive, we can never build generic graphs on top of it, etc.
I'm hoping to build an ETL pipeline which is generic enough that we can use it for most things we're interested in without needing schema changes, and so that installs can use it also without needing schema changes, while still being specific enough that it's fast and we can build useful stuff on top of it. I'm not sure if this will actually work, but it would be cool if it does so I'm starting pretty generally and we'll see how far I get. I haven't built this exact sort of thing before so I might be way off.
I'm basing the whole thing on analyzing entire objects, not analyzing changes to objects. So each part of the pipeline is handed an object and told "analyze this", not handed a change. It pretty much deletes all the old data about that thing and then writes new data. I think this is simpler to implement and understand, and it protects us from all sorts of weird issues where we end up with some kind of garbage in the DB and have to wipe the whole thing.
= Facts =
The general idea is that we extract "facts" out of objects, and then the various view interfaces just report those facts. This change has one type of fact, a "raw fact", which is directly derived from an object. These facts are concrete and relate specifically to the object they are derived from. Some examples of such facts might be:
  D123 has 9 comments.
  D123 uses macro "psyduck" 15 times.
  D123 adds 35 lines.
  D123 has 5 files.
  D123 has 1 object.
  D123 has 1 object of type "DREV".
  D123 was created at epoch timestamp 89812351235.
  D123 was accepted by @alincoln at epoch timestamp 8397981839.
The fact storage looks like this:
  <factType, objectPHID, objectA, valueX, valueY, epoch>
Currently, we support one optional secondary key (like a user PHID or macro PHID), two optional integer values, and an optional timestamp. We might add more later. Each fact type can use these fields if it wants. Some facts use them, others don't. For instance, this diff adds a "N:*" fact, which is just the count of total objects in the system. These facts just look like:
  <"N:*", "PHID-xxxx-yyyy", ...>
...where all other fields are ignored. But some of the more complex facts might look like:
  <"DREV:accept", "PHID-DREV-xxxx", "PHID-USER-yyyy", ..., ..., nnnn> # User 'yyyy' accepted at epoch 'nnnn'.
  <"FILE:macro", "PHID-DREV-xxxx", "PHID-MACR-yyyy", 17, ..., ...> # Object 'xxxx' uses macro 'yyyy' 17 times.
Facts have no uniqueness constraints. For @vrana's reviewer responsiveness stuff, we can insert multiple rows for each reviewer, e.g.
  <"DREV:reviewed", "PHID-DREV-xxxx", "PHID-USER-yyyy", nnnn, ..., mmmm> # User 'yyyy' reviewed revision 'xxxx' after 'nnnn' seconds at 'mmmm'.
The second value (valueY) is mostly because we need it if we sample anything (valueX = observed value, valueY = sample rate) but there might be other uses. We might need to add "objectB" at some point too -- currently we can't represent a fact like "User X used macro Y on revision Z", so it would be impossible to compute macro use rates //for a specific user// based on this schema. I think we can start here though and see how far we get.
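Concretely, the "uses macro" example above could be represented as a row like this (a plain-array sketch of the tuple; the real storage is a Lisk object):
  $fact = array(
    'factType'   => 'FILE:macro',
    'objectPHID' => 'PHID-DREV-xxxx',  // the object the fact describes
    'objectA'    => 'PHID-MACR-yyyy',  // optional secondary key (macro PHID)
    'valueX'     => 17,                // optional integer (use count here)
    'valueY'     => null,              // optional integer (e.g., sample rate)
    'epoch'      => null,              // optional timestamp
  );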
= Aggregated Facts =
These aren't implemented yet, but the idea is that we can then take the "raw facts" and compute derived/aggregated/rollup facts based on the raw fact table. For example, the "count" fact can be aggregated to arrive at a count of all objects in the system. This stuff will live in a separate table which does have uniqueness constraints, and come in the next diff.
We might need some kind of time series facts too, not sure about that. I think most of our use cases today are covered by raw facts + aggregated facts.
Test Plan: Ran `bin/fact` commands and verified they seemed to do reasonable things.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran, majak
Maniphest Tasks: T1562
Differential Revision: https://secure.phabricator.com/D3078
Summary: PhutilBufferedIterator now implements all the nonspecific logic here.
Test Plan:
Created a test script like this:
  $iterator = new LiskMigrationIterator(new DifferentialRevision());
  $iterator->setPageSize(3);
  foreach ($iterator as $key => $rev) {
    echo "{$key}: ".$rev->getID()."\n";
  }
Ran it and verified sensible iteration results.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Maniphest Tasks: T1562
Differential Revision: https://secure.phabricator.com/D3076
Summary: Also move the other tests up so they'll trigger when this stuff is touched.
Test Plan: liberate
Reviewers: nh, btrahan, vrana
Reviewed By: nh
CC: aran
Maniphest Tasks: T1283
Differential Revision: https://secure.phabricator.com/D3074
Summary:
Currently, we have this cumbersome `PhabricatorRepositoryCommitMessageDetailParser` hook. This is really old and outdated; I want to just use the Differential custom field parser. See T945 for a specific application.
However, it allows installs to override author/committer association. Instead, provide an event hook for doing this.
Test Plan: Added a listener, made every commit resolve to "turtle", parsed some commits, verified the events looked sane and they now correctly were all attributed to "turtle".
Reviewers: btrahan, vrana, nh
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1337
Differential Revision: https://secure.phabricator.com/D3040
Summary: Currently, MySQL/MySQLi connections store passwords in plain text on the object. Allow them to be stored in PhutilOpaqueEnvelopes instead. See D3053.
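The envelope pattern in a nutshell (a minimal sketch; the surrounding connection code is omitted):
  // Wrap the password so it doesn't leak via stack traces or var_dump()-style
  // output of the connection object; open it only at connect time.
  $envelope = new PhutilOpaqueEnvelope($password);
  // ...later, immediately before establishing the connection...
  $raw_password = $envelope->openEnvelope();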
Test Plan: Loaded site.
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Differential Revision: https://secure.phabricator.com/D3054
Summary: We pull "retries" and a doc link from PhabricatorEnv directly. Break these dependencies so the classes can move to libphutil.
Test Plan: Browsed site, triggered a schema exception and verified I still got the useful footer text.
Reviewers: btrahan, vrana
Reviewed By: vrana
CC: aran
Maniphest Tasks: T1283
Differential Revision: https://secure.phabricator.com/D3053
Summary: Add an explicit close() method to connections and call it in GlobalLock.
Test Plan:
Wrote a script like this:
  $lock = PhabricatorGlobalLock::newLock('test');
  echo "LOCK";
  $lock->lock();
  sleep(10);
  echo "UNLOCK";
  $lock->unlock();
  sleep(9999);
Using `SHOW FULL PROCESSLIST`, verified the connection closed after 10 seconds with both the "MySQL" and "MySQLi" implementations.
Reviewers: btrahan, vrana
Reviewed By: vrana
CC: aran
Maniphest Tasks: T1470
Differential Revision: https://secure.phabricator.com/D3035
Summary: created a PhabricatorInlineCommentPreviewController so controllers in Diffusion and Differential respectively just have to handle the URI mapping and data loading like good little controllers.
Test Plan:
left inline comments on commits, deleted inline comments, submitted inline comments -- all worked well
did the same on some diffs
Reviewers: epriestley
Reviewed By: epriestley
CC: aran, Korvin
Maniphest Tasks: T1176
Differential Revision: https://secure.phabricator.com/D3034
Summary:
- Add edges for this relationship.
- Use edges to store this data.
- Migrate old data.
- Fix some warnings with generating feed stories about Aux and Edge transactions.
- Fix a task-task edge issue with "Create Subtask".
Test Plan:
- Migrated data, verified revisions showed up.
- Attached and detached tasks to revisions and vice versa.
- Created a new revision with attached tasks.
- Created a subtask.
Reviewers: btrahan, vrana
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D3018
Summary: These are sort of silly anyway since they should all have the actor in them rather than being sentence fragments, but make them work OK for English at least. See D3013.
Test Plan:
Ran:
  echo pht('added %d dependencie(s): %s', 1, 'derp')."\n";
  echo pht('added %d dependencie(s): %s', 2, 'derp, derp')."\n";
Got:
  added dependency: derp
  added dependencies: derp, derp
Reviewers: vrana, btrahan
Reviewed By: vrana
CC: aran
Differential Revision: https://secure.phabricator.com/D3015
Summary:
blogs are collections of posts. a blog also has metadata like a name, description, and "bloggers" who can edit the metadata of the blog and contribute posts.
changes include the post edit flow, where bloggers can now select which blogs to publish to. also made various small tweaks throughout the UI to keep things sensible and clean as the concept of blogs is introduced.
there are edges powering this stuff: bloggers <=> blogs and posts <=> blogs in particular.
Test Plan:
made blogs, deleted blogs, tried to make blogs with no bloggers. all went well.
verified ui to publish only showed up for public posts, published posts to blogs, un-published posts to blogs, re-published posts to blogs, deleted posts and verified they disappeared from blogs.
Reviewers: epriestley
Reviewed By: epriestley
CC: aran, Korvin
Maniphest Tasks: T1373
Differential Revision: https://secure.phabricator.com/D3003
Summary: See D3006. Move this data to the edge store.
Test Plan:
- Created dependencies, migrated, verified dependencies were preserved.
- Added new dependencies, they worked.
Reviewers: vrana, btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D3007
Summary:
- Use edges to store "X depends on Y" information in Maniphest.
- Show both "Depends On" and "Dependent Tasks".
- Migrate all the old edges.
Test Plan:
- Added some relationships, migrated, verified they were preserved.
- Added some new valid relationships, verified tasks got updated with sensible transactions and sent reasonable emails.
- Tried to add a cycle, got an ugly but effective error.
Reviewers: vrana, btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D3006
Summary: This should simplify a bunch of stuff in D3006 and D3003.
Test Plan: Will update D3006.
Reviewers: btrahan, vrana
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D3010
Summary: See D3006, D3007. Make it easier to do migrations like that without holding all results in memory.
Test Plan:
Ran this code with an artificially small page size (2):
  foreach (new LiskMigrationIterator(new DifferentialRevision()) as $rev) {
    echo "Revision ".$rev->getID()."\n";
  }
Verified each revision was loaded and processed.
Reviewers: vrana, btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D3008
'phd status' should have a stable result when invoked multiple times.
Automatically removing PID files for dead daemons every time 'phd status' is
invoked prevents tools from noticing that a daemon has died if something
happens to invoke 'phd status' before the tool looks. This affects Puppet
noticeably, since it probably runs the status command every half hour.
'phd status' may be invoked by tools (such as puppet) which need to make
automated decisions about whether to start/restart the daemon. To enable this,
'phd status' now exits with 0 if all daemons are running, 1 if no daemons are
running, and 2 if some (but not all) daemons are running.
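A minimal sketch of the exit-code contract (a hypothetical helper, not the phd source):
  // 0 = all daemons running, 1 = none running, 2 = some but not all.
  function statusExitCode($running, $total) {
    if ($running == 0) {
      return 1;
    }
    if ($running == $total) {
      return 0;
    }
    return 2;
  }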
Summary:
Lisk currently behaves in two different ways if you call it like `load("cow")` (throws) versus `load(99999999)` (returns null), where neither ID exists.
This was intended to catch programming errors as distinct from missing data, but in practice the former is very rare and you have to handle the latter in most cases anyway. The case where you pass "0" is particularly confusing. See D2971 for an example.
On the balance, I think this ends up being far more confusing than helpful. Instead, just return NULL if we're sure there's no such object.
Test Plan: Reasoned about program behavior.
Reviewers: alanh, btrahan, vrana
Reviewed By: vrana
CC: aran
Differential Revision: https://secure.phabricator.com/D2977
Summary: http://www.networksorcery.com/enp/protocol/irc.htm
Test Plan: augment code with an additional debug line (phlog('hi');) so I can see my case was triggered and it will fall through. set up an ill-configured IRC server with ngircd. Configure an ircbot to connect to said ill-configured IRC server. verify ircbot connected to channel. verify in irc bot logs that debug line was invoked.
Reviewers: epriestley
Reviewed By: epriestley
CC: aran, Korvin
Maniphest Tasks: T1452
Differential Revision: https://secure.phabricator.com/D2962
Summary: Allow the GC daemon to collect the new markup cache.
Test Plan: Ran gc daemon in "debug" mode, saw it collect cache entries.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D2947
Summary:
- See D2945.
- Drop `cache` field from ManiphestTransaction.
- Render task descriptions and transactions through PhabricatorMarkupEngine.
- Also pull the list of macros more lazily.
Test Plan:
- Verified transactions and transaction preview work correctly and interact with cache correctly.
- Verified tasks descriptions and task preview work correctly.
- Verified we don't hit the imagemacro table when we're rendering everything from cache anymore.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D2946
Summary:
The immediate issue this addresses is T1366, adding a rendering cache to Phriction. For wiki pages with code blocks especially, rerendering them each time is expensive.
The broader issue is that our markup caches aren't very good right now. They have three major problems:
**Problem 1: the data is stored in the wrong place.** We currently store remarkup caches on objects. This means we're always loading it and passing it around even when we don't need it, can't genericize cache management code (e.g., have one simple script to drop/GC caches), need to update authoritative rows to clear caches, and can't genericize rendering code since each object is different.
To solve this, I created a dedicated cache database that I plan to move all markup caches to use.
**Problem 2: time-variant rules break when cached.** Some rules like `**bold**` are time-invariant and always produce the same output, but some rules like `{Tnnn}` and `@username` are variant and may render differently (because a task was closed or a user is on vacation). Currently, we cache the raw output, so these time-variant rules get locked at whatever values they had when they were first rendered. This is the main reason Phriction doesn't have a cache right now -- I wanted `{Tnnn}` rules to reflect open/closed tasks.
To solve this, I split markup into a "preprocessing" phase (which does all the parsing and evaluates all time-invariant rules) and a "postprocessing" phase (which evaluates time-variant rules only). The preprocessing phase is most of the expense (and, notably, includes syntax highlighting) so this is nearly as good as caching the final output. I did most of the work here in D737 / D738, but we never moved to use it in Phabricator -- we currently just do the two operations serially in all cases.
This diff splits them apart and caches the output of preprocessing only, so we benefit from caching but also get accurate time-variant rendering.
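A minimal sketch of the resulting render path, with hypothetical helper names (the real flow goes through PhabricatorMarkupEngine and the new cache table):
  // Only the expensive, time-invariant preprocessing output is cached; the
  // cheap time-variant postprocessing runs on every view, so rules like
  // {Tnnn} and @username always reflect current state.
  function renderMarkup($cache_key, $corpus) {
    $preprocessed = markupCacheGet($cache_key);      // hypothetical
    if ($preprocessed === null) {
      $preprocessed = remarkupPreprocess($corpus);   // hypothetical: parse,
                                                     // highlight, etc.
      markupCacheSet($cache_key, $preprocessed);     // hypothetical
    }
    return remarkupPostprocess($preprocessed);       // hypothetical: {Tnnn},
                                                     // @username, etc.
  }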
**Problem 3: cache access isn't batched/pipelined optimally.** When we're rendering a list of markup blocks, we should be able to batch datafetching better than we do. D738 helped with this (fetching is batched within a single hunk of markup) and this improves batching on cache access. We could still do better here, but this is at least a step forward.
Also fixes a bug with generating a link in the Phriction history interface ($uri gets clobbered).
I'm using PHP serialization instead of JSON serialization because Remarkup does some stuff with non-ascii characters that might not survive JSON.
Test Plan:
- Created a Phriction document and verified that previews don't go to cache (no rows appear in the cache table).
- Verified that published documents come out of cache.
- Verified that caches generate/regenerate correctly, time-variant rules render properly and old documents hit the right caches.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1366
Differential Revision: https://secure.phabricator.com/D2945
Summary:
Certain types of things we should be storing in edges (notably, Task X depends on Task Y) should always be acyclic. Allow `PhabricatorEdgeEditor` to enforce this, since we can't correctly enforce it outside of the editor without being vulnerable to races.
Each edge type can be marked acyclic. If an edge type is acyclic, we perform additional steps when writing new edges of that type:
- We acquire a global lock on the edge type before performing any reads or writes. This ensures we can't produce a cycle as a result of a race where two edits add edges which independently do not produce a cycle, but do produce a cycle when combined.
- After performing writes but before committing transactions, we load the edge graph for each acyclic type and verify that it is, in fact, acyclic. If we detect cycles, we abort the edit.
- When we're done, we release the edge type locks.
This is a relatively high-complexity change, but gives us a simple way to flag an edge type as acyclic in a robust way.
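For reference, the post-write check reduces to ordinary cycle detection; here is a minimal sketch over a plain adjacency map (not the real edge query code):
  // $edges maps each node to the list of nodes it points at for one acyclic
  // edge type. Return true if the graph contains a cycle, in which case the
  // edit is aborted before the transactions commit.
  function graphHasCycle(array $edges) {
    $state = array();  // 'open' = on the current DFS path, 'done' = finished
    foreach (array_keys($edges) as $node) {
      if (visitFindsBackEdge($node, $edges, $state)) {
        return true;
      }
    }
    return false;
  }
  function visitFindsBackEdge($node, array $edges, array &$state) {
    if (isset($state[$node])) {
      return ($state[$node] == 'open');  // back edge => cycle
    }
    $state[$node] = 'open';
    $targets = isset($edges[$node]) ? $edges[$node] : array();
    foreach ($targets as $next) {
      if (visitFindsBackEdge($next, $edges, $state)) {
        return true;
      }
    }
    $state[$node] = 'done';
    return false;
  }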
Test Plan: Ran unit tests.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D2940
Summary:
Currently, multiple unit tests that acquire global locks will interfere with each other. Namespace the locks so they don't.
(Possibly we should also rename this to PhabricatorStorageNamespaceLock or something since it's not really global any more, but that's kind of unwieldy...)
Test Plan: Acquired locks with --trace and verified they were namespaced properly.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D2939
Summary:
In unit tests which use fixtures, we open transactions on every connection we establish. However, since we don't track connections that are established with "$force_new" (currently, only GlobalLock connections) we never close these transactions normally.
Instead of not tracking these connections, track them using unique keys so we'll never get a cache hit on them.
Test Plan: Built unit tests on top of this, had them stop dying from unclosed transactions.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D2938
Summary:
A later diff adds unit tests against edges, but we need real objects to connect with edges. Add some trivial objects to the Harbormaster database to complement the similar HarbormasterScratchTable.
On its own, this does nothing interesting.
Test Plan: Built unit tests on this in a followup.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Maniphest Tasks: T1162
Differential Revision: https://secure.phabricator.com/D2937
Summary:
See discussion here:
https://secure.phabricator.com/chatlog/channel/%23phabricator/?at=21186
Basically, MySQL usually raises a good error if we exceed "max_allowed_packet":
  EXCEPTION: (AphrontQueryException) #1153: Got a packet bigger than 'max_allowed_packet' bytes
But sometimes it gives us a #2006 instead. This is documented, at least:
>"With some clients, you may also get a Lost connection to MySQL server during query error if the communication packet is too large."
http://dev.mysql.com/doc/refman//5.5/en/packet-too-large.html
Try to improve the error message to point at this as a possible explanation.
Test Plan: Faked an error, had it throw, read exception message. See also chatlog.
Reviewers: btrahan, skrul
Reviewed By: skrul
CC: aran
Differential Revision: https://secure.phabricator.com/D2923
Summary: See D2924.
Test Plan: Ran locks with blocking timeouts.
Reviewers: btrahan
Reviewed By: btrahan
CC: aran
Differential Revision: https://secure.phabricator.com/D2925