phorge-phorge/scripts/fact/manage_facts.php
Add a basic "fact" application Summary: Basic "Fact" application with some storage, part of a daemon, and a control binary. = Goals = The general idea is that we have various statistics we'd like to compute, like the frequency of image macros, reviewer responsiveness, task close rates, etc. Computing these on page load is expensive and messy. By building an ETL pipeline and running it in a daemon, we can precompute statistics and just pull them out of "stats" tables. One way to do this is just to completely hard-code everything, e.g. have a daemon that runs every hour which issues a big-ass query and dumps results into a table per-fact or per fact-group. But this has a bunch of drawbacks: adding new stuff to the pipeline is a pain, various fact aggregators can't share much code, updates are slow and expensive, we can never build generic graphs on top of it, etc. I'm hoping to build an ETL pipeline which is generic enough that we can use it for most things we're interested in without needing schema changes, and so that installs can use it also without needing schema changes, while still being specific enough that it's fast and we can build useful stuff on top of it. I'm not sure if this will actually work, but it would be cool if it does so I'm starting pretty generally and we'll see how far I get. I haven't built this exact sort of thing before so I might be way off. I'm basing the whole thing on analyzing entire objects, not analyzing changes to objects. So each part of the pipeline is handed an object and told "analyze this", not handed a change. It pretty much deletes all the old data about that thing and then writes new data. I think this is simpler to implement and understand, and it protects us from all sorts of weird issues where we end up with some kind of garbage in the DB and have to wipe the whole thing. = Facts = The general idea is that we extract "facts" out of objects, and then the various view interfaces just report those facts. This change has on type of fact, a "raw fact", which is directly derived from an object. These facts are concerete and relate specifically to the object they are derived from. Some examples of such facts might be: D123 has 9 comments. D123 uses macro "psyduck" 15 times. D123 adds 35 lines. D123 has 5 files. D123 has 1 object. D123 has 1 object of type "DREV". D123 was created at epoch timestamp 89812351235. D123 was accepted by @alincoln at epoch timestamp 8397981839. The fact storage looks like this: <factType, objectPHID, objectA, valueX, valueY, epoch> Currently, we supprot one optional secondary key (like a user PHID or macro PHID), two optional integer values, and an optional timestamp. We might add more later. Each fact type can use these fields if it wants. Some facts use them, others don't. For instance, this diff adds a "N:*" fact, which is just the count of total objects in the system. These facts just look like: <"N:*", "PHID-xxxx-yyyy", ...> ...where all other fields are ignored. But some of the more complex facts might look like: <"DREV:accept", "PHID-DREV-xxxx", "PHID-USER-yyyy", ..., ..., nnnn> # User 'yyyy' accepted at epoch 'nnnn'. <"FILE:macro", "PHID-DREV-xxxx", "PHID-MACR-yyyy", 17, ..., ...> # Object 'xxxx' uses macro 'yyyy' 17 times. Facts have no uniqueness constraints. For @vrana's reviewer responsiveness stuff, we can insert multiple rows for each reviewer, e.g. <"DREV:reviewed", "PHID-DREV-xxxx", "PHID-USER-yyyy", nnnn, ..., mmmm> # User 'yyyy' reviewed revision 'xxxx' after 'nnnn' seconds at 'mmmm'. 
The second value (valueY) is mostly because we need it if we sample anything (valueX = observed value, valueY = sample rate) but there might be other uses. We might need to add "objectB" at some point too -- currently we can't represent a fact like "User X used macro Y on revision Z", so it would be impossible to compute macro use rates //for a specific user// based on this schema. I think we can start here though and see how far we get. = Aggregated Facts = These aren't implemented yet, but the idea is that we can then take the "raw facts" and compute derived/aggregated/rollup facts based on the raw fact table. For example, the "count" fact can be aggregated to arrive at a count of all objects in the system. This stuff will live in a separate table which does have uniqueness constraints, and come in the next diff. We might need some kind of time series facts too, not sure about that. I think most of our use cases today are covered by raw facts + aggregated facts. Test Plan: Ran `bin/fact` commands and verified they seemed to do reasonable things. Reviewers: vrana, btrahan Reviewed By: vrana CC: aran, majak Maniphest Tasks: T1562 Differential Revision: https://secure.phabricator.com/D3078
2012-07-27 22:34:21 +02:00
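To make the storage tuple above concrete, here is a minimal PHP sketch of how a few raw fact rows map onto <factType, objectPHID, objectA, valueX, valueY, epoch>. It is illustration only: the array keys simply mirror the tuple fields (they are not the actual column names), and the PHID and epoch values are the placeholders from the examples above.

  <?php

  // Hypothetical illustration only -- not part of this change. Keys mirror
  // the <factType, objectPHID, objectA, valueX, valueY, epoch> tuple;
  // fields a fact type does not use are simply left null.
  $example_rows = array(
    // "N:*": the object exists; every other field is ignored.
    array(
      'factType'   => 'N:*',
      'objectPHID' => 'PHID-DREV-xxxx',
      'objectA'    => null,
      'valueX'     => null,
      'valueY'     => null,
      'epoch'      => null,
    ),
    // "DREV:accept": user 'yyyy' accepted revision 'xxxx' at a timestamp,
    // so the secondary key and the epoch are used.
    array(
      'factType'   => 'DREV:accept',
      'objectPHID' => 'PHID-DREV-xxxx',
      'objectA'    => 'PHID-USER-yyyy',
      'valueX'     => null,
      'valueY'     => null,
      'epoch'      => 8397981839,
    ),
    // "FILE:macro": revision 'xxxx' uses macro 'yyyy' 17 times, so the
    // secondary key and the first integer value are used.
    array(
      'factType'   => 'FILE:macro',
      'objectPHID' => 'PHID-DREV-xxxx',
      'objectA'    => 'PHID-MACR-yyyy',
      'valueX'     => 17,
      'valueY'     => null,
      'epoch'      => null,
    ),
  );

Because the raw table has no uniqueness constraints, several such rows can coexist for the same objectPHID and factType, which is exactly what the "DREV:reviewed" per-reviewer example relies on.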
#!/usr/bin/env php
<?php
// Locate the repository root and load the standard script initialization.
$root = dirname(dirname(dirname(__FILE__)));
require_once $root.'/scripts/__init_script__.php';
$args = new PhutilArgumentParser($argv);
$args->setTagline(pht('manage fact configuration'));
Add a basic "fact" application Summary: Basic "Fact" application with some storage, part of a daemon, and a control binary. = Goals = The general idea is that we have various statistics we'd like to compute, like the frequency of image macros, reviewer responsiveness, task close rates, etc. Computing these on page load is expensive and messy. By building an ETL pipeline and running it in a daemon, we can precompute statistics and just pull them out of "stats" tables. One way to do this is just to completely hard-code everything, e.g. have a daemon that runs every hour which issues a big-ass query and dumps results into a table per-fact or per fact-group. But this has a bunch of drawbacks: adding new stuff to the pipeline is a pain, various fact aggregators can't share much code, updates are slow and expensive, we can never build generic graphs on top of it, etc. I'm hoping to build an ETL pipeline which is generic enough that we can use it for most things we're interested in without needing schema changes, and so that installs can use it also without needing schema changes, while still being specific enough that it's fast and we can build useful stuff on top of it. I'm not sure if this will actually work, but it would be cool if it does so I'm starting pretty generally and we'll see how far I get. I haven't built this exact sort of thing before so I might be way off. I'm basing the whole thing on analyzing entire objects, not analyzing changes to objects. So each part of the pipeline is handed an object and told "analyze this", not handed a change. It pretty much deletes all the old data about that thing and then writes new data. I think this is simpler to implement and understand, and it protects us from all sorts of weird issues where we end up with some kind of garbage in the DB and have to wipe the whole thing. = Facts = The general idea is that we extract "facts" out of objects, and then the various view interfaces just report those facts. This change has on type of fact, a "raw fact", which is directly derived from an object. These facts are concerete and relate specifically to the object they are derived from. Some examples of such facts might be: D123 has 9 comments. D123 uses macro "psyduck" 15 times. D123 adds 35 lines. D123 has 5 files. D123 has 1 object. D123 has 1 object of type "DREV". D123 was created at epoch timestamp 89812351235. D123 was accepted by @alincoln at epoch timestamp 8397981839. The fact storage looks like this: <factType, objectPHID, objectA, valueX, valueY, epoch> Currently, we supprot one optional secondary key (like a user PHID or macro PHID), two optional integer values, and an optional timestamp. We might add more later. Each fact type can use these fields if it wants. Some facts use them, others don't. For instance, this diff adds a "N:*" fact, which is just the count of total objects in the system. These facts just look like: <"N:*", "PHID-xxxx-yyyy", ...> ...where all other fields are ignored. But some of the more complex facts might look like: <"DREV:accept", "PHID-DREV-xxxx", "PHID-USER-yyyy", ..., ..., nnnn> # User 'yyyy' accepted at epoch 'nnnn'. <"FILE:macro", "PHID-DREV-xxxx", "PHID-MACR-yyyy", 17, ..., ...> # Object 'xxxx' uses macro 'yyyy' 17 times. Facts have no uniqueness constraints. For @vrana's reviewer responsiveness stuff, we can insert multiple rows for each reviewer, e.g. <"DREV:reviewed", "PHID-DREV-xxxx", "PHID-USER-yyyy", nnnn, ..., mmmm> # User 'yyyy' reviewed revision 'xxxx' after 'nnnn' seconds at 'mmmm'. 
The second value (valueY) is mostly because we need it if we sample anything (valueX = observed value, valueY = sample rate) but there might be other uses. We might need to add "objectB" at some point too -- currently we can't represent a fact like "User X used macro Y on revision Z", so it would be impossible to compute macro use rates //for a specific user// based on this schema. I think we can start here though and see how far we get. = Aggregated Facts = These aren't implemented yet, but the idea is that we can then take the "raw facts" and compute derived/aggregated/rollup facts based on the raw fact table. For example, the "count" fact can be aggregated to arrive at a count of all objects in the system. This stuff will live in a separate table which does have uniqueness constraints, and come in the next diff. We might need some kind of time series facts too, not sure about that. I think most of our use cases today are covered by raw facts + aggregated facts. Test Plan: Ran `bin/fact` commands and verified they seemed to do reasonable things. Reviewers: vrana, btrahan Reviewed By: vrana CC: aran, majak Maniphest Tasks: T1562 Differential Revision: https://secure.phabricator.com/D3078
2012-07-27 22:34:21 +02:00
$args->setSynopsis(<<<EOSYNOPSIS
**fact** __command__ [__options__]
    Manage and debug Phabricator data extraction, storage and
    configuration used to compute statistics.
EOSYNOPSIS
);
$args->parseStandardArguments();
// Load every available fact management workflow via the library class map.
$workflows = id(new PhutilClassMapQuery())
  ->setAncestorClass('PhabricatorFactManagementWorkflow')
  ->execute();
$workflows[] = new PhutilHelpArgumentWorkflow();
Add a basic "fact" application Summary: Basic "Fact" application with some storage, part of a daemon, and a control binary. = Goals = The general idea is that we have various statistics we'd like to compute, like the frequency of image macros, reviewer responsiveness, task close rates, etc. Computing these on page load is expensive and messy. By building an ETL pipeline and running it in a daemon, we can precompute statistics and just pull them out of "stats" tables. One way to do this is just to completely hard-code everything, e.g. have a daemon that runs every hour which issues a big-ass query and dumps results into a table per-fact or per fact-group. But this has a bunch of drawbacks: adding new stuff to the pipeline is a pain, various fact aggregators can't share much code, updates are slow and expensive, we can never build generic graphs on top of it, etc. I'm hoping to build an ETL pipeline which is generic enough that we can use it for most things we're interested in without needing schema changes, and so that installs can use it also without needing schema changes, while still being specific enough that it's fast and we can build useful stuff on top of it. I'm not sure if this will actually work, but it would be cool if it does so I'm starting pretty generally and we'll see how far I get. I haven't built this exact sort of thing before so I might be way off. I'm basing the whole thing on analyzing entire objects, not analyzing changes to objects. So each part of the pipeline is handed an object and told "analyze this", not handed a change. It pretty much deletes all the old data about that thing and then writes new data. I think this is simpler to implement and understand, and it protects us from all sorts of weird issues where we end up with some kind of garbage in the DB and have to wipe the whole thing. = Facts = The general idea is that we extract "facts" out of objects, and then the various view interfaces just report those facts. This change has on type of fact, a "raw fact", which is directly derived from an object. These facts are concerete and relate specifically to the object they are derived from. Some examples of such facts might be: D123 has 9 comments. D123 uses macro "psyduck" 15 times. D123 adds 35 lines. D123 has 5 files. D123 has 1 object. D123 has 1 object of type "DREV". D123 was created at epoch timestamp 89812351235. D123 was accepted by @alincoln at epoch timestamp 8397981839. The fact storage looks like this: <factType, objectPHID, objectA, valueX, valueY, epoch> Currently, we supprot one optional secondary key (like a user PHID or macro PHID), two optional integer values, and an optional timestamp. We might add more later. Each fact type can use these fields if it wants. Some facts use them, others don't. For instance, this diff adds a "N:*" fact, which is just the count of total objects in the system. These facts just look like: <"N:*", "PHID-xxxx-yyyy", ...> ...where all other fields are ignored. But some of the more complex facts might look like: <"DREV:accept", "PHID-DREV-xxxx", "PHID-USER-yyyy", ..., ..., nnnn> # User 'yyyy' accepted at epoch 'nnnn'. <"FILE:macro", "PHID-DREV-xxxx", "PHID-MACR-yyyy", 17, ..., ...> # Object 'xxxx' uses macro 'yyyy' 17 times. Facts have no uniqueness constraints. For @vrana's reviewer responsiveness stuff, we can insert multiple rows for each reviewer, e.g. <"DREV:reviewed", "PHID-DREV-xxxx", "PHID-USER-yyyy", nnnn, ..., mmmm> # User 'yyyy' reviewed revision 'xxxx' after 'nnnn' seconds at 'mmmm'. 
The second value (valueY) is mostly because we need it if we sample anything (valueX = observed value, valueY = sample rate) but there might be other uses. We might need to add "objectB" at some point too -- currently we can't represent a fact like "User X used macro Y on revision Z", so it would be impossible to compute macro use rates //for a specific user// based on this schema. I think we can start here though and see how far we get. = Aggregated Facts = These aren't implemented yet, but the idea is that we can then take the "raw facts" and compute derived/aggregated/rollup facts based on the raw fact table. For example, the "count" fact can be aggregated to arrive at a count of all objects in the system. This stuff will live in a separate table which does have uniqueness constraints, and come in the next diff. We might need some kind of time series facts too, not sure about that. I think most of our use cases today are covered by raw facts + aggregated facts. Test Plan: Ran `bin/fact` commands and verified they seemed to do reasonable things. Reviewers: vrana, btrahan Reviewed By: vrana CC: aran, majak Maniphest Tasks: T1562 Differential Revision: https://secure.phabricator.com/D3078
2012-07-27 22:34:21 +02:00
$args->parseWorkflows($workflows);
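For completeness, a hypothetical sketch of the whole-object analysis model the summary describes: the pipeline is handed an object, discards whatever facts it previously recorded about it, and writes a fresh set. The function below and its in-memory "table" are invented for illustration; they are not part of this script or of the Fact application's actual API.

  <?php

  // Hypothetical sketch only: an in-memory array stands in for the real
  // fact storage. The point is the rebuild pattern -- analyze the whole
  // object rather than applying incremental changes.
  function example_rebuild_facts(array $table, $object_phid, array $fresh) {
    // Drop every fact previously recorded about this object...
    $table = array_filter($table, function ($row) use ($object_phid) {
      return ($row['objectPHID'] !== $object_phid);
    });

    // ...then append the freshly computed rows. Because old rows never
    // survive a rebuild, stale or garbage facts about an object cannot
    // accumulate in the table.
    foreach ($fresh as $row) {
      $row['objectPHID'] = $object_phid;
      $table[] = $row;
    }

    return $table;
  }

As a usage note, the `bin/fact` commands mentioned in the test plan go through this script, and the PhutilHelpArgumentWorkflow registered above provides the `help` workflow that lists the fact management workflows available on an install.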