Summary:
Ref T12563. Before broadcasting messages from the server, store them in a history buffer.
A future change will let clients retrieve them.
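Roughly, the buffer behaves like the sketch below. This is only an illustration: the class and method names are made up, and the 60-second retention and size cap are just the values exercised in the test plan, not necessarily hard-coded constants.

```
// Illustrative sketch only: a history buffer bounded by age and by size.
// The 60-second retention and the size cap are the values exercised in the
// test plan below; the class and method names are hypothetical.
function NotificationHistory(ttl, sizeLimit) {
  this._ttl = ttl;              // e.g. 60000 ms
  this._sizeLimit = sizeLimit;  // e.g. 4, as in the test plan
  this._messages = [];
}

NotificationHistory.prototype.addMessage = function(message) {
  this._messages.push({when: Date.now(), message: message});
  this._trim();
};

NotificationHistory.prototype.getMessages = function() {
  this._trim();
  return this._messages.map(function(item) {
    return item.message;
  });
};

NotificationHistory.prototype._trim = function() {
  var now = Date.now();
  while (this._messages.length &&
         (this._messages.length > this._sizeLimit ||
          (now - this._messages[0].when) > this._ttl)) {
    this._messages.shift();
  }
};
```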
Test Plan:
- Used the web frontend to look at the buffer, reloaded over time, sent messages. Saw buffer size go up as I sent messages and fall after 60 seconds.
- Set size to 4 messages, sent a bunch of messages, saw the buffer size max out at 4 messages.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T12563
Differential Revision: https://secure.phabricator.com/D17707
Summary:
Ref T6915. This allows multiple notification servers to talk to each other:
- Every server has a list of every other server, including itself.
- Every server generates a unique fingerprint at startup, like "XjeHuPKPBKHUmXkB".
- Every time a server gets a message, it marks it with its personal fingerprint, then sends it to every other server.
- Servers do not retransmit messages that they've already seen (already marked with their fingerprint).
- Servers learn other servers' fingerprints after they send them a message, and stop sending them messages they've already seen.
This is pretty crude, and the first message to a cluster will transmit N^2 times, but N is going to be like 3 or 4 in even the most extreme cases for a very long time.
The fingerprinting stops cycles, and stops servers from sending themselves copies of messages.
We don't need to do anything more sophisticated than this because it's fine if some notifications get lost when a server dies. Clients will reconnect after a short period of time and life will continue.
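The retransmission rule, sketched very roughly (names and the exact message format here are illustrative, not the actual server code):

```
// Illustrative sketch only; names and the message format are made up.
// Each message carries the fingerprints of every server that has seen it.
var crypto = require('crypto');

// Generated once at startup, like "XjeHuPKPBKHUmXkB" above.
var FINGERPRINT = crypto.randomBytes(12).toString('base64');

function relay(message, peers, send) {
  message.fingerprints = message.fingerprints || [];

  if (message.fingerprints.indexOf(FINGERPRINT) !== -1) {
    // We've already seen and marked this message; never retransmit it.
    return;
  }
  message.fingerprints.push(FINGERPRINT);

  peers.forEach(function(peer) {
    // peer.fingerprint is learned from earlier inbound messages (unknown
    // until then). If this peer already marked the message, don't echo it.
    if (peer.fingerprint &&
        message.fingerprints.indexOf(peer.fingerprint) !== -1) {
      return;
    }
    send(peer, message);
  });
}
```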
Test Plan:
- Wrote two server configs.
- Started two servers.
- Told Phabricator about all four services (the client and admin service on each of the two servers).
- Loaded Chrome and Safari.
- Saw them connect to different servers.
- Sent messages in one, got notifications in the other (magic!).
- Saw the fingerprinting stuff work on the console, no infinite retransmission of messages, etc.
(This pretty much just worked when I ran it the first time so I probably missed something?)
{F1218835}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T6915
Differential Revision: https://secure.phabricator.com/D15711
Summary:
Fixes T10697. This finishes bringing the rest of the config up to cluster power levels.
Phabricator is now given an arbitrarily long list of notification servers.
Each Aphlict server is given an arbitrarily long list of ports to run services on.
Users are free to make them meet in the middle by proxying whatever they want to whatever else they want.
This should also accommodate clustering fairly easily in the future.
Also rewrote the status UI and changed a million other things. 🐗
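For reference, the Phabricator-side list ends up looking something like the `notification.servers` example below; hosts and ports are placeholders, and key names may differ slightly from the shipped version:

```
[
  {
    "type": "client",
    "host": "phabricator.example.com",
    "port": 22280,
    "protocol": "https"
  },
  {
    "type": "admin",
    "host": "127.0.0.1",
    "port": 22281,
    "protocol": "http"
  }
]
```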
Test Plan:
{F1217864}
{F1217865}
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15703
Summary:
Ref T10697. Currently, `aphlict` takes a ton of command line flags to configure exactly one admin server and exactly one client server.
I want to replace this with a config file. Additionally, I plan to support:
- arbitrary numbers of listening client ports;
- arbitrary numbers of listening admin ports;
- SSL on any port.
For now, just transform the arguments to look like they're a config file. In the future, I'll load from a config file instead.
This greater generality will allow you to do stuff like run separate HTTP and HTTPS admin ports if you really want. I don't think there's a ton of use for this, but it tends to make the code cleaner anyway, and there may be some weird cross-datacenter cases for it. Certainly, we undershot with the initial design, and lots of users want to terminate SSL in nginx and run only HTTP on this server.
(Some sort-of-plausible use cases are running separate HTTP and HTTPS client servers, if your Phabricator install supports both, or running multiple HTTPS servers with different certificates if you have a bizarre VPN.)
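The config file this is working toward looks roughly like the sketch below (key names are illustrative and may not match the final format exactly): a list of `servers`, each with its own type, port, listen address, and optional SSL material, plus log destinations.

```
{
  "servers": [
    {
      "type": "client",
      "port": 22280,
      "listen": "0.0.0.0",
      "ssl.key": null,
      "ssl.cert": null
    },
    {
      "type": "admin",
      "port": 22281,
      "listen": "127.0.0.1",
      "ssl.key": null,
      "ssl.cert": null
    }
  ],
  "logs": [
    {
      "path": "/var/log/aphlict.log"
    }
  ]
}
```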
Test Plan: Started Aphlict, connected to it, sent myself test notifications, viewed status page, reviewed logfile.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15700
Summary:
Fixes T7130. Fixes T7041. Fixes T7012.
The major change here is partitioning clients. In the Phacility cluster, being able to get a huge pile of instances on a single server -- without needing to run a process per instance -- is desirable.
To accomplish this, just bucket clients by the path they connect with. This will let us set client URIs to `/instancename/` and then route connections to a small set of servers. This degrades cleanly in the common case and has no effect on installs which don't do instancing.
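Bucketing is roughly this simple; the sketch below assumes a `ws`-style connection handler that can see the upgrade request, and all names and wiring are hypothetical:

```
// Illustrative only: bucket websocket clients by the path they connected
// with, so `/instancename/` gets its own listener set.
var url = require('url');

var listenersByPath = {};

function onConnection(websocket, request) {
  // For a non-instanced install this is just "/", so everything lands in
  // one bucket and behavior is unchanged.
  var path = url.parse(request.url).pathname;

  listenersByPath[path] = listenersByPath[path] || [];
  listenersByPath[path].push(websocket);
}

function broadcast(path, message) {
  // Only clients that connected with the same path receive the message.
  (listenersByPath[path] || []).forEach(function(websocket) {
    websocket.send(JSON.stringify(message));
  });
}
```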
Also fix two unrelated issues:
- Fix the timeouts, which were incorrectly initialized in `open()` (which is called during reconnect, so they reset on every reconnect). Instead, initialize them in the constructor, and cap the timeout at 5 minutes.
- Probably fix subscriptions, which were using a property whose default value is an object. Since objects are by-ref, all concrete instances shared the same property, so every user would be subscribed to everything (see the sketch below). Probably.
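For the subscription bug, this sketch shows the general shared-by-reference failure mode (class and property names are hypothetical, not the actual Javelin code):

```
// Illustrative only: an object used as a default property value is shared
// by reference across every instance.
function Listener() {}

// BUG: one object, shared by every instance.
Listener.prototype.subscriptions = {};

var alice = new Listener();
var bailey = new Listener();
alice.subscriptions['PHID-OBJECT-abcd'] = true;
console.log(bailey.subscriptions);  // { 'PHID-OBJECT-abcd': true }

// Fix: give each instance its own object in the constructor.
function FixedListener() {
  this.subscriptions = {};
}
```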
Test Plan:
- Hit notification status page, saw version bump and instance/path name.
- Saw instance/path name in client and server logs.
- Stopped server, saw reconnects after 2, 4, 16, ... seconds.
- Sent test notification; received test notification.
- Didn't explicitly test the subscription thing but it should be obvious by looking at `/notification/status/` shortly after a push.
Reviewers: joshuaspence, btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T7041, T7012, T7130
Differential Revision: https://secure.phabricator.com/D11769
Summary: Ref T7124. Inside the `'end'` handler, the counter was incremented on the handler's local `this` rather than on the global one.
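The shape of the bug and the fix, reconstructed as an isolated sketch (this assumes the handler is attached to the incoming request; names are hypothetical):

```
// Illustrative reconstruction; names are hypothetical. Inside a Node event
// handler, `this` is the emitter (here, the request), not the enclosing
// server object, so the counter was bumped on the wrong object.
var http = require('http');

function AdminServer() {
  this.messages_in = 0;

  var self = this;  // the fix: capture the server object explicitly
  this._server = http.createServer(function(request, response) {
    request.on('end', function() {
      // Buggy: this.messages_in++;  // `this` is the request here
      self.messages_in++;            // fixed: count on the server object
      response.end();
    });
    request.resume();
  });
}
```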
Test Plan: Sending test notifications did not increment the `messages.in` value before this patch, even though it should have. With the patch, sending test notifications does increment `messages.in`.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Maniphest Tasks: T7124
Differential Revision: https://secure.phabricator.com/D11648
Summary:
This enables JavaScript strict mode for the Aphlict server. In particular, it changes the behavior of NodeJS in the following ways:
- Any attempt to get or modify the global object will result in an error.
- `null` values of `this` will no longer evaluate to the global object, and primitive values of `this` will not be converted to wrapper objects.
- Writing or deleting properties which have their `writable` or `configurable` attributes set to false will now throw an error instead of failing silently.
- Adding a property to an object whose extensible attribute is false will also throw an error now.
- A function's `arguments` object is not writable, so attempting to reassign it (e.g. `arguments = [...]`) will now throw an error.
- `with(){}` statements are gone.
- Use of `eval` is effectively banned.
- `eval` and `arguments` are not allowed as variable or function identifiers in any scope.
- The identifiers `implements`, `interface`, `let`, `package`, `private`, `protected`, `public`, `static` and `yield` are all now reserved for future use (roll on ES6).
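These are the standard ES5 strict mode semantics; a file opts in with a single directive, for example:

```
// A file (or function) opts in with a single directive; in strict mode the
// operations below throw instead of failing silently.
'use strict';

var frozen = Object.freeze({version: 1});

try {
  frozen.version = 2;       // TypeError: assignment to a read-only property
} catch (err) {
  console.error(err.message);
}

try {
  delete Object.prototype;  // TypeError: deleting a non-configurable property
} catch (err) {
  console.error(err.message);
}
```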
Test Plan: Verified that Aphlict was still functional.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D11430
Summary:
This was broken in D11383. Basically, I had the `ws` module installed globally whilst testing, but the changes made do not work if the `ws` module is installed locally (i.e. in the `./support/aphlict/server/node_modules` directory). After poking around, it seems that this is due to the sandboxing that is done by `JX.require`.
A quick fix is to just //not// use `JX.require`, although you may have a better idea?
The error that is occurring is as follows:
```
<<< UNCAUGHT EXCEPTION! >>>
Error: Cannot find module 'ws'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at extra.require (/usr/src/phabricator/webroot/rsrc/externals/javelin/core/init_node.js:48:16)
at /usr/src/phabricator/support/aphlict/server/lib/AphlictClientServer.js:10:17
at Script.(anonymous function) [as runInNewContext] (vm.js:41:22)
at Object.JX.require (/usr/src/phabricator/webroot/rsrc/externals/javelin/core/init_node.js:58:6)
at Object.<anonymous> (/usr/src/phabricator/support/aphlict/server/aphlict_server.js:102:4)
at Module._compile (module.js:456:26)
>>> Server exited!
```
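A stripped-down reproduction of the resolution problem (this is only an illustration of the mechanism, not the actual `JX.require` internals): `vm.runInNewContext` only knows about the `require` you inject, and that `require` resolves modules relative to //its own// file rather than the file being evaluated.

```
// Stripped-down illustration of the mechanism only. The `require` injected
// into the sandbox belongs to another file (here, this one; in Aphlict's
// case, Javelin's init_node.js), so module lookup never walks
// support/aphlict/server/node_modules and require('ws') fails.
var vm = require('vm');

var sandbox = {
  require: require,  // resolves relative to *this* file, not the evaluated code
  console: console,
};

try {
  vm.runInNewContext("require('ws')", sandbox);
} catch (err) {
  console.error(err.message);  // "Cannot find module 'ws'", unless ws happens
}                              // to be installed on this file's search path
```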
Test Plan: Now able to start the Aphlict server.
Reviewers: joshuaspence
Reviewed By: joshuaspence
Subscribers: Korvin, epriestley
Maniphest Tasks: T6987
Differential Revision: https://secure.phabricator.com/D11425
Summary: Tidy the Aphlict server by splitting the functionality into two main modules, `AphlictClientServer` and `AphlictAdminServer`. There is still further tidying that could be done here, but I feel that this puts us in a much better place.
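Very roughly, the split gives each concern its own class; the skeleton below is only an illustration of the shape, not the real constructors or APIs.

```
// Shape of the split, illustration only; the real constructor signatures
// and responsibilities may differ.
var http = require('http');

// AphlictClientServer: holds websocket connections from browsers and pushes
// notifications out to them.
function AphlictClientServer(server) {
  this._server = server;  // an http.Server that gets upgraded to websockets
}
AphlictClientServer.prototype.listen = function(port) {
  this._server.listen(port);
};

// AphlictAdminServer: plain HTTP endpoint that Phabricator posts
// notifications to, plus the status page.
function AphlictAdminServer() {
  this._server = http.createServer(function(request, response) {
    response.writeHead(200, {'Content-Type': 'application/json'});
    response.end(JSON.stringify({status: 'ok'}));
  });
}
AphlictAdminServer.prototype.listen = function(port) {
  this._server.listen(port);
};
```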
Test Plan: Sent notifications via `/notification/status/`.
Reviewers: epriestley, #blessed_reviewers
Reviewed By: epriestley, #blessed_reviewers
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D11383