Enable "strict" mode for NodeJS
Summary:
In particular, this changes the behavior of NodeJS in the following ways:
- Assigning to an undeclared identifier (which would implicitly create a global variable) will now result in a `ReferenceError`.
- `null` and `undefined` values of `this` will no longer be coerced to the global object, and primitive values of `this` will not be converted to wrapper objects.
- Writing to or deleting properties which have their `writable` or `configurable` attributes set to `false` will now throw a `TypeError` instead of failing silently.
- Adding a property to an object whose `extensible` attribute is `false` will also throw a `TypeError` now.
- A function's `arguments` object is not assignable, so attempting to reassign it (`arguments = [...]`) will now throw an error.
- `with () {}` statements are gone.
- Use of `eval` is restricted: it can no longer introduce new variables into the surrounding scope.
- `eval` and `arguments` are not allowed as variable or function identifiers in any scope.
- The identifiers `implements`, `interface`, `let`, `package`, `private`, `protected`, `public`, `static` and `yield` are all now reserved for future use (roll on ES6).
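A few of these behaviors can be demonstrated directly. This is a minimal standalone sketch (plain Node.js, unrelated to the Aphlict code itself):

```javascript
'use strict';

// Writing to a non-writable property throws instead of failing silently.
var obj = {};
Object.defineProperty(obj, 'id', {value: 1, writable: false});

var threwOnWrite = false;
try {
  obj.id = 2;
} catch (err) {
  threwOnWrite = err instanceof TypeError;
}
console.log(threwOnWrite); // true

// Adding a property to a non-extensible object also throws.
var sealed = Object.preventExtensions({});

var threwOnAdd = false;
try {
  sealed.extra = true;
} catch (err) {
  threwOnAdd = err instanceof TypeError;
}
console.log(threwOnAdd); // true

// Assigning to an undeclared identifier throws rather than silently
// creating a global.
var threwOnUndeclared = false;
try {
  someUndeclaredIdentifier = 1;
} catch (err) {
  threwOnUndeclared = err instanceof ReferenceError;
}
console.log(threwOnUndeclared); // true
```

In sloppy mode, all three of these operations would succeed or fail silently instead of throwing.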
Test Plan: Verified that Aphlict was still functional.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D11430
2015-01-19 21:41:46 +01:00
'use strict';

var JX = require('./javelin').JX;

require('./AphlictListenerList');
require('./AphlictLog');
Namespace Aphlict clients by request path, plus other fixes
Summary:
Fixes T7130. Fixes T7041. Fixes T7012.
Major change here is partitioning clients. In the Phacility cluster, being able to get a huge pile of instances on a single server -- without needing to run a process per instance -- is desirable.
To accomplish this, just bucket clients by the path they connect with. This will let us set client URIs to `/instancename/` and then route connections to a small set of servers. This degrades cleanly in the common case and has no effect on installs which don't do instancing.
Also fix two unrelated issues:
- Fix the timeouts, which were incorrectly initializing in `open()` (which is called during reconnect, causing them to reset every time). Instead, initialize in the constructor. Cap timeout at 5 minutes.
- Probably fix subscriptions, which were using a property with an object definition. Since this is by-ref, all concrete instances of the object share the same property, so all users would be subscribed to everything. Probably.
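The subscription bug in the second fix is the classic shared-object-default pitfall; a minimal sketch (hypothetical `Listener` class, not the actual Javelin property machinery):

```javascript
// Buggy pattern: a default value defined as an object literal on the
// prototype is shared by reference, so every instance mutates the same
// map and all users end up "subscribed to everything".
function Listener() {}
Listener.prototype.subscriptions = {};

var a = new Listener();
var b = new Listener();
a.subscriptions['task-updates'] = true;
console.log('task-updates' in b.subscriptions); // true -- leaked across instances

// Fixed pattern: initialize a fresh object per concrete instance, e.g.
// in the constructor.
function FixedListener() {
  this.subscriptions = {};
}

var c = new FixedListener();
var d = new FixedListener();
c.subscriptions['task-updates'] = true;
console.log('task-updates' in d.subscriptions); // false
```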
Test Plan:
- Hit notification status page, saw version bump and instance/path name.
- Saw instance/path name in client and server logs.
- Stopped server, saw reconnects after 2, 4, 16, ... seconds.
- Sent test notification; received test notification.
- Didn't explicitly test the subscription thing but it should be obvious by looking at `/notification/status/` shortly after a push.
Reviewers: joshuaspence, btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T7041, T7012, T7130
Differential Revision: https://secure.phabricator.com/D11769
2015-02-16 20:31:15 +01:00
var url = require('url');
var util = require('util');
var WebSocket = require('ws');
JX.install('AphlictClientServer', {

  construct: function(server) {
Begin generalizing Aphlict server to prepare for clustering/sensible config file
Summary:
Ref T10697. Currently, `aphlict` takes a ton of command line flags to configure exactly one admin server and exactly one client server.
I want to replace this with a config file. Additionally, I plan to support:
- arbitrary numbers of listening client ports;
- arbitrary numbers of listening admin ports;
- SSL on any port.
For now, just transform the arguments to look like they're a config file. In the future, I'll load from a config file instead.
This greater generality will allow you to do stuff like run separate HTTP and HTTPS admin ports if you really want. I don't think there's a ton of use for this, but it tends to make the code cleaner anyway and there may be some weird cross-datacenter cases for it. Certainly, we undershot with the initial design and lots of users want to terminate SSL in nginx and run only HTTP on this server.
(Some sort-of-plausible use cases are running separate HTTP and HTTPS client servers, if your Phabricator install supports both, or running multiple HTTPS servers with different certificates if you have a bizarre VPN.)
Test Plan: Started Aphlict, connected to it, sent myself test notifications, viewed status page, reviewed logfile.
Reviewers: chad
Reviewed By: chad
Maniphest Tasks: T10697
Differential Revision: https://secure.phabricator.com/D15700
2016-04-13 18:35:24 +02:00
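As a rough illustration of the direction described above, the internal argument-to-config transformation might produce something shaped like this (all field names here are illustrative guesses, not the final format):

```javascript
// Hypothetical config shape: arbitrary lists of client and admin servers,
// each with its own port, bind address, and optional SSL material,
// instead of exactly one of each.
var config = {
  servers: [
    {type: 'client', port: 22280, listen: '0.0.0.0', ssl: null},
    {type: 'admin', port: 22281, listen: '127.0.0.1', ssl: null}
  ]
};

console.log(config.servers.length); // 2
```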
    server.on('request', JX.bind(this, this._onrequest));

    this._server = server;
    this._lists = {};
    this._adminServers = [];
  },
  properties: {
    logger: null,
    adminServers: null
  },
  members: {

    _server: null,
    _lists: null,
    getListenerList: function(instance) {
      if (!this._lists[instance]) {
        this._lists[instance] = new JX.AphlictListenerList(instance);
      }

      return this._lists[instance];
    },
    getHistory: function(age) {
      var results = [];

      var servers = this.getAdminServers();
      for (var ii = 0; ii < servers.length; ii++) {
        var messages = servers[ii].getHistory(age);
        for (var jj = 0; jj < messages.length; jj++) {
          results.push(messages[jj]);
        }
      }

      return results;
    },
    log: function() {
      var logger = this.getLogger();
      if (!logger) {
        return;
      }

      logger.log.apply(logger, arguments);

      return this;
    },
    _onrequest: function(request, response) {
      // The websocket code upgrades connections before they get here, so
      // this only handles normal HTTP connections. We just fail them with
      // a 501 response.
      response.writeHead(501);
      response.end('HTTP/501 Use Websockets\n');
    },
    _parseInstanceFromPath: function(path) {
      // If there's no "~" marker in the path, it's not an instance name.
      // Users sometimes configure nginx or Apache to proxy based on the
      // path.
      if (path.indexOf('~') === -1) {
        return 'default';
      }

      var instance = path.split('~')[1];

      // Remove any "/" characters.
      instance = instance.replace(/\//g, '');
      if (!instance.length) {
        return 'default';
      }

      return instance;
    },
    listen: function() {
      var self = this;
      var server = this._server.listen.apply(this._server, arguments);
      var wss = new WebSocket.Server({server: server});

      // This function checks for upgradeReq which is only available in
      // ws2 by default, not ws3. See T12755 for more information.
      wss.on('connection', function(ws, request) {
        if ('upgradeReq' in ws) {
          request = ws.upgradeReq;
        }

        var path = url.parse(request.url).pathname;
        var instance = self._parseInstanceFromPath(path);

        var listener = self.getListenerList(instance).addListener(ws);

        function log() {
          self.log(
            util.format('<%s>', listener.getDescription()) +
            ' ' +
            util.format.apply(null, arguments));
        }

        log('Connected from %s.', ws._socket.remoteAddress);

        ws.on('message', function(data) {
          log('Received message: %s', data);

          var message;
          try {
            message = JSON.parse(data);
          } catch (err) {
            log('Message is invalid: %s', err.message);
            return;
          }

          switch (message.command) {
            case 'subscribe':
              log(
                'Subscribed to: %s',
                JSON.stringify(message.data));
              listener.subscribe(message.data);
              break;

            case 'unsubscribe':
              log(
                'Unsubscribed from: %s',
                JSON.stringify(message.data));
              listener.unsubscribe(message.data);
              break;

            case 'replay':
              var age = message.data.age || 60000;
              var min_age = (new Date().getTime() - age);

              var old_messages = self.getHistory(min_age);
              for (var ii = 0; ii < old_messages.length; ii++) {
                var old_message = old_messages[ii];

                if (!listener.isSubscribedToAny(old_message.subscribers)) {
                  continue;
                }

                try {
                  listener.writeMessage(old_message);
                } catch (error) {
                  break;
                }
              }
              break;

            case 'ping':
              var pong = {
                type: 'pong'
              };

              try {
                listener.writeMessage(pong);
              } catch (error) {
                // Ignore any issues here, we'll clean up elsewhere.
              }
              break;

            default:
              log(
                'Unrecognized command "%s".',
                message.command || '<undefined>');
          }
        });

        ws.on('close', function() {
          self.getListenerList(instance).removeListener(listener);
          log('Disconnected.');
        });
      });
    }
  }

});