Enable "strict" mode for NodeJS
Summary:
In particular, enabling strict mode changes the behavior of NodeJS in the following ways (illustrated by the sketch after this message):
- Any attempt to create an implicit global, for example by assigning to an undeclared variable, will result in an error.
- `null` and `undefined` values of `this` will no longer be coerced to the global object, and primitive values of `this` will not be converted to wrapper objects.
- Writing to or deleting properties which have their `writable` or `configurable` attributes set to `false` will now throw an error instead of failing silently.
- Adding a property to an object whose `extensible` attribute is `false` will also now throw an error.
- A function's `arguments` object is not writable, so attempting to assign to it (`arguments = [...]`) will now throw an error.
- `with(){}` statements are gone.
- Use of `eval` is restricted: it can no longer introduce new variables into the surrounding scope.
- `eval` and `arguments` are not allowed as variable or function identifiers in any scope.
- The identifiers `implements`, `interface`, `let`, `package`, `private`, `protected`, `public`, `static` and `yield` are all now reserved for future use (roll on ES6).
Test Plan: Verified that Aphlict was still functional.
Reviewers: #blessed_reviewers, epriestley
Reviewed By: #blessed_reviewers, epriestley
Subscribers: Korvin, epriestley
Differential Revision: https://secure.phabricator.com/D11430
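For illustration, a minimal sketch of a few of these behaviors (not part of the change itself; the variable names are invented):

```
'use strict';

// Assigning to an undeclared variable no longer creates an implicit
// global; it throws a ReferenceError instead of failing silently.
try {
  undeclaredVariable = 1;
} catch (err) {
  console.log(err.name); // "ReferenceError"
}

// Writing to a non-writable property throws a TypeError.
var frozen = Object.freeze({key: 'value'});
try {
  frozen.key = 'other';
} catch (err) {
  console.log(err.name); // "TypeError"
}

// In a plain function call, `this` is undefined rather than the
// global object.
function whatIsThis() {
  return this;
}
console.log(whatIsThis()); // undefined
```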

'use strict';

var JX = require('./javelin').JX;

require('./AphlictListenerList');
require('./AphlictLog');

Namespace Aphlict clients by request path, plus other fixes
Summary:
Fixes T7130. Fixes T7041. Fixes T7012.
Major change here is partitioning clients. In the Phacility cluster, being able to get a huge pile of instances on a single server -- without needing to run a process per instance -- is desirable.
To accomplish this, just bucket clients by the path they connect with. This will let us set client URIs to `/instancename/` and then route connections to a small set of servers. This degrades cleanly in the common case and has no effect on installs which don't do instancing.
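As a toy sketch of this bucketing scheme (not the real implementation -- see `getListenerList` below; the instance names are invented):

```
// Each distinct request path gets its own bucket of clients,
// created lazily on first use.
var url = require('url');

var lists = {};
function getBucket(requestURL) {
  var path = url.parse(requestURL).pathname; // e.g. '/instancename/'
  if (!lists[path]) {
    lists[path] = []; // stands in for a JX.AphlictListenerList
  }
  return lists[path];
}

// Two clients of the same instance share a bucket; a client of
// another instance gets its own.
getBucket('ws://example.com/turtle/').push('client-1');
getBucket('ws://example.com/turtle/').push('client-2');
getBucket('ws://example.com/dragon/').push('client-3');
```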
Also fix two unrelated issues:
- Fix the timeouts, which were incorrectly being initialized in `open()` (which is called during reconnect, causing them to reset every time). Instead, initialize them in the constructor, and cap the timeout at 5 minutes (see the sketch after this list).
- Probably fix subscriptions, which were declared as a property whose default value was an object. Since object defaults are shared by reference, all concrete instances of the class shared the same object, so all users would be subscribed to everything. Probably. (A second sketch, after the end of this message, illustrates this pitfall.)
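A minimal sketch of the capped backoff described in the first fix, assuming a doubling delay (all names here are illustrative; this is not the actual client code):

```
function ReconnectTimer() {
  // Initialized once, in the constructor, so that reconnects do not
  // reset the delay back to its starting value.
  this._delay = 2000;
}

ReconnectTimer.prototype.nextDelay = function() {
  var delay = this._delay;
  // Double the delay on each attempt, capped at five minutes.
  this._delay = Math.min(this._delay * 2, 5 * 60 * 1000);
  return delay;
};
```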
Test Plan:
- Hit notification status page, saw version bump and instance/path name.
- Saw instance/path name in client and server logs.
- Stopped server, saw reconnects after 2, 4, 16, ... seconds.
- Sent test notification; received test notification.
- Didn't explicitly test the subscription thing but it should be obvious by looking at `/notification/status/` shortly after a push.
Reviewers: joshuaspence, btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T7041, T7012, T7130
Differential Revision: https://secure.phabricator.com/D11769
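The subscription pitfall from the second fix, sketched in plain JavaScript rather than Javelin (names invented):

```
// A default value that is an object is shared by reference across
// every instance of the class.
function Listener() {}
Listener.prototype.subscriptions = {}; // one object, shared by all

var a = new Listener();
var b = new Listener();
a.subscriptions['PHID-1'] = true;
console.log(b.subscriptions['PHID-1']); // true -- b sees a's subscription

// The fix: give each instance its own object in the constructor.
function FixedListener() {
  this.subscriptions = {};
}
```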

var url = require('url');
var util = require('util');
var WebSocket = require('ws');

JX.install('AphlictClientServer', {

  construct: function(server) {
    this.setLogger(new JX.AphlictLog());
    this._server = server;
    this._lists = {};
  },

  members: {
    _server: null,
    _lists: null,

    getListenerList: function(path) {
      if (!this._lists[path]) {
        this._lists[path] = new JX.AphlictListenerList(path);
      }
      return this._lists[path];
    },

    listen: function() {
      var self = this;
      var server = this._server.listen.apply(this._server, arguments);
      var wss = new WebSocket.Server({server: server});

      wss.on('connection', function(ws) {
        var path = url.parse(ws.upgradeReq.url).pathname;
        var listener = self.getListenerList(path).addListener(ws);

        function log() {
          self.getLogger().log(
            util.format('<%s>', listener.getDescription()) +
            ' ' +
            util.format.apply(null, arguments));
        }

        log('Connected from %s.', ws._socket.remoteAddress);

        ws.on('message', function(data) {
          log('Received message: %s', data);

          var message;
          try {
            message = JSON.parse(data);
          } catch (err) {
            log('Message is invalid: %s', err.message);
            return;
          }

          switch (message.command) {
            case 'subscribe':
              log(
                'Subscribed to: %s',
                JSON.stringify(message.data));
              listener.subscribe(message.data);
              break;

            case 'unsubscribe':
              log(
                'Unsubscribed from: %s',
                JSON.stringify(message.data));
              listener.unsubscribe(message.data);
              break;

            default:
              log(
                'Unrecognized command "%s".',
                message.command || '<undefined>');
          }
        });

        ws.on('close', function() {
          self.getListenerList(path).removeListener(listener);
          log('Disconnected.');
        });

        wss.on('error', function(err) {
          log('Error: %s', err.message);
        });

      });
    },

  },

  properties: {
    logger: null,
  }
});
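For reference, a minimal, hypothetical usage sketch of this class (the HTTP server setup, port, and address are assumptions, not part of this file):

```
var http = require('http');

// Wrap a plain Node HTTP server; AphlictClientServer attaches a
// websocket server to it in listen().
var httpServer = http.createServer(function(request, response) {
  response.writeHead(200);
  response.end();
});

var server = new JX.AphlictClientServer(httpServer);

// Arguments are forwarded to the underlying server's listen().
server.listen(22280, '127.0.0.1');
```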