Fix 654 Support Alternative URLs with Production/Staging/Development Hosts Options #656

Status: Open. Wants to merge 18 commits into base: master.
44 changes: 44 additions & 0 deletions .github/workflows/s3-bucket.yml
@@ -0,0 +1,44 @@
name: S3 Bucket Test

on:
  push:
  workflow_dispatch:

jobs:
  test-on-os-node-matrix:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        node: [18, 20, 22]
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      S3_BUCKET: ${{ secrets.S3_BUCKET }}

    name: Test S3 Bucket - Node ${{ matrix.node }} on ${{ matrix.os }}

    steps:
      - name: Checkout ${{ github.ref }}
        uses: actions/checkout@v4

      - name: Setup node ${{ matrix.node }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}

      - name: NPM Install
        run: npm install

      - name: Show Environment Info
        run: |
          printenv
          node --version
          npm --version

      - name: Run S3 Tests (against ${{ env.S3_BUCKET }} bucket)
        run: |
          npm run bucket ${{ env.S3_BUCKET }}
          npm run test:s3
        if: ${{ env.S3_BUCKET != '' }}

109 changes: 73 additions & 36 deletions README.md
@@ -100,25 +100,29 @@ This is a guide to configuring your module to use node-pre-gyp.
- Add `@mapbox/node-pre-gyp` to `dependencies`
- Add `aws-sdk` as a `devDependency`
- Add a custom `install` script
- Declare a `binary` object
- Declare a `binary` object and specify a `host` object within it

This looks like:

```js
{
  "dependencies": {
    "@mapbox/node-pre-gyp": "1.x"
  },
  "devDependencies": {
    "aws-sdk": "2.x"
  },
  "scripts": {
    "install": "node-pre-gyp install --fallback-to-build"
  },
  "binary": {
    "module_name": "your_module",
    "module_path": "./lib/binding/",
    "host": {
      "endpoint": "https://your_module.s3-us-west-1.amazonaws.com"
    }
  }
}
```

For a full example see [node-addon-examples's package.json](https://github.com/springmeyer/node-addon-example/blob/master/package.json).
@@ -148,9 +152,9 @@ The location your native module is placed after a build. This should be an empty

Note: This property supports variables based on [Versioning](#versioning).

###### host (and host.endpoint)

An object with at least the single key `endpoint`, defining the remote location where you've published tarball binaries (must be `https` not `http`).

It is highly recommended that you use Amazon S3. The reasons are:

@@ -162,13 +166,21 @@ Why then not require S3? Because while some applications using node-pre-gyp need

It should also be mentioned that there is an optional and entirely separate npm module called [node-pre-gyp-github](https://github.com/bchr02/node-pre-gyp-github) which is intended to complement node-pre-gyp and be installed along with it. It provides the ability to store and publish your binaries within your repositories GitHub Releases if you would rather not use S3 directly. Installation and usage instructions can be found [here](https://github.com/bchr02/node-pre-gyp-github), but the basic premise is that instead of using the ```node-pre-gyp publish``` command you would use ```node-pre-gyp-github publish```.

This looks like:
```js
{
"binary": {
"host": {
      "endpoint": "https://some-bucket.s3.us-east-1.amazonaws.com"
}
}
}
```

##### Other optional S3 properties of the `host` object

If you are not using a standard S3 path like `bucket_name.s3(.-)region.amazonaws.com`, you might get an error on `publish` because node-pre-gyp extracts the region and bucket from the `host` endpoint URL. For example, you may have an on-premises S3-compatible storage server, or may have configured a specific DNS name redirecting to an S3 endpoint. In these cases, you can explicitly set the `region` and `bucket` properties to tell node-pre-gyp to use these values instead of guessing from the endpoint. The following values can be used in the `host` object:

###### bucket

@@ -182,6 +194,21 @@ Your S3 server region.

Set `s3ForcePathStyle` to true if the endpoint url should not be prefixed with the bucket name. If false (default), the server endpoint would be constructed as `bucket_name.your_server.com`.

For example, using an alternate S3-compatible host:

```js
{
"binary": {
"host": {
"endpoint": "https://play.min.io",
"bucket": "node-pre-gyp-production",
"region": "us-east-1",
"s3ForcePathStyle": true
}
}
}
```
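The difference between the two addressing styles can be sketched as a small helper (illustrative only; `objectUrl` is a hypothetical function, not part of node-pre-gyp):

```javascript
// Illustrative sketch of the two S3 addressing styles (not node-pre-gyp code).
function objectUrl(endpoint, bucket, key, s3ForcePathStyle) {
  const u = new URL(endpoint);
  if (s3ForcePathStyle) {
    // path-style: the endpoint host is unchanged and the bucket goes in the path
    return `${u.origin}/${bucket}/${key}`;
  }
  // virtual-hosted style (default): the bucket name prefixes the host
  return `${u.protocol}//${bucket}.${u.host}/${key}`;
}
```

With `s3ForcePathStyle: true`, `https://play.min.io` plus bucket `node-pre-gyp-production` yields `https://play.min.io/node-pre-gyp-production/...`; without it, the bucket would be prepended to the host name.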

##### The `binary` object has optional properties

###### remote_path
@@ -309,28 +336,38 @@ If a binary was not available for a given platform and `--fallback-to-build` w

#### 9) One more option

It may be that you want to work with multiple S3 buckets: one for development, one for staging and one for production. Such an arrangement makes it less likely to accidentally overwrite a production binary. It also allows the production environment to have more restrictive permissions than development or staging while still enabling publishing when developing and testing.

To use this option, set `staging_host` and/or `development_host` using settings similar to those used for `host`.
```js
{
  "binary": {
    "host": {
      "endpoint": "https://dns.pointed.example.com",
      "bucket": "obscured-production-bucket",
      "region": "us-east-1",
      "s3ForcePathStyle": true
    },
    "staging_host": {
      "endpoint": "https://my-staging-bucket.s3.us-east-1.amazonaws.com"
    },
    "development_host": {
      "endpoint": "https://play.min.io",
      "bucket": "node-pre-gyp-development",
      "region": "us-east-1",
      "s3ForcePathStyle": true
    }
  }
}
```

Once a development and/or staging host is defined, the "publish" and "unpublish" commands default to the lowest of the alternate hosts (`development_host`, or `staging_host` if no development host is present). The "install" and "info" commands default to the production host (specified by `host`).

To explicitly choose a host, use the command-line options `--s3_host=development`, `--s3_host=staging` or `--s3_host=production`, or set the environment variable `node_pre_gyp_s3_host` to `development`, `staging` or `production`. Note that the environment variable has priority over the command line.

This setup allows installing from development or staging by specifying `--s3_host=development` or `--s3_host=staging`. And it requires specifying `--s3_host=production` in order to publish to, or unpublish from, production, making accidental errors less likely.

## Node-API Considerations

7 changes: 7 additions & 0 deletions lib/install.js
@@ -233,3 +233,10 @@ function install(gyp, argv, callback) {
});
}
}

// Setting the environment variable node_pre_gyp_mock_s3 to any value
// enables intercepting outgoing http requests to s3 (using nock) and
// serving them from a mocked S3 file system (using mock-aws-s3).
if (process.env.node_pre_gyp_mock_s3) {
  require('./mock/http')();
}
6 changes: 0 additions & 6 deletions lib/main.js
@@ -72,12 +72,6 @@ function run() {
return;
}

// set binary.host when appropriate. host determines the s3 target bucket.
const target = prog.setBinaryHostProperty(command.name);
if (target && ['install', 'publish', 'unpublish', 'info'].indexOf(command.name) >= 0) {
log.info('using binary.host: ' + prog.package_json.binary.host);
}

prog.commands[command.name](command.args, function(err) {
if (err) {
log.error(command.name + ' error');
39 changes: 39 additions & 0 deletions lib/mock/http.js
@@ -0,0 +1,39 @@
'use strict';

module.exports = exports = http_mock;

const fs = require('fs');
const path = require('path');
const nock = require('nock');
const os = require('os');

const log = require('npmlog');
log.disableProgress(); // disable the display of a progress bar
log.heading = 'node-pre-gyp'; // differentiate node-pre-gyp's logs from npm's

function http_mock() {
  log.warn('mocking http requests to s3');

  const basePath = `${os.tmpdir()}/mock`;

  nock(new RegExp('([a-z0-9]+[.])*s3[.]us-east-1[.]amazonaws[.]com'))
    .persist()
    .get(() => true) // a function that always returns true is a catch-all for nock
    .reply((uri) => {
      const bucket = 'npg-mock-bucket';
      const mockDir = uri.indexOf(bucket) === -1 ? `${basePath}/${bucket}` : basePath;
      const filepath = path.join(mockDir, uri.replace(new RegExp('%2B', 'g'), '+'));

      try {
        fs.accessSync(filepath, fs.constants.R_OK);
      } catch (e) {
        return [404, 'not found\n'];
      }

      // the mock s3 functions write to disk; return what is read from it
      return [200, fs.createReadStream(filepath)];
    });
}
42 changes: 42 additions & 0 deletions lib/mock/s3.js
@@ -0,0 +1,42 @@
'use strict';

module.exports = exports = s3_mock;

const AWSMock = require('mock-aws-s3');
const os = require('os');

const log = require('npmlog');
log.disableProgress(); // disable the display of a progress bar
log.heading = 'node-pre-gyp'; // differentiate node-pre-gyp's logs from npm's

function s3_mock() {
  log.warn('mocking s3 operations');

  AWSMock.config.basePath = `${os.tmpdir()}/mock`;

  const s3 = AWSMock.S3();

  // wrapped-callback maker: fs calls return an error code of ENOENT,
  // but AWS.S3 returns NotFound, so translate before invoking the callback.
  const wcb = (fn) => (err, ...args) => {
    if (err && err.code === 'ENOENT') {
      err.code = 'NotFound';
    }
    return fn(err, ...args);
  };

  return {
    listObjects(params, callback) {
      return s3.listObjects(params, wcb(callback));
    },
    headObject(params, callback) {
      return s3.headObject(params, wcb(callback));
    },
    deleteObject(params, callback) {
      return s3.deleteObject(params, wcb(callback));
    },
    putObject(params, callback) {
      return s3.putObject(params, wcb(callback));
    }
  };
}
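The `wcb` wrapper is self-contained and can be exercised without any AWS machinery; a minimal sketch of the error-code translation it performs:

```javascript
// Decorator that rewrites fs-style ENOENT errors into the NotFound
// code that callers of the real AWS.S3 API expect.
const wcb = (fn) => (err, ...args) => {
  if (err && err.code === 'ENOENT') {
    err.code = 'NotFound';
  }
  return fn(err, ...args);
};

// Example: a callback wrapped this way sees NotFound instead of ENOENT.
let seenCode = 'unset';
const wrapped = wcb((err, data) => {
  seenCode = err ? err.code : null;
});
wrapped({ code: 'ENOENT' });
```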