Commit

Merge pull request #289 from rvagg/1.0.0-wip
1.0.0-wip
juliangruber committed Mar 17, 2015
2 parents acb715c + d689bb8 commit 79787f6
Showing 19 changed files with 33 additions and 1,227 deletions.
3 changes: 1 addition & 2 deletions .travis.yml
@@ -19,5 +19,4 @@ notifications:
- [email protected]
- [email protected]
- [email protected]
script: npm run-script alltests

script: npm test
111 changes: 12 additions & 99 deletions README.md
@@ -118,7 +118,6 @@ db.put('name', 'LevelUP', function (err) {
* <a href="#createReadStream"><code>db.<b>createReadStream()</b></code></a>
* <a href="#createKeyStream"><code>db.<b>createKeyStream()</b></code></a>
* <a href="#createValueStream"><code>db.<b>createValueStream()</b></code></a>
* <a href="#createWriteStream"><code>db.<b>createWriteStream()</b></code></a>

### Special operations exposed by LevelDOWN

@@ -127,6 +126,9 @@ db.put('name', 'LevelUP', function (err) {
* <a href="#destroy"><code><b>leveldown.destroy()</b></code></a>
* <a href="#repair"><code><b>leveldown.repair()</b></code></a>

### Special Notes
* <a href="#writeStreams">What happened to <code><b>db.createWriteStream()</b></code></a>


--------------------------------------------------------
<a name="ctor"></a>
@@ -189,7 +191,7 @@ var db = levelup(memdown)

* `'compression'` *(boolean, default: `true`)*: If `true`, all *compressible* data will be run through the Snappy compression algorithm before being stored. Snappy is very fast, so you are unlikely to gain much speed by disabling it; leave this on unless you have a good reason to turn it off.

* `'cacheSize'` *(number, default: `8 * 1024 * 1024`)*: The size (in bytes) of the in-memory [LRU](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) cache with frequently used uncompressed block contents.

* `'keyEncoding'` and `'valueEncoding'` *(string, default: `'utf8'`)*: The encoding of the keys and values passed through Node.js' `Buffer` implementation (see [Buffer#toString()](http://nodejs.org/docs/latest/api/buffer.html#buffer_buf_tostring_encoding_start_end)).
<p><code>'utf8'</code> is the default encoding for both keys and values so you can simply pass in strings and expect strings from your <code>get()</code> operations. You can also pass <code>Buffer</code> objects as keys and/or values and conversion will be performed.</p>
@@ -251,7 +253,7 @@ db.get('foo', function (err, value) {

Encoding of the `key` object will adhere to the `'keyEncoding'` option provided to <a href="#ctor"><code>levelup()</code></a>, although you can provide alternative encoding settings in the options for `get()` (it's recommended that you stay consistent in your encoding of keys and values in a single store).

LevelDB will by default fill the in-memory LRU Cache with data from a call to `get()`. Disabling this is done by setting `fillCache` to `false`.

--------------------------------------------------------
<a name="del"></a>
@@ -455,104 +457,14 @@ db.createReadStream({ keys: false, values: true })
```

--------------------------------------------------------
<a name="createWriteStream"></a>
### db.createWriteStream([options])

A **WriteStream** can be obtained by calling the `createWriteStream()` method. The resulting stream is a complete Node.js-style [Writable Stream](http://nodejs.org/docs/latest/api/stream.html#stream_writable_stream) which accepts objects with `'key'` and `'value'` pairs on its `write()` method.

The WriteStream will buffer writes and submit them as `batch()` operations, grouping writes that occur *within the same tick*.

```js
var ws = db.createWriteStream()

ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})

ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
```

The standard `write()`, `end()`, `destroy()` and `destroySoon()` methods are implemented on the WriteStream. `'drain'`, `'error'`, `'close'` and `'pipe'` events are emitted.

You can specify encodings both for the whole stream and individual entries:

To set the encoding for the whole stream, provide an options object as the first parameter to `createWriteStream()` with `'keyEncoding'` and/or `'valueEncoding'`.

To set the encoding for an individual entry:

```js
writeStream.write({
key : new Buffer([1, 2, 3])
, value : { some: 'json' }
, keyEncoding : 'binary'
, valueEncoding : 'json'
})
```

#### write({ type: 'del' })

If individual `write()` operations are performed with a `'type'` property of `'del'`, they will be passed on as `'del'` operations to the batch.

```js
var ws = db.createWriteStream()

ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})

ws.write({ type: 'del', key: 'name' })
ws.write({ type: 'del', key: 'dob' })
ws.write({ type: 'put', key: 'spouse', value: 'Kim Young-sook' })
ws.write({ type: 'del', key: 'occupation' })
ws.end()
```

#### db.createWriteStream({ type: 'del' })

If the *WriteStream* is created with a `'type'` option of `'del'`, all `write()` operations will be interpreted as `'del'`, unless explicitly specified as `'put'`.

```js
var ws = db.createWriteStream({ type: 'del' })

ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})

ws.write({ key: 'name' })
ws.write({ key: 'dob' })
// but it can be overridden
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ key: 'occupation' })
ws.end()
```

#### Pipes and Node Stream compatibility

A ReadStream can be piped directly to a WriteStream, allowing for easy copying of an entire database. A simple `copy()` operation is included in LevelUP that performs exactly this on two open databases:

```js
function copy (srcdb, dstdb, callback) {
srcdb.createReadStream().pipe(dstdb.createWriteStream()).on('close', callback)
}
```
<a name="writeStreams"></a>
#### What happened to `db.createWriteStream`?

The ReadStream is also [fstream](https://github.com/isaacs/fstream)-compatible which means you should be able to pipe to and from fstreams. So you can serialize and deserialize an entire database to a directory where keys are filenames and values are their contents, or even into a *tar* file using [node-tar](https://github.com/isaacs/node-tar). See the [fstream functional test](https://github.com/rvagg/node-levelup/blob/master/test/functional/fstream-test.js) for an example. *(Note: I'm not really sure there's a great use-case for this but it's a fun example and it helps to harden the stream implementations.)*
`db.createWriteStream()` has been removed in order to provide a smaller and more maintainable core. It primarily existed to create symmetry with `db.createReadStream()` but through much [discussion](https://github.com/rvagg/node-levelup/issues/199), removing it was the best course of action.

KeyStreams and ValueStreams can be treated like standard streams of raw data. If `'keyEncoding'` or `'valueEncoding'` is set to `'binary'` the `'data'` events will simply be standard Node `Buffer` objects straight out of the data store.
The main driver for this was performance. While `db.createReadStream()` performs well under most use cases, the performance of `db.createWriteStream()` was highly dependent on the application's keys and values. Thus we can't provide a standard implementation that suits everyone; instead we encourage the community to create `write-stream` implementations that solve the broad spectrum of use cases.

Check out the implementations that the community has already produced [here](https://github.com/rvagg/node-levelup/wiki/Modules#write-streams).

--------------------------------------------------------
<a name='approximateSize'></a>
@@ -614,7 +526,7 @@ require('leveldown').destroy('./huge.db', function (err) { console.log('done!')

> If a DB cannot be opened, you may attempt to call this method to resurrect as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function on a database that contains important information.
You will find information on the *repair* operation in the *LOG* file inside the store directory.

A `repair()` can also be used to perform a compaction of the LevelDB log into table files.

@@ -719,6 +631,7 @@ LevelUP is only possible due to the excellent work of the following contributors
<tr><th align="left">Matteo Collina</th><td><a href="https://github.com/mcollina">GitHub/mcollina</a></td><td><a href="https://twitter.com/matteocollina">Twitter/@matteocollina</a></td></tr>
<tr><th align="left">Pedro Teixeira</th><td><a href="https://github.com/pgte">GitHub/pgte</a></td><td><a href="https://twitter.com/pgte">Twitter/@pgte</a></td></tr>
<tr><th align="left">James Halliday</th><td><a href="https://github.com/substack">GitHub/substack</a></td><td><a href="https://twitter.com/substack">Twitter/@substack</a></td></tr>
<tr><th align="left">Jarrett Cruger</th><td><a href="https://github.com/jcrugzz">GitHub/jcrugzz</a></td><td><a href="https://twitter.com/jcrugzz">Twitter/@jcrugzz</a></td></tr>
</tbody></table>

### Windows
13 changes: 2 additions & 11 deletions lib/levelup.js
@@ -18,7 +18,6 @@ var EventEmitter = require('events').EventEmitter
, InitializationError = require('./errors').InitializationError

, ReadStream = require('./read-stream')
, WriteStream = require('./write-stream')
, util = require('./util')
, Batch = require('./batch')
, codec = require('./codec')
@@ -239,9 +238,8 @@ LevelUP.prototype.put = function (key_, value_, options, callback) {

callback = getCallback(options, callback)

if (key_ === null || key_ === undefined
|| value_ === null || value_ === undefined)
return writeError(this, 'put() requires key and value arguments', callback)
if (key_ === null || key_ === undefined)
return writeError(this, 'put() requires a key argument', callback)

if (maybeError(this, options, callback))
return
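
The relaxed check above is a behavioural change: 1.0.0 rejects only a missing key, and leaves `null`/`undefined` *values* to the underlying store. A self-contained sketch of the before/after validation (the helper names are hypothetical, for illustration only):

```javascript
// Hypothetical helpers mirroring the validation change in this commit.

// Pre-1.0.0 behaviour: both key and value were required.
function validatePutPre (key, value) {
  if (key === null || key === undefined
    || value === null || value === undefined)
    return new Error('put() requires key and value arguments')
  return null
}

// 1.0.0 behaviour: only the key is required.
function validatePutNext (key, value) {
  if (key === null || key === undefined)
    return new Error('put() requires a key argument')
  return null
}
```

So a call like `db.put('foo', null, callback)` now passes argument validation instead of producing a write error.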
@@ -440,12 +438,6 @@ LevelUP.prototype.createValueStream = function (options) {
return this.createReadStream(extend(options, { keys: false, values: true }))
}

LevelUP.prototype.writeStream =
LevelUP.prototype.createWriteStream = function (options) {
//XXX is extend needed here?
return new WriteStream(extend(options), this)
}

LevelUP.prototype.toString = function () {
return 'LevelUP'
}
@@ -457,7 +449,6 @@ function utilStatic (name) {
}

module.exports = LevelUP
module.exports.copy = util.copy
// DEPRECATED: prefer accessing LevelDOWN for this: require('leveldown').destroy()
module.exports.destroy = utilStatic('destroy')
// DEPRECATED: prefer accessing LevelDOWN for this: require('leveldown').repair()
8 changes: 0 additions & 8 deletions lib/util.js
@@ -23,13 +23,6 @@ var extend = require('xtend')
return eo
}())

function copy (srcdb, dstdb, callback) {
srcdb.readStream()
.pipe(dstdb.writeStream())
.on('close', callback ? callback : function () {})
.on('error', callback ? callback : function (err) { throw err })
}

function getOptions (levelup, options) {
var s = typeof options == 'string' // just an encoding
if (!s && options && options.encoding && !options.valueEncoding)
@@ -84,7 +77,6 @@ function isDefined (v) {

module.exports = {
defaultOptions : defaultOptions
, copy : copy
, getOptions : getOptions
, getLevelDOWN : getLevelDOWN
, dispatchError : dispatchError