Version 2 #126

Open · wants to merge 24 commits into base: release
167 changes: 138 additions & 29 deletions README.md
<!-- TOC -->

- [About](#about)
- [Server](#server)
- [Usage](#usage)
- [Batch calls](#batch-calls)
- [Authentication](#authentication)
- [Client](#client)
- [Usage](#usage-1)
- [Options](#options)
- [Parallelizing requests](#parallelizing-requests)
- [UI Frameworks](#ui-frameworks)
- [License](#license)

<!-- /TOC -->

## About

Feathers-batch is a library for Feathers applications that batches multiple service calls into a single request, reducing network overhead and improving response times. It performs authentication once per batch and integrates with both REST and Socket.io, so you get faster, more efficient data transfer without writing any batching logic yourself.

- Reduced Network Overhead: Most browsers limit concurrent HTTP requests to 6. By batching multiple service requests into a single batch call, the library significantly reduces the number of API requests made from the client to the server. This leads to a reduction in network overhead and improves the overall efficiency of data transfer.

- Improved Performance: Fewer API requests and reduced network latency result in faster response times. Batching helps optimize the performance of applications, especially when dealing with multiple parallel requests.

- Optimized Authentication: The library allows batching of authenticated requests, performing the authentication step only once for the entire batch. This reduces redundant authentication requests and enhances the processing speed of batched requests while maintaining security measures.

- Simplified Code: The library abstracts the complexity of batching multiple service requests, making it easier for developers to manage and optimize API calls without having to manually handle batching logic.

`feathers-batch` consists of two parts:

- The server side [batch service](#service) to execute batch calls
- The client side [batch client](#client) to collect parallel requests from a [Feathers client](https://docs.feathersjs.com/api/client.html) into a batch service request

```bash
npm install feathers-batch --save
```

## Server

The `BatchService` is a custom service provided by feathers-batch that handles the batched calls sent by the client. When a client makes a batch request, the `BatchService` executes all of the batched service calls together in a single operation.
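
Conceptually, the batch service resolves every call with [`Promise.allSettled`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled) and returns the settled results. The following sketch illustrates the idea with mock services and a hypothetical `calls` payload of `[method, servicePath, ...args]` tuples — it is not the library's actual implementation:

```js
// Mock services stand in for real Feathers services
const services = {
  users: { get: async (id) => ({ id, name: 'Alice' }) },
  messages: { find: async (params = {}) => ({ total: 0, data: [] }) }
};

// Sketch of what a batch service's `create` method does:
// run all calls in parallel and report each outcome individually.
async function createBatch ({ calls }) {
  const settled = await Promise.allSettled(
    calls.map(([method, path, ...args]) => services[path][method](...args))
  );
  // Normalize rejections so the client receives plain objects
  return settled.map(({ status, value, reason }) =>
    status === 'fulfilled'
      ? { status, value }
      : { status, reason: { message: reason.message } }
  );
}
```

Because `Promise.allSettled` never short-circuits, one failing call does not prevent the other calls in the batch from completing.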

### Usage

The return value will be the information as returned by [Promise.allSettled](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled):

```js
[
  {
    "status": "fulfilled",
    "value": { /* user object returned by app.service('users').get(1) */ }
  }, {
    "status": "fulfilled",
    "value": { /* page returned by app.service('messages').find({ query: { userId } }) */ }
  }
]
```
If an error happened:

```js
[
  {
    "status": "fulfilled",
    "value": { /* user object returned by app.service('users').get(1) */ }
  }, {
    "status": "rejected",
    "reason": { /* error JSON or object with error message */ }
  }
]
```

### Authentication

feathers-batch allows for optimizing authenticated requests within a batch by performing the authentication step only once. This reduces redundant authentication requests and improves processing efficiency, ensuring both security and performance in batch scenarios.

Add the [authenticate hook](https://docs.feathersjs.com/api/authentication/hook.html) to the batch service `create` method. Authentication will be called on the batch service and its results will be available in all of the batched requests.

```js
const { authenticate } = require('@feathersjs/authentication').hooks;

app.service('batch').hooks({
  before: {
    create: [authenticate('jwt')]
  }
});
```

## Client

The client-side module of feathers-batch enables [Feathers client](https://docs.feathersjs.com/api/client.html) applications to optimize API requests from the browser by automatically batching multiple parallel requests into a single call. This works for any transport mechanism (REST, Socket.io etc.) and is especially valuable because most browsers restrict the number of concurrent HTTP requests to a single domain to around six connections. By batching requests, feathers-batch works around this limitation, reducing network overhead and improving performance.
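
The collection mechanism can be sketched as follows: calls made within a short time window are queued, then flushed together as one request. This is a conceptual illustration, not the library's actual `BatchManager`:

```js
// Sketch of a batch collector: queue calls for `timeout` ms,
// then send them all at once and settle each caller's promise.
class BatchCollector {
  constructor (flush, timeout = 25) {
    this.flush = flush; // function that sends the collected calls as one batch
    this.timeout = timeout;
    this.queue = [];
    this.timer = null;
  }

  add (call) {
    return new Promise((resolve, reject) => {
      this.queue.push({ call, resolve, reject });
      // Start the collection window on the first queued call
      if (!this.timer) {
        this.timer = setTimeout(() => this.drain(), this.timeout);
      }
    });
  }

  async drain () {
    const batch = this.queue;
    this.queue = [];
    this.timer = null;
    // `flush` returns one settled result per queued call
    const results = await this.flush(batch.map((entry) => entry.call));
    results.forEach((result, i) => {
      if (result.status === 'fulfilled') {
        batch[i].resolve(result.value);
      } else {
        batch[i].reject(result.reason);
      }
    });
  }
}
```

Every caller still receives an ordinary promise, which is why batching stays transparent to application code.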

### Usage

To enable batching, first configure `batchClient`. The `batchClient` function extends every REST/Socket.io service with batching capability, so all network services inherit batching behavior. The options provided to `batchClient` create a default `BatchManager` that is used for all services within the Feathers client.

```js
// If your module loader supports the `browser` package.json field
import { batchClient } from 'feathers-batch';
// Alternatively
import { batchClient } from 'feathers-batch/client';

const client = feathers();
// configure Feathers client here

// Use `batchClient` to enable batching for all services.
// It should be configured *after* any other application level hooks.
client.configure(batchClient({
  batchService: 'batch',
  // Other options for batching
}));
```

Next, you can use `batchHook` to configure batching on individual services if required. The `batchHook` allows you to fine-tune batching behavior for specific services independently; by applying it selectively, you can customize the batching process to the requirements of each service.

```js
import { batchHook } from 'feathers-batch';

const usersService = client.service('users');
const messagesService = client.service('messages');

// Create a hook with custom configuration
const batch = batchHook({
  batchService: 'batch',
  // Other options for batching
});

usersService.hooks({
  before: {
    find: [batch],
    get: [batch],
    create: [batch],
    update: [batch],
    patch: [batch],
    remove: [batch]
  }
});

// You can share batches across services
messagesService.hooks({
  before: {
    find: [batch],
    get: [batch],
    create: [batch],
    update: [batch],
    patch: [batch],
    remove: [batch]
  }
});
```

With `batchHook`, you can customize batching for specific services without affecting other services, or you can share the same batch across specific services. You can even send batches to a different backend endpoint. The settings provided to `batchHook` will override the global settings of `batchClient`.

Once configured, you can continue to make regular service calls using the Feathers client. The client will automatically collect parallel requests and combine them into a single batch, ensuring efficient use of available connections and optimizing data transfer.

### Options

When configuring feathers-batch, you have several options available to fine-tune the behavior of the batching process:

- **batchService** (required): The name of the batch service registered on the server-side. This option specifies the endpoint to which the batched requests will be sent.

- **exclude** (optional): An array of service names that should be excluded from batching. Alternatively, you can provide an async function that takes the context as an argument to dynamically decide whether to exclude a particular service call from batching. This option is useful when certain services should not be included in the batch for specific scenarios.

- **dedupe** (optional): A boolean indicating whether requests should be deduplicated in the batch. Alternatively, you can provide an async function that takes the context as an argument to determine whether to deduplicate a particular service call. Deduplication helps avoid redundant requests within the batch.

- **timeout** (optional): The number of milliseconds to wait when collecting parallel requests before creating a batch. The default value is 25. Adjusting the timeout can help balance the trade-off between batch size and responsiveness.

```js
client.configure(batchClient({
  batchService: 'batch',
  exclude: ['authentication'], // Exclude the 'authentication' service from batching
  dedupe: false, // Disable deduplication of requests in the batch
  timeout: 50 // Set the batch collection timeout to 50 milliseconds
}));
```

```js
client.configure(batchClient({
  batchService: 'batch',
  exclude: (context) => {
    // Exclude the 'admin' service from batching
    return context.path === 'admin';
  },
  dedupe: async (context) => {
    // Deduplicate 'users' service find requests within the batch
    return context.path === 'users' && context.method === 'find';
  },
  timeout: 50 // Set the batch collection timeout to 50 milliseconds
}));
```

By using functions for `exclude` and `dedupe`, you gain flexibility in customizing which service calls to include or exclude from the batch, making it easy to handle different scenarios based on your application's needs. You can also use params to control batching on each individual service call.

```js
// Exclude service calls individually with params
await client.service('users').find({ batch: { exclude: true } });
await client.service('admin').get(1, { batch: { exclude: (context) => true } });

// Deduplicate service calls individually with params
await client.service('messages').get(1, { batch: { dedupe: true } });
await client.service('notifications').find({ batch: { dedupe: (context) => true } });
```

By setting these options on each service call, you can control which requests should be excluded from batching, ensuring they are processed individually. Additionally, you can deduplicate certain service calls to avoid redundancy within the batch, tailoring the batching behavior to suit specific requirements and further optimize the performance of your Feathers application.
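
Conceptually, deduplication keys each queued call by its service, method and arguments, so identical parallel calls share a single result. The following sketch illustrates the idea — it is an assumption about the mechanism, not the library's implementation:

```js
// Sketch: collapse identical calls in a batch under a stable key.
function dedupeCalls (calls) {
  const seen = new Map(); // key -> index into `unique`
  const unique = [];
  const mapping = calls.map((call) => {
    // Serialize the call as its deduplication key
    const key = JSON.stringify(call);
    if (!seen.has(key)) {
      seen.set(key, unique.length);
      unique.push(call);
    }
    return seen.get(key);
  });
  // `mapping[i]` points each original call at its shared result
  return { unique, mapping };
}
```

Only the `unique` calls are sent over the wire; the `mapping` fans the results back out to every caller.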


### Parallelizing Requests

Sequential requests are not combined into a batch. When multiple service calls are made one after another using `await`, each request is processed individually, because feathers-batch only collects parallel requests into a single batch call. Just use services as you normally would:

```js
// This works as expected
const user = await client.service('users').get(userId);
const messages = await client.service('messages').find({
query: { userId }
});
```

If the requests are not dependent on each other and you want to batch them, [Promise.all](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) needs to be used:
When you use [Promise.all](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) to parallelize requests, feathers-batch automatically detects these concurrent requests and collects them into a batch. This is also how `feathers-schema` and other resolvers handle promises, which means batching works in all resolvers and loaders.

```js
const [user, messages] = await Promise.all([
  client.service('users').get(userId),
  client.service('messages').find({
    query: { userId }
  })
]);
```

## UI Frameworks

Feathers-batch seamlessly integrates with UI libraries like React and Vue, requiring no additional configuration. When components that use a Feathers client make service requests simultaneously, feathers-batch automatically captures these requests and combines them into a single batch. This happens transparently behind the scenes, optimizing data retrieval without any manual intervention.

```js
// Given that each User component fetches its own user, the
// app will automatically batch all user requests
<ListGroup>
  {userIds.map((userId) => {
    return (
      <ListItem key={userId}>
        <User userId={userId} />
      </ListItem>
    );
  })}
</ListGroup>
```

## License

Copyright (c) 2020 Feathers contributors