
docs: docs for fs related services #2397

Merged 6 commits on Jun 1, 2023
2 changes: 1 addition & 1 deletion core/examples/object.rs
@@ -23,7 +23,7 @@ use opendal::Operator;
use opendal::Result;

/// Visit [`opendal::services`] for more service related config.
/// Visit [`opendal::Object`] for more object level APIs.
/// Visit [`opendal::Operator`] for more operator level APIs.
#[tokio::main]
async fn main() -> Result<()> {
let _ = tracing_subscriber::fmt()
2 changes: 1 addition & 1 deletion core/src/layers/prometheus.rs
@@ -53,7 +53,7 @@ use crate::*;
/// use prometheus::Encoder;
///
/// /// Visit [`opendal::services`] for more service related config.
/// /// Visit [`opendal::Object`] for more object level APIs.
/// /// Visit [`opendal::Operator`] for more operator level APIs.
/// #[tokio::main]
/// async fn main() -> Result<()> {
/// // Pick a builder and configure it.
48 changes: 1 addition & 47 deletions core/src/services/ftp/backend.rs
@@ -46,53 +46,7 @@ use crate::*;

/// FTP and FTPS services support.
///
/// # Capabilities
///
/// This service can be used to:
///
/// - [x] stat
/// - [x] read
/// - [x] write
/// - [x] create_dir
/// - [x] delete
/// - [ ] copy
/// - [ ] rename
/// - [x] list
/// - [ ] ~~scan~~
/// - [ ] ~~presign~~
/// - [ ] blocking
///
/// # Configuration
///
/// - `endpoint`: Set the endpoint for connection
/// - `root`: Set the work directory for backend
/// - `user`: Set the login user
/// - `password`: Set the login password
///
/// You can refer to [`FtpBuilder`]'s docs for more information.
///
/// # Example
///
/// ## Via Builder
///
/// ```no_run
/// use anyhow::Result;
/// use opendal::services::Ftp;
/// use opendal::Object;
/// use opendal::Operator;
///
/// #[tokio::main]
/// async fn main() -> Result<()> {
/// // create backend builder
/// let mut builder = Ftp::default();
///
/// builder.endpoint("127.0.0.1");
///
/// let op: Operator = Operator::new(builder)?.finish();
/// let _obj: Object = op.object("test_file");
/// Ok(())
/// }
/// ```
#[doc = include_str!("docs.md")]
#[derive(Default)]
pub struct FtpBuilder {
endpoint: Option<String>,
44 changes: 44 additions & 0 deletions core/src/services/ftp/docs.md
@@ -0,0 +1,44 @@
## Capabilities

This service can be used to:

- [x] stat
- [x] read
- [x] write
- [x] create_dir
- [x] delete
- [ ] copy
- [ ] rename
- [x] list
- [ ] ~~scan~~
- [ ] ~~presign~~
- [ ] blocking

## Configuration

- `endpoint`: Set the endpoint for connection
- `root`: Set the work directory for backend
- `user`: Set the login user
- `password`: Set the login password

You can refer to [`FtpBuilder`]'s docs for more information.

## Example

### Via Builder

```rust
use anyhow::Result;
use opendal::services::Ftp;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
let mut builder = Ftp::default();

builder.endpoint("127.0.0.1");

let op: Operator = Operator::new(builder)?.finish();
Ok(())
}
```
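Once the operator is built, the capabilities listed above correspond to ordinary `Operator` calls. A minimal round-trip sketch, assuming an FTP server reachable at `127.0.0.1` that permits writes (`test_file` is a placeholder path):

```rust
use anyhow::Result;
use opendal::services::Ftp;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
    let mut builder = Ftp::default();
    builder.endpoint("127.0.0.1");

    let op: Operator = Operator::new(builder)?.finish();

    // write, stat, read, and delete are all checked in the list above.
    op.write("test_file", "hello").await?;
    let meta = op.stat("test_file").await?;
    println!("size: {}", meta.content_length());
    let bs = op.read("test_file").await?;
    assert_eq!(bs, b"hello");
    op.delete("test_file").await?;
    Ok(())
}
```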
102 changes: 1 addition & 101 deletions core/src/services/hdfs/backend.rs
@@ -34,107 +34,7 @@ use crate::*;

/// [Hadoop Distributed File System (HDFS™)](https://hadoop.apache.org/) support.
///
/// A distributed file system that provides high-throughput access to application data.
///
/// # Capabilities
///
/// This service can be used to:
///
/// - [x] stat
/// - [x] read
/// - [x] write
/// - [x] create_dir
/// - [x] delete
/// - [ ] copy
/// - [ ] rename
/// - [x] list
/// - [ ] ~~scan~~
/// - [ ] ~~presign~~
/// - [x] blocking
///
/// # Differences with webhdfs
///
/// [Webhdfs][crate::services::Webhdfs] is powered by hdfs's RESTful HTTP API.
///
/// # Features
///
/// HDFS support requires enabling the `services-hdfs` feature.
///
/// # Configuration
///
/// - `root`: Set the work dir for backend.
/// - `name_node`: Set the name node for backend.
///
/// Refer to [`HdfsBuilder`]'s public API docs for more information.
///
/// # Environment
///
/// HDFS needs some environment variables set correctly.
///
/// - `JAVA_HOME`: the path to the Java home; it can be found via `java -XshowSettings:properties -version`
/// - `HADOOP_HOME`: the path to the Hadoop home; opendal relies on this env to discover hadoop jars and set `CLASSPATH` automatically.
///
/// Most of the time, setting `JAVA_HOME` and `HADOOP_HOME` is enough. But there are some edge cases:
///
/// - If you meet errors like the following:
///
/// ```shell
/// error while loading shared libraries: libjvm.so: cannot open shared object file: No such file or directory
/// ```
///
/// Java's libs are not included in the pkg-config search path; please set `LD_LIBRARY_PATH`:
///
/// ```shell
/// export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}
/// ```
///
/// The path of `libjvm.so` may differ, so keep an eye on it.
///
/// - If you meet errors like the following:
///
/// ```shell
/// (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
/// ```
///
/// then `CLASSPATH` is not set correctly or your hadoop installation is incorrect.
///
/// To set `CLASSPATH`:
/// ```shell
/// export CLASSPATH=$(find $HADOOP_HOME -iname "*.jar" | xargs echo | tr ' ' ':'):${CLASSPATH}
/// ```
///
/// # Example
///
/// ### Via Builder
///
/// ```no_run
/// use std::sync::Arc;
///
/// use anyhow::Result;
/// use opendal::services::Hdfs;
/// use opendal::Object;
/// use opendal::Operator;
///
/// #[tokio::main]
/// async fn main() -> Result<()> {
/// // Create hdfs backend builder.
/// let mut builder = Hdfs::default();
/// // Set the name node for hdfs.
/// builder.name_node("hdfs://127.0.0.1:9000");
/// // Set the root for hdfs, all operations will happen under this root.
/// //
/// // NOTE: the root must be an absolute path.
/// builder.root("/tmp");
///
/// // `Accessor` provides the low-level APIs; we normally use `Operator`.
/// let op: Operator = Operator::new(builder)?.finish();
///
/// // Create an object handle to start operation on object.
/// let _: Object = op.object("test_file");
///
/// Ok(())
/// }
/// ```
#[doc = include_str!("docs.md")]
#[derive(Debug, Default)]
pub struct HdfsBuilder {
root: Option<String>,
97 changes: 97 additions & 0 deletions core/src/services/hdfs/docs.md
@@ -0,0 +1,97 @@
A distributed file system that provides high-throughput access to application data.

## Capabilities

This service can be used to:

- [x] stat
- [x] read
- [x] write
- [x] create_dir
- [x] delete
- [ ] copy
- [ ] rename
- [x] list
- [ ] ~~scan~~
- [ ] ~~presign~~
- [x] blocking

## Differences with webhdfs

[Webhdfs][crate::services::Webhdfs] is powered by hdfs's RESTful HTTP API.

## Features

HDFS support requires enabling the `services-hdfs` feature.

## Configuration

- `root`: Set the work dir for backend.
- `name_node`: Set the name node for backend.

Refer to [`HdfsBuilder`]'s public API docs for more information.

## Environment

HDFS needs some environment variables set correctly.

- `JAVA_HOME`: the path to the Java home; it can be found via `java -XshowSettings:properties -version`
- `HADOOP_HOME`: the path to the Hadoop home; opendal relies on this env to discover hadoop jars and set `CLASSPATH` automatically.

Most of the time, setting `JAVA_HOME` and `HADOOP_HOME` is enough. But there are some edge cases:

- If you meet errors like the following:

```shell
error while loading shared libraries: libjvm.so: cannot open shared object file: No such file or directory
```

Java's libs are not included in the pkg-config search path; please set `LD_LIBRARY_PATH`:

```shell
export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}
```

The path of `libjvm.so` may differ, so keep an eye on it.

- If you meet errors like the following:

```shell
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
```

then `CLASSPATH` is not set correctly or your hadoop installation is incorrect.

To set `CLASSPATH`:
```shell
export CLASSPATH=$(find $HADOOP_HOME -iname "*.jar" | xargs echo | tr ' ' ':'):${CLASSPATH}
```
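Because a missing variable typically surfaces only later as one of the load errors above, a small pre-flight check can fail fast. A sketch in Rust, using the variables documented above:

```rust
// Warn early if the env vars the hdfs backend depends on are missing.
fn check_hdfs_env() {
    for var in ["JAVA_HOME", "HADOOP_HOME"] {
        if std::env::var_os(var).is_none() {
            eprintln!("warning: {var} is not set; libjvm or the hadoop jars may not be found");
        }
    }
}
```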

## Example

### Via Builder

```rust
use anyhow::Result;
use opendal::services::Hdfs;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
// Create hdfs backend builder.
let mut builder = Hdfs::default();
// Set the name node for hdfs.
builder.name_node("hdfs://127.0.0.1:9000");
// Set the root for hdfs, all operations will happen under this root.
//
// NOTE: the root must be an absolute path.
builder.root("/tmp");

// `Accessor` provides the low-level APIs; we normally use `Operator`.
let op: Operator = Operator::new(builder)?.finish();

Ok(())
}
```
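HDFS also has `blocking` checked in the capabilities above. A sketch of synchronous usage via `Operator::blocking`, assuming the same name node and root as the async example:

```rust
use anyhow::Result;
use opendal::services::Hdfs;
use opendal::Operator;

fn main() -> Result<()> {
    let mut builder = Hdfs::default();
    builder.name_node("hdfs://127.0.0.1:9000");
    builder.root("/tmp");

    // HDFS supports blocking natively, so no extra layer is required.
    let op = Operator::new(builder)?.finish().blocking();

    op.write("test_file", "hello")?;
    let bs = op.read("test_file")?;
    assert_eq!(bs, b"hello");
    op.delete("test_file")?;
    Ok(())
}
```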
52 changes: 1 addition & 51 deletions core/src/services/ipfs/backend.rs
@@ -34,57 +34,7 @@ use crate::*;

/// IPFS file system support based on [IPFS HTTP Gateway](https://docs.ipfs.tech/concepts/ipfs-gateway/).
///
/// # Capabilities
///
/// This service can be used to:
///
/// - [x] stat
/// - [x] read
/// - [ ] ~~write~~
/// - [ ] ~~create_dir~~
/// - [ ] ~~delete~~
/// - [ ] ~~copy~~
/// - [ ] ~~rename~~
/// - [x] list
/// - [ ] ~~scan~~
/// - [ ] presign
/// - [ ] blocking
///
/// # Configuration
///
/// - `root`: Set the work directory for backend
/// - `endpoint`: Customizable endpoint setting
///
/// You can refer to [`IpfsBuilder`]'s docs for more information.
///
/// # Example
///
/// ## Via Builder
///
/// ```no_run
/// use anyhow::Result;
/// use opendal::services::Ipfs;
/// use opendal::Object;
/// use opendal::Operator;
///
/// #[tokio::main]
/// async fn main() -> Result<()> {
/// // create backend builder
/// let mut builder = Ipfs::default();
///
/// // set the endpoint for OpenDAL
/// builder.endpoint("https://ipfs.io");
/// // set the root for OpenDAL
/// builder.root("/ipfs/QmPpCt1aYGb9JWJRmXRUnmJtVgeFFTJGzWFYEEX7bo9zGJ");
///
/// let op: Operator = Operator::new(builder)?.finish();
///
/// // Create an object handle to start operation on object.
/// let _: Object = op.object("test_file");
///
/// Ok(())
/// }
/// ```
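Mirroring the FTP and HDFS examples above, an `Operator`-only rewrite of this removed example might read as follows (a sketch; the endpoint and root are taken from the removed doc comment, and `test_file` is a placeholder path):

```rust
use anyhow::Result;
use opendal::services::Ipfs;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
    // create backend builder
    let mut builder = Ipfs::default();

    // set the endpoint and root for OpenDAL
    builder.endpoint("https://ipfs.io");
    builder.root("/ipfs/QmPpCt1aYGb9JWJRmXRUnmJtVgeFFTJGzWFYEEX7bo9zGJ");

    let op: Operator = Operator::new(builder)?.finish();

    // This gateway backend is read-only: stat, read, and list are
    // supported; write, delete, and friends are not.
    let meta = op.stat("test_file").await?;
    println!("size: {}", meta.content_length());

    Ok(())
}
```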
#[doc = include_str!("docs.md")]
#[derive(Default, Clone, Debug)]
pub struct IpfsBuilder {
endpoint: Option<String>,