
Commit
feat/extern: add backend traits for extern support
MichaelHirn committed Nov 30, 2015
1 parent 958a9a7 commit f3d5017
Showing 5 changed files with 43 additions and 12 deletions.
6 changes: 4 additions & 2 deletions Cargo.toml

@@ -1,8 +1,10 @@
[package]
name = "collenchyma"
description = "fast, parallel, backend-agnostic computation on any hardware"
-version = "0.0.2"
-authors = ["Michael Hirn <[email protected]>"]
+version = "0.0.3"
+authors = [
+    "Michael Hirn <[email protected]>",
+    "Maximilian Goisser <[email protected]>"]

repository = "https://github.com/autumnai/collenchyma"
homepage = "https://github.com/autumnai/collenchyma"
17 changes: 9 additions & 8 deletions README.md
@@ -11,12 +11,16 @@ code for the machine you deploy to. Collenchyma does not require OpenCL or Cuda
on the machine and automatically falls back to the native host CPU, making your
application highly flexible and fast to build.

+Collenchyma was started at [Autumn][autumn] to support the Machine Intelligence
+Framework [Leaf][leaf] with backend-agnostic, state-of-the-art performance.
+
* __Parallelizing Performance__<br/>
The biggest benefit to using special purpose devices for computations, such as
GPUs, is the ability to greater parallelize operations. Collenchyma makes it
-easy to parallelize computation on your machine, using all the available cores.
+Collenchyma makes it easy to parallelize computations on your machine, putting
+all the available cores of your CPUs/GPUs to use.
-Collenchyma also provides optimized operations for the most popular operations,
-such as BLAS, that you can use right away to speed up your application.
+Highly-optimized computation libraries like open-BLAS and cuDNN can be dropped
+in.

* __Easily Extensible__<br/>
Writing custom operations for GPU execution becomes easier with Collenchyma, as
Expand All @@ -25,15 +29,12 @@ overhead. Therefore extending the Backend becomes a straight-forward process of
defining the kernels and mounting them on the Backend.

* __Butter-smooth Builds__<br/>
A Collenchyma does not require the installation of various frameworks and
As Collenchyma does not require the installation of various frameworks and
libraries, it will not add significantly to the build time of your application.
Collenchyma checks at run-time if these frameworks can be used and gracefully
falls back to the standard, native host CPU if they are not.
No long and painful build procedures for you or your users.

Collenchyma was started at [Autumn][autumn] to support the Machine Intelligence
Framework [Leaf][leaf] with backend-agnostic, state-of-the-art performance.

For more information,

* see Collenchyma's [Documentation](http://autumnai.github.io/collenchyma)
Expand All @@ -53,7 +54,7 @@ For more information,
If you're using Cargo, just add Collenchyma to your Cargo.toml:

[dependencies]
collenchyma = "0.0.2"
collenchyma = "0.0.3"

If you're using [Cargo Edit][cargo-edit], you can call:

Expand Down
22 changes: 21 additions & 1 deletion src/backend.rs
@@ -41,7 +41,7 @@

use error::Error;
use framework::IFramework;
-use frameworks::{Native, OpenCL};
+use frameworks::{Native, OpenCL, Cuda};
use device::{IDevice, DeviceType};
use libraries::blas::IBlas;
@@ -97,6 +97,26 @@ impl<F: IFramework + Clone> Backend<F> {
    }
}

+/// Describes a Backend.
+///
+/// Serves as a marker trait and helps for extern implementation.
+pub trait IBackend {
+    /// Represents the Framework of a Backend.
+    type F: IFramework + Clone;
+}
+
+impl IBackend for Backend<Native> {
+    type F = Native;
+}
+
+impl IBackend for Backend<OpenCL> {
+    type F = OpenCL;
+}
+
+impl IBackend for Backend<Cuda> {
+    type F = Cuda;
+}
+
impl IBlas<f32> for Backend<OpenCL> {
    type B = ::frameworks::opencl::Program;
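Editor's note: the new IBackend trait is what makes the "extern support" of this commit work — a crate outside collenchyma can be generic over whichever backend the caller constructed, instead of naming a concrete Backend<F>. A minimal sketch of that downstream pattern, assuming `backend` is a public module of the crate as the paths in this diff suggest (the function itself is hypothetical, not part of the commit):

// Sketch only: a downstream crate bounds on IBackend, so the same code
// accepts Backend<Native>, Backend<OpenCL>, or Backend<Cuda> — all three
// implement IBackend above.
extern crate collenchyma;

use collenchyma::backend::IBackend;

/// Hypothetical downstream function. The associated type B::F names the
/// concrete framework at compile time; the body never needs to know it.
fn run_on_any_backend<B: IBackend>(_backend: &B) {
    // Backend-agnostic operations would be mounted and invoked here.
}

Because IBackend is a marker trait, it adds no methods of its own; its value is purely in letting trait bounds say "some backend" abstractly.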
2 changes: 1 addition & 1 deletion src/libraries/blas.rs
@@ -16,7 +16,7 @@ use memory::MemoryType;
use shared_memory::SharedMemory;
use binary::IBinary;
use device::DeviceType;
-use num::traits::Float;
+use libraries::Float;

/// Provides the functionality for a backend to support Basic Linear Algebra Subprogram operations.
pub trait IBlas<F: Float> {
8 changes: 8 additions & 0 deletions src/libraries/mod.rs
@@ -18,8 +18,16 @@
//! own backend-agnostic operations, too.
//!
//! [program]: ../program/index.html
+//! [blas]: http://www.netlib.org/blas/
+//! [cudnn]: https://developer.nvidia.com/cudnn

+pub use self::numeric_helpers::Float;
+
pub mod blas;
+/// Describes the Library numeric types and traits.
+pub mod numeric_helpers {
+    pub use num::traits::*;
+}

#[derive(Debug)]
/// Defines a high-level library Error.
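Editor's note: the numeric_helpers re-export pairs with the blas.rs change above — downstream code can now bound on Float through collenchyma itself rather than taking a direct dependency on the `num` crate. A hedged sketch of that usage (the helper function is illustrative, not from this commit):

extern crate collenchyma;

// Float reaches downstream crates via collenchyma's re-export of
// num::traits, so no direct `num` dependency is needed.
use collenchyma::libraries::numeric_helpers::Float;

/// Illustrative helper, generic over f32/f64 (any Float implementor).
fn scale_in_place<F: Float>(values: &mut [F], factor: F) {
    for v in values.iter_mut() {
        *v = *v * factor;
    }
}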
