Clean stable_diffusion and add missing .md language blocks (#2635)
zachgk committed Jun 6, 2023
1 parent fda82ee commit a45473c
Showing 48 changed files with 126 additions and 124 deletions.
2 changes: 1 addition & 1 deletion android/README.md
@@ -14,7 +14,7 @@ The minimum API level for DJL Android is 26.

In Gradle, you can add the 5 modules to your dependencies:

-```
+```groovy
dependencies {
implementation platform("ai.djl:bom:0.22.1")
18 changes: 9 additions & 9 deletions android/pytorch-native/README.md
@@ -7,7 +7,7 @@ Follow this setup guide in order to run DJL apps on an Android device.

## Prerequisites

-```
+```sh
# Run the following command (assume you have python3 installed already)
export PYTHON=python3

@@ -20,7 +20,7 @@ This will install the android-sdk on your machine as well as python3.

### Linux (Ubuntu 20.04) android-sdk install

-```
+```sh
# install python and Android sdk
sudo apt-get install android-sdk python3

@@ -33,7 +33,7 @@ sudo chown -R ubuntu:ubuntu $ANDROID_HOME

### Mac android-sdk install

-```
+```sh
# install python and Android sdk
brew install android-sdk

@@ -48,7 +48,7 @@ sudo chown -R $USER $ANDROID_HOME

Find the latest command-line-only tools: [https://developer.android.com/studio#downloads](https://developer.android.com/studio#downloads:~:text=Command%20line%20tools%20only)

-```
+```sh
# create directory for Android command line tools
mkdir -p $ANDROID_HOME/cmdline-tools
cd $ANDROID_HOME/cmdline-tools
@@ -68,7 +68,7 @@ mv cmdline-tools tools

See GitHub Actions to ensure the latest NDK_VERSION: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml](https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml)

-```
+```sh
# set Android NDK version and install it
export NDK_VERSION=21.1.6352462
echo "y" | sudo ${ANDROID_HOME}/cmdline-tools/tools/bin/sdkmanager --install "ndk;${NDK_VERSION}"
@@ -78,7 +78,7 @@ echo "y" | sudo ${ANDROID_HOME}/cmdline-tools/tools/bin/sdkmanager --install "ndk;${NDK_VERSION}"

See: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml](https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml)

-```
+```sh
# cd into whatever directory holds your djl directory
export PYTORCH_VERSION=1.13.0
export ANDROID_NDK=${ANDROID_HOME}/ndk/${NDK_VERSION}
@@ -106,7 +106,7 @@ See: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml](https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml)

This command unzips all the files we zipped in the previous code block. It puts them into the directories where the DJL build expects to find them when it compiles.

-```
+```sh
cd ../djl/engines/pytorch/pytorch-native

# to avoid downloading PyTorch native from S3, manually unzip PyTorch native
@@ -132,7 +132,7 @@ See: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/publ

The final command in this code block, `./gradlew pTML` (Gradle shorthand for `publishToMavenLocal`), is optional. It stores a local copy of the DJL snapshot in your local Maven repository. If you skip it, the app will pull the DJL snapshot release from Sonatype.

-```
+```sh
# move into djl/android directory
cd ../../../android

@@ -153,7 +153,7 @@ See: [https://github.com/deepjavalibrary/djl-demo/tree/master/android/pytorch_an

From Android Studio, with an emulator turned on, run the following commands:

-```
+```sh
cd djl-demo/android/pytorch_android/style_transfer_cyclegan
./gradlew iD
```
6 changes: 3 additions & 3 deletions docker/README.md
@@ -9,7 +9,7 @@ You can use the [docker file](https://github.com/deepjavalibrary/djl/blob/master
Please note that this Docker image will only work with Windows Server 2019 by default. If you want it to work with other
versions of Windows, you need to pass the version as an argument as follows:

-```
+```bash
docker build --build-arg version=<YOUR_VERSION>
```

@@ -20,7 +20,7 @@ This docker file is a modification of the one provided by NVIDIA.
By default this sets up a container using Ubuntu 18.04 and CUDA 11.6.2. You can build the container with other versions as follows,
but keep in mind the TensorRT software requirements outlined [here](https://github.com/NVIDIA/TensorRT#prerequisites):

-```
+```bash
docker build --build-arg OS_VERSION=<YOUR_VERSION> --build-arg CUDA_VERSION=<YOUR_VERSION>
```

@@ -29,4 +29,4 @@ To run the container, we recommend using `nvidia-docker run ...` to ensure cuda
We recommend that you follow the setup steps in the [TensorRT guide](https://github.com/NVIDIA/TensorRT) if you
need access to the full suite of tools TensorRT provides, such as `trtexec`, which can convert ONNX models to
UFF TensorRT models. When following that guide, make sure to use the DJL-provided
[docker file](https://github.com/deepjavalibrary/djl/blob/master/docker/tensorrt/Dockerfile) to enable JDK11 in the docker container.
4 changes: 2 additions & 2 deletions docs/create_serving_ready_model.md
@@ -46,7 +46,7 @@ There are two ways to supply configurations to the `Translator`:

Here is an example:

-```
+```config
# serving.properties can be used to define the model's metadata; all the arguments will be
# passed to the TranslatorFactory to create the proper Translator
@@ -73,7 +73,7 @@ softmax=true

You can customize the Translator's behavior with Criteria. For example:

-```
+```java
Criteria<Image, Classifications> criteria = Criteria.builder()
.setTypes(Image.class, Classifications.class) // defines input and output data type
    .optApplication(Application.CV.IMAGE_CLASSIFICATION) // specifies the model's application
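For reference, here is a minimal self-contained sketch of the Criteria pattern this hunk truncates. The `softmax` argument mirrors the serving.properties example above; everything else is illustrative, not from the diff:

```java
import ai.djl.Application;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

Criteria<Image, Classifications> criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)        // input/output types are required
        .optApplication(Application.CV.IMAGE_CLASSIFICATION) // narrows the model search
        .optArgument("softmax", "true")                       // passed through to the TranslatorFactory
        .build();

try (ZooModel<Image, Classifications> model = criteria.loadModel();
     Predictor<Image, Classifications> predictor = model.newPredictor()) {
    // predictor.predict(image) runs inference with the configured Translator
}
```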
2 changes: 1 addition & 1 deletion docs/cv_utils.md
@@ -20,7 +20,7 @@ The [DJL OpenCV extension](../extensions/opencv/README.md) provides better performance than
Java's built-in ImageIO. You only need to add it to your project and DJL will automatically
pick it up:

-```
+```xml
<dependency>
<groupId>ai.djl.opencv</groupId>
<artifactId>opencv</artifactId>
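As a sketch of what "pick it up" means in practice, the image-loading call stays the same once the extension is on the classpath (the file name here is illustrative):

```java
import java.nio.file.Paths;

import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;

// The same factory call is used with or without the extension; with the
// OpenCV extension on the classpath, DJL returns its OpenCV-backed factory.
Image img = ImageFactory.getInstance().fromFile(Paths.get("example.jpg"));
System.out.println(img.getWidth() + "x" + img.getHeight());
```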
4 changes: 2 additions & 2 deletions docs/development/configure_logging.md
@@ -23,7 +23,7 @@ to your project to enable logging (slf4j-simple is not recommended for production):

For Maven:

-```
+```xml
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
@@ -60,7 +60,7 @@ If you want to use another logging framework, such as `logback`, you can just add the dependency:

or for Maven:

-```
+```xml
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
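Application code is written against the SLF4J API either way, so switching bindings needs no code changes. A minimal sketch:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
    // One logger per class is the usual SLF4J convention
    private static final Logger logger = LoggerFactory.getLogger(LoggingExample.class);

    public static void main(String[] args) {
        logger.info("Inference started for image {}", "example.jpg");
        logger.debug("Shown only if the bound framework enables DEBUG");
    }
}
```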
8 changes: 4 additions & 4 deletions docs/development/development_guideline.md
@@ -74,7 +74,7 @@ For larger topics which do not have a corresponding javadoc section, they should

This project uses the Gradle wrapper, so you don't have to install Gradle on your machine. You can just call the wrapper using the following command:

-```
+```bash
./gradlew
```

@@ -100,19 +100,19 @@ If you are developing with an IDE, you can run a test by selecting the test and

From the command line, you can run the following command to run a test:

-```
+```bash
./gradlew :<module>:run -Dmain=<class_name> --args ""
```

For example, if you would like to run the complete integration test, you can use the following command:

-```
+```bash
./gradlew :integration:run -Dmain=ai.djl.integration.IntegrationTest
```

To run an individual integration test from the command line, use the following:

-```
+```bash
./gradlew :integration:run --args="-c <class_name> -m <method_name>"
```

8 changes: 4 additions & 4 deletions docs/development/profiler.md
@@ -11,7 +11,7 @@ In the future, we are considering designing a unified API and a unified output format

Setting the following environment variable generates `profile.json` after the code executes.

-```
+```bash
export MXNET_PROFILER_AUTOSTART=1
```

@@ -30,7 +30,7 @@ DJL has integrated the PyTorch C++ profiler API and exposes `JniUtils.startProfile` and `JniUtils.stopProfile`.
Wrap the code snippet you want to profile between `JniUtils.startProfile` and `JniUtils.stopProfile`.
Here is an example:

-```
+```java
try (ZooModel<Image, Classifications> model = criteria.loadModel()) {
try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
Image image = ImageFactory.getInstance()
Expand All @@ -47,7 +47,7 @@ try (ZooModel<Image, Classifications> model = criteria.loadModel()) {
The output is composed of operator execution records.
Each record contains `name` (operator name), `dur` (time duration), `shape` (input shapes), and `cpu mem` (CPU memory footprint).

-```
+```json
{
"name": "aten::empty",
"ph": "X",
@@ -65,7 +65,7 @@ Each record contains `name` (operator name), `dur` (time duration), `shape` (input

When loading a model, the profiler can be enabled by specifying the desired output file path in the criteria:

-```
+```java
Criteria<Image, Classifications> criteria =
Criteria.builder()
.optOption("profilerOutput", "build/testOrtProfiling")
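A fuller sketch of the same ONNX Runtime profiling setup, assuming the usual image-classification types; the engine name and output path are illustrative:

```java
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

Criteria<Image, Classifications> criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)
        .optEngine("OnnxRuntime")                               // select the ONNX Runtime engine
        .optOption("profilerOutput", "build/testOrtProfiling")  // where the profile is written
        .build();

try (ZooModel<Image, Classifications> model = criteria.loadModel()) {
    // run inference here; ONNX Runtime writes profiling data to the path above
}
```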
2 changes: 1 addition & 1 deletion docs/development/troubleshooting.md
@@ -93,7 +93,7 @@ If the issue persists, you can use the [docker file](https://github.
Please note that this Docker image will only work with Windows Server 2019 by default. If you want it to work with other
versions of Windows, you need to pass the version as an argument as follows:

-```
+```bash
docker build --build-arg version=<YOUR_VERSION>
```

2 changes: 1 addition & 1 deletion docs/faq.md
@@ -47,7 +47,7 @@ run on a single GPU by default, unless the user specifies otherwise.
During training, if you wish to train on multiple GPUs, or to limit the number of GPUs used (which you may want for smaller datasets), configure the `TrainingConfig` by
setting the devices. For example, if you have 7 GPUs available and you want the `Trainer` to train on 5 of them, you can configure it as follows.

-```
+```java
int maxNumberOfGpus = 5;
TrainingConfig config = new DefaultTrainingConfig(initializer, loss)
.setOptimizer(optimizer)
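The hunk cuts the example off at the optimizer line. A sketch of how the device limit is typically applied, mirroring the builder calls shown above and assuming `Engine.getDevices`; `initializer`, `loss`, and `optimizer` are placeholders:

```java
import ai.djl.engine.Engine;
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.TrainingConfig;

// initializer, loss, and optimizer are placeholders from the snippet above
int maxNumberOfGpus = 5;
TrainingConfig config = new DefaultTrainingConfig(initializer, loss)
        .setOptimizer(optimizer)
        // hand the Trainer at most 5 of the available GPUs
        .optDevices(Engine.getInstance().getDevices(maxNumberOfGpus));
```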
4 changes: 2 additions & 2 deletions docs/load_model.md
@@ -24,7 +24,7 @@ to narrow down your search condition and locate the model you want to load.
DJL Builder convention. The methods starting with `set` are for required fields, and those with `opt` are for optional fields.
You must call the `setTypes()` method when creating a `Criteria` object:

-```
+```java
Criteria<Image, Classifications> criteria = Criteria.builder()
.setTypes(Image.class, Classifications.class)
.build();
@@ -95,7 +95,7 @@ naming the model file to be the same as the directory or archive file.
If your model file is located in a sub-folder of the model directory or has a different name,
you can specify the model name with `.optModelName()` in the criteria:

-```
+```java
Criteria<Image, Classifications> criteria = Criteria.builder()
.optModelName("traced_model/resnet18.pt") // specify model file prefix
```
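A complete sketch of that snippet, reusing the `setTypes` call from the earlier example; the file path is the one shown in the hunk:

```java
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;

Criteria<Image, Classifications> criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)
        .optModelName("traced_model/resnet18.pt") // model file inside the model directory
        .build();
```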
6 changes: 3 additions & 3 deletions docs/mxnet/how_to_convert_your_model_to_symbol.md
@@ -3,7 +3,7 @@
DJL currently supports symbolic model loading from MXNet.
A gluon [HybridBlock](https://mxnet.apache.org/api/python/docs/api/gluon/hybrid_block.html) can be converted into a symbol for loading as follows:

-```
+```python
from mxnet import nd
from mxnet.gluon import nn

@@ -30,7 +30,7 @@ These can be loaded in DJL.
In real applications, you may want to create and train a HybridBlock before exporting it.
The code block below shows how you can convert a [GluonCV](https://gluon-cv.mxnet.io/) pretrained model:

-```
+```python
import mxnet as mx
from gluoncv import model_zoo

@@ -52,7 +52,7 @@ It is always recommended to enable the static settings when exporting Apache MXNet models.

If you run hybridize without `static_alloc=True, static_shape=True`:

-```
+```python
net.hybridize()
```

10 changes: 5 additions & 5 deletions docs/paddlepaddle/how_to_create_paddlepaddle_model.md
@@ -25,7 +25,7 @@ just go to the following link:

Then we find the "代码示例" (code example) section here:

-```
+```python
import paddlehub as hub
import cv2

@@ -40,7 +40,7 @@ please replace `'/PATH/TO/IMAGE'` with your local image path.

Then, all we need to do is append one more line to the previous code:

-```
+```python
module.save_inference_model(dirname="model/mobilenet")
```

@@ -59,7 +59,7 @@ Finally, you can directly feed the `mobilenet.zip` file to DJL for inference tasks.

To summarize, here is the pattern for saving a model from the rest of PaddleHub:

-```
+```python
import paddlehub as hub

model = hub.Module(name="modelname")
@@ -77,7 +77,7 @@ First, let's assume you have the code and have already loaded the pretrained weights.

For an imperative model trained using Paddle 2.0, like below:

-```
+```python
class LinearNet(nn.Layer):
def __init__(self):
super(LinearNet, self).__init__()
@@ -107,7 +107,7 @@ is `inference.*` since DJL will only find files with this prefix.

Paddle models created before 2.0 are usually in symbolic form:

-```
+```python
import paddle

paddle.enable_static()
10 changes: 5 additions & 5 deletions docs/paddlepaddle/how_to_create_paddlepaddle_model_zh.md
@@ -24,7 +24,7 @@ There are many sources of PaddlePaddle models. You can choose to download directly from PaddleHub

Then find the code in the "代码示例" (code example) section:

-```
+```python
import paddlehub as hub
import cv2

@@ -39,7 +39,7 @@ result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])

Next, we just need to add the following line to the previous code:

-```
+```python
module.save_inference_model(dirname="model/mobilenet")
```

@@ -58,7 +58,7 @@ module.save_inference_model(dirname="model/mobilenet")

To summarize, the following two lines are the general template for converting a model in PaddleHub:

-```
+```python
import paddlehub as hub

model = hub.Module(name="modelname")
@@ -76,7 +76,7 @@ model.save_inference_model(dirname="model/modelname")

A Paddle 2.0 imperative (dynamic graph) model can be expressed with the following code:

-```
+```python
class LinearNet(nn.Layer):
def __init__(self):
super(LinearNet, self).__init__()
@@ -106,7 +106,7 @@ paddle.jit.save(layer, path)

Paddle models created before 2.0 are in the static graph format:

-```
+```python
import paddle

paddle.enable_static()
2 changes: 1 addition & 1 deletion docs/pytorch/how_to_convert_your_model_to_torchscript.md
@@ -64,7 +64,7 @@ You can trace by using the `torch.traceModule` function.

To run inference with such a model in DJL, you could provide a placeholder NDArray like below:

-```
+```java
NDArray array = NDManager.create("");
array.setName("module_method:get_text_features");
```
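A sketch of how such a placeholder might be used. The `get_text_features` method name comes from the snippet above; the manager setup, the token values, and the assumption that the placeholder leads the NDList are illustrative:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.NDManager;

try (NDManager manager = NDManager.newBaseManager()) {
    // Placeholder whose name routes the call to a module method instead of forward()
    NDArray placeholder = manager.create("");
    placeholder.setName("module_method:get_text_features");

    NDArray tokens = manager.create(new long[] {101, 2023, 102}); // illustrative token ids
    NDList input = new NDList(placeholder, tokens);
    // predictor.predict(input) would then invoke get_text_features on the traced module
}
```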