From a45473caa009e996db68ff32f0053517d2a55494 Mon Sep 17 00:00:00 2001 From: Zach Kimberg Date: Tue, 6 Jun 2023 16:27:24 -0700 Subject: [PATCH] Clean stable_diffusion and add missing .md language blocks (#2635) --- android/README.md | 2 +- android/pytorch-native/README.md | 18 +++++++------- docker/README.md | 6 ++--- docs/create_serving_ready_model.md | 4 ++-- docs/cv_utils.md | 2 +- docs/development/configure_logging.md | 4 ++-- docs/development/development_guideline.md | 8 +++---- docs/development/profiler.md | 8 +++---- docs/development/troubleshooting.md | 2 +- docs/faq.md | 2 +- docs/load_model.md | 4 ++-- .../how_to_convert_your_model_to_symbol.md | 6 ++--- .../how_to_create_paddlepaddle_model.md | 10 ++++---- .../how_to_create_paddlepaddle_model_zh.md | 10 ++++---- ...ow_to_convert_your_model_to_torchscript.md | 2 +- docs/telemetry.md | 6 ++--- engines/mxnet/jnarator/README.md | 2 +- .../onnxruntime/onnxruntime-engine/README.md | 2 +- .../paddlepaddle-engine/README.md | 2 +- .../paddlepaddle-native/README.md | 6 ++--- engines/pytorch/pytorch-engine/README.md | 2 +- engines/pytorch/pytorch-native/README.md | 18 +++++++------- examples/docs/BERT_question_and_answer.md | 2 +- examples/docs/action_recognition.md | 2 +- examples/docs/biggan.md | 2 +- examples/docs/clip_image_text.md | 2 +- examples/docs/face_detection.md | 2 +- examples/docs/face_recognition.md | 4 ++-- examples/docs/image_classification.md | 2 +- examples/docs/instance_segmentation.md | 2 +- examples/docs/mask_detection.md | 2 +- examples/docs/neural_machine_translation.md | 2 +- examples/docs/object_detection.md | 2 +- ...t_detection_with_tensorflow_saved_model.md | 4 ++-- examples/docs/pose_estimation.md | 2 +- examples/docs/semantic_segmentation.md | 6 ++--- examples/docs/sentiment_analysis.md | 2 +- examples/docs/stable_diffusion.md | 24 ++++++++++--------- examples/docs/super_resolution.md | 2 +- examples/docs/train_amazon_review_ranking.md | 6 ++--- examples/docs/train_captcha.md | 4 ++-- examples/docs/train_cifar10_resnet.md | 8 +++---- examples/docs/train_mnist_mlp.md | 4 ++-- examples/docs/train_pikachu_ssd.md | 4 ++-- examples/docs/train_transfer_fresh_fruit.md | 20 ++++++++-------- examples/docs/whisper_speech_text.md | 4 ++-- integration/README.md | 4 ++-- jupyter/README.md | 6 ++--- 48 files changed, 126 insertions(+), 124 deletions(-) diff --git a/android/README.md b/android/README.md index a9c42449b8a..41f258e9c37 100644 --- a/android/README.md +++ b/android/README.md @@ -14,7 +14,7 @@ The minimum API level for DJL Android is 26. In gradle, you can add the 5 modules in your dependencies: -``` +```groovy dependencies { implementation platform("ai.djl:bom:0.22.1") diff --git a/android/pytorch-native/README.md b/android/pytorch-native/README.md index 29c38d6573e..5d6c9172d8f 100644 --- a/android/pytorch-native/README.md +++ b/android/pytorch-native/README.md @@ -7,7 +7,7 @@ Follow this setup guide in order to run DJL apps on an Android. In order to succ ## Prerequisites -``` +```sh # Run the following command (assume you have python3 installed already) export PYTHON=python3 @@ -20,7 +20,7 @@ This will install the android-sdk on your machine as well as python3. 
It sets th ### Linux (Ubuntu 20.04) android-sdk install -``` +```sh # install python and Android sdk sudo apt-get install android-sdk python3 @@ -33,7 +33,7 @@ sudo chown -R ubuntu:ubuntu $ANDROID_HOME ### Mac android-sdk install -``` +```sh # install python and Android sdk brew install android-sdk @@ -48,7 +48,7 @@ sudo chown -R $USER $ANDROID_HOME Find latest command line only tools: [https://developer.android.com/studio#downloads](https://developer.android.com/studio#downloads:~:text=Command%20line%20tools%20only) -``` +```sh # create directory for Android command line tools mkdir -p $ANDROID_HOME/cmdline-tools cd $ANDROID_HOME/cmdline-tools @@ -68,7 +68,7 @@ mv cmdline-tools tools See GitHub actions to ensure latest NDK_VERSION: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml](https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml) -``` +```sh # set Android NDK version and install it export NDK_VERSION=21.1.6352462 echo "y" | sudo ${ANDROID_HOME}/cmdline-tools/tools/bin/sdkmanager --install "ndk;${NDK_VERSION}" @@ -78,7 +78,7 @@ echo "y" | sudo ${ANDROID_HOME}/cmdline-tools/tools/bin/sdkmanager --install "nd See: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml](https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/native_s3_pytorch_android.yml) -``` +```sh # cd into whatever directory holds your djl directory export PYTORCH_VERSION=1.13.0 export ANDROID_NDK=${ANDROID_HOME}/ndk/${NDK_VERSION} @@ -106,7 +106,7 @@ See: [https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/nativ This command unzips all the files we zipped in the previous code block. It puts them into the directories where the DJL build expects to find them when it compiles. -``` +```sh cd ../djl/engines/pytorch/pytorch-native # to avoid download PyTorch native from S3, manually unzip PyTorch native @@ -132,7 +132,7 @@ See: [ https://github.com/deepjavalibrary/djl/blob/master/.github/workflows/publ The final command in this code block `./gradlew pTML` is optional. It stores a local copy of the DJL snapshot in your maven directory. If not done, then the app will pull the snapshot release of DJL from Sonatype. -``` +```sh # move into djl/android directory cd ../../../android @@ -153,7 +153,7 @@ See: [https://github.com/deepjavalibrary/djl-demo/tree/master/android/pytorch_an From Android Studio, with an emulator turned on, run the following commands -``` +```sh cd djl-demo/android/pytorch_android/style_transfer_cyclegan ./gradlew iD ``` diff --git a/docker/README.md b/docker/README.md index 168606c1fd1..5b5bd01be2b 100644 --- a/docker/README.md +++ b/docker/README.md @@ -9,7 +9,7 @@ You can use the [docker file](https://github.com/deepjavalibrary/djl/blob/master Please note that this docker will only work with Windows server 2019 by default. If you want it to work with other versions of Windows, you need to pass the version as an argument as follows: -``` +```bash docker build --build-arg version= ``` @@ -20,7 +20,7 @@ This docker file is a modification of the one provided by NVIDIA in By default this sets up a container using Ubuntu 18.04 and CUDA 11.6.2. 
You can build the container with other versions as follows, but keep in mind the TensorRT software requirements outlined [here](https://github.com/NVIDIA/TensorRT#prerequisites): -``` +```bash docker build --build-arg OS_VERSION= --build-arg CUDA_VERSION= ``` @@ -29,4 +29,4 @@ To run the container, we recommend using `nvidia-docker run ...` to ensure cuda We recommend that you follow the setup steps in the [TensorRT guide](https://github.com/NVIDIA/TensorRT) if you need access to the full suite of tools TensorRT provides, such as `trtexec` which can convert onnx models to uff tensorrt models. When following that guide, make sure to use the DJL provided -[docker file](https://github.com/deepjavalibrary/djl/blob/master/docker/tensorrt/Dockerfile) to enable JDK11 in the docker container. \ No newline at end of file +[docker file](https://github.com/deepjavalibrary/djl/blob/master/docker/tensorrt/Dockerfile) to enable JDK11 in the docker container. diff --git a/docs/create_serving_ready_model.md b/docs/create_serving_ready_model.md index ec79b59d162..a36f086b20d 100644 --- a/docs/create_serving_ready_model.md +++ b/docs/create_serving_ready_model.md @@ -46,7 +46,7 @@ There are two ways to supply configurations to the `Translator`: Here is an example: -``` +```config # serving.properties can be used to define model's metadata, all the arguments will be # passed to TranslatorFactory to create proper Translator @@ -73,7 +73,7 @@ softmax=true You can customize Translator's behavior with Criteria, for example: -``` +```java Criteria criteria = Criteria.builder() .setTypes(Image.class, Classifications.class) // defines input and output data type .optApplication(Application.CV.IMAGE_CLASSIFICATION) // spcific model's application diff --git a/docs/cv_utils.md b/docs/cv_utils.md index ce0ef182adf..95d309620bb 100644 --- a/docs/cv_utils.md +++ b/docs/cv_utils.md @@ -20,7 +20,7 @@ The [DJL OpenCV extension](../extensions/opencv/README.md) provides better perfo java's built-in ImageIO. You only need to add it into your project and DJL will automatically pick it up: -``` +```xml ai.djl.opencv opencv diff --git a/docs/development/configure_logging.md b/docs/development/configure_logging.md index a64c326f24e..5a9552ade1c 100644 --- a/docs/development/configure_logging.md +++ b/docs/development/configure_logging.md @@ -23,7 +23,7 @@ to your project to enable logging (slf4j-simple is not recommended for productio For Maven: -``` +```xml org.slf4j slf4j-simple @@ -60,7 +60,7 @@ If you want to use other logging framework such as `logback`, you can just add t or for Maven: -``` +```xml ch.qos.logback logback-classic diff --git a/docs/development/development_guideline.md b/docs/development/development_guideline.md index a4d8e42b39a..55ddb3a83c3 100644 --- a/docs/development/development_guideline.md +++ b/docs/development/development_guideline.md @@ -74,7 +74,7 @@ For larger topics which do not have a corresponding javadoc section, they should This project uses a gradle wrapper, so you don't have to install gradle on your machine. 
You can just call the gradle wrapper using the following command: -``` +```bash ./gradlew ``` @@ -100,19 +100,19 @@ If you are developing with an IDE, you can run a test by selecting the test and From the command line, you can run the following command to run a test: -``` +```bash ./gradlew ::run -Dmain= --args "" ``` For example, if you would like to run the complete integration test, you can use the following command: -``` +```bash ./gradlew :integration:run -Dmain=ai.djl.integration.IntegrationTest ``` To run an individual integration test from the command line, use the following: -``` +```bash ./gradlew :integration:run --args="-c -m " ``` diff --git a/docs/development/profiler.md b/docs/development/profiler.md index 54e7e702da3..6db5739483c 100644 --- a/docs/development/profiler.md +++ b/docs/development/profiler.md @@ -11,7 +11,7 @@ In the future, we are considering to design a unified APIs and output unified fo By setting the following environment variable, it generates `profile.json` after executing the code. -``` +```bash export MXNET_PROFILER_AUTOSTART=1 ``` @@ -30,7 +30,7 @@ DJL have integrated PyTorch C++ profiler API and expose `JniUtils.startProfile` Wrap the code snippet you want to profile in between `JniUtils.startProfile` and `JniUtils.stopProfile`. Here is an example. -``` +```java try (ZooModel model = criteria.loadModel()) { try (Predictor predictor = model.newPredictor()) { Image image = ImageFactory.getInstance() @@ -47,7 +47,7 @@ try (ZooModel model = criteria.loadModel()) { The output format is composed of operator execution record. Each record contains `name`(operator name), `dur`(time duration), `shape`(input shapes), `cpu mem`(cpu memory footprint). -``` +```json { "name": "aten::empty", "ph": "X", @@ -65,7 +65,7 @@ Each record contains `name`(operator name), `dur`(time duration), `shape`(input When loading a model, the profiler can be enabled by specifying the desired filepath in the criteria: -``` +```java Criteria criteria = Criteria.builder() .optOption("profilerOutput", "build/testOrtProfiling") diff --git a/docs/development/troubleshooting.md b/docs/development/troubleshooting.md index 558c01b1ec5..6b526ad64c5 100644 --- a/docs/development/troubleshooting.md +++ b/docs/development/troubleshooting.md @@ -93,7 +93,7 @@ If the issue continues to persist, you can use the [docker file](https://github. Please note that this docker will only work with Windows server 2019 by default. If you want it to work with other versions of Windows, you need to pass the version as an argument as follows: -``` +```bash docker build --build-arg version= ``` diff --git a/docs/faq.md b/docs/faq.md index b7152619b79..2a6b9e257d7 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -47,7 +47,7 @@ run on a single GPU by default, unless the user specifies otherwise. During training, if you wish to train on multiple GPUs or if you wish to limit the number of GPUs to be used (you may want to limit the number of GPU for smaller datasets), you have to configure the `TrainingConfig` to do so by setting the devices. For example, if you have 7 GPUs available, and you want the `Trainer` to train on 5 GPUs, you can configure it as follows. 
-``` +```java int maxNumberOfGpus = 5; TrainingConfig config = new DefaultTrainingConfig(initializer, loss) .setOptimizer(optimizer) diff --git a/docs/load_model.md b/docs/load_model.md index 8d0cdf26f30..0a84e25d901 100644 --- a/docs/load_model.md +++ b/docs/load_model.md @@ -24,7 +24,7 @@ to narrow down your search condition and locate the model you want to load. DJL Builder convention. The methods start with `set` are required fields, and `opt` for optional fields. You must call `setType()` method when creating a `Criteria` object: -``` +```java Criteria criteria = Criteria.builder() .setTypes(Image.class, Classifications.class) .build(); @@ -95,7 +95,7 @@ naming the model file name to be the same as the directory or archive file. If your model file located in a sub-folder of the model directory or has a different name, you can specify modelName by `.optModelName()` in criteria: -``` +```java Criteria criteria = Criteria.builder() .optModelName("traced_model/resnet18.pt") // specify model file prefix ``` diff --git a/docs/mxnet/how_to_convert_your_model_to_symbol.md b/docs/mxnet/how_to_convert_your_model_to_symbol.md index 79b1319db27..be178afe437 100644 --- a/docs/mxnet/how_to_convert_your_model_to_symbol.md +++ b/docs/mxnet/how_to_convert_your_model_to_symbol.md @@ -3,7 +3,7 @@ DJL currently supports symbolic model loading from MXNet. A gluon [HybridBlock](https://mxnet.apache.org/api/python/docs/api/gluon/hybrid_block.html) can be converted into a symbol for loading by doing as follows: -``` +```python from mxnet import nd from mxnet.gluon import nn @@ -30,7 +30,7 @@ These can be loaded in DJL. In real applications, you may want to create and train a HybridBlock before exporting it. The code block below shows how you can convert a [GluonCV](https://gluon-cv.mxnet.io/) pretrained model: -``` +```python import mxnet as mx from gluoncv import model_zoo @@ -52,7 +52,7 @@ It is always recommended enabling the static settings when exporting Apache MXNe If you run hybridize without `static_alloc=True, static_shape=True`: -``` +```python net.hybridize() ``` diff --git a/docs/paddlepaddle/how_to_create_paddlepaddle_model.md b/docs/paddlepaddle/how_to_create_paddlepaddle_model.md index 8c920d6d256..042acbd2d61 100644 --- a/docs/paddlepaddle/how_to_create_paddlepaddle_model.md +++ b/docs/paddlepaddle/how_to_create_paddlepaddle_model.md @@ -25,7 +25,7 @@ just go to the following link: Then we find "代码示例" section here: -``` +```python import paddlehub as hub import cv2 @@ -40,7 +40,7 @@ please replace the `'/PATH/TO/IMAGE'` to your local image path. Then, all we need to do is appending one more line to the previous code: -``` +```python module.save_inference_model(dirname="model/mobilenet") ``` @@ -59,7 +59,7 @@ Finally, you can directly feed the `mobilenet.zip` file in DJL for inference tas As a summary, here is the pattern for you to save the model in the rest of PaddleHub: -``` +```python import paddlehub as hub model = hub.Module(name="modelname") @@ -77,7 +77,7 @@ Firstly let's assume you have code, and you already load the pretrained weight. For imperative model trained using Paddle 2.0 like below: -``` +```python class LinearNet(nn.Layer): def __init__(self): super(LinearNet, self).__init__() @@ -107,7 +107,7 @@ is `inference.*` since DJL will only find files with this prefix. 
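As a point of reference, once the exported files carry the `inference.*` prefix, the resulting directory can typically be loaded from Java through the DJL `Criteria` API with the PaddlePaddle engine. The sketch below is only an illustration: the path `build/model/mobilenet` and the raw `NDList` in/out types are placeholders, and a real application would normally plug in a proper `Translator` for pre- and post-processing.

```java
import java.nio.file.Paths;

import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDList;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;
import ai.djl.translate.NoopTranslator;

public class LoadExportedPaddleModel {

    public static void main(String[] args) throws Exception {
        Criteria<NDList, NDList> criteria =
                Criteria.builder()
                        .setTypes(NDList.class, NDList.class)
                        // placeholder: directory containing the exported inference.* files
                        .optModelPath(Paths.get("build/model/mobilenet"))
                        .optEngine("PaddlePaddle")
                        .optTranslator(new NoopTranslator())
                        .build();

        try (ZooModel<NDList, NDList> model = criteria.loadModel();
                Predictor<NDList, NDList> predictor = model.newPredictor()) {
            // feed a preprocessed NDList here and read the raw output NDList back
        }
    }
}
```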
For Paddle model created before 2.0, it is usually in Symbolic form: -``` +```python import paddle paddle.enable_static() diff --git a/docs/paddlepaddle/how_to_create_paddlepaddle_model_zh.md b/docs/paddlepaddle/how_to_create_paddlepaddle_model_zh.md index 6148a1e31e7..74e5dec634f 100644 --- a/docs/paddlepaddle/how_to_create_paddlepaddle_model_zh.md +++ b/docs/paddlepaddle/how_to_create_paddlepaddle_model_zh.md @@ -24,7 +24,7 @@ PaddlePaddle的模型来源有很多种。你可以选择直接从 PaddleHub 下 然后在 "代码示例" 找到代码 -``` +```python import paddlehub as hub import cv2 @@ -39,7 +39,7 @@ result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')]) 接下来,我们只需要添加以下一行到之前的代码上: -``` +```python module.save_inference_model(dirname="model/mobilenet") ``` @@ -58,7 +58,7 @@ module.save_inference_model(dirname="model/mobilenet") 总结, 以下两行就是在 PaddleHub 中转换模型的泛用模版: -``` +```python import paddlehub as hub model = hub.Module(name="modelname") @@ -76,7 +76,7 @@ model.save_inference_model(dirname="model/modelname") Paddle 2.0 的动态图模型可用如下代码表达: -``` +```python class LinearNet(nn.Layer): def __init__(self): super(LinearNet, self).__init__() @@ -106,7 +106,7 @@ paddle.jit.save(layer, path) 对于 2.0 以前的Paddle模型, 它们会是静态图的格式: -``` +```python import paddle paddle.enable_static() diff --git a/docs/pytorch/how_to_convert_your_model_to_torchscript.md b/docs/pytorch/how_to_convert_your_model_to_torchscript.md index 7387884d18a..4dd4b3102d7 100644 --- a/docs/pytorch/how_to_convert_your_model_to_torchscript.md +++ b/docs/pytorch/how_to_convert_your_model_to_torchscript.md @@ -64,7 +64,7 @@ You can trace by using the `torch.traceModule` function. To run inference with such model in DJL, you could provide a placeholder NDArray like below: -``` +```java NDArray array = NDManager.create(""); array.setName("module_method:get_text_features"); ``` diff --git a/docs/telemetry.md b/docs/telemetry.md index 715f48f98b8..d6ff9b20bc1 100644 --- a/docs/telemetry.md +++ b/docs/telemetry.md @@ -7,18 +7,18 @@ the system is collected or retained. 
To opt out of usage tracking for DJL, you can set the `OPT_OUT_TRACKING` environment variable: -``` +```bash export OPT_OUT_TRACKING=true ``` or Java System property: -``` +```java System.setProperty("OPT_OUT_TRACKING", "true") ``` Usage tracking is also disable in `offline` mode: -``` +```java System.setProperty("offline", "true") ``` diff --git a/engines/mxnet/jnarator/README.md b/engines/mxnet/jnarator/README.md index 9c7b9c92c10..4a10bc33daa 100644 --- a/engines/mxnet/jnarator/README.md +++ b/engines/mxnet/jnarator/README.md @@ -17,7 +17,7 @@ walks through the tree to find C API calls and generates their corresponding Jav The following example demonstrates how to use this module in the Apache MXNet module: -``` +```groovy task jnarator(dependsOn: ":jnarator:jar") { doLast { File jnaGenerator = project(":jnarator").jar.outputs.files.singleFile diff --git a/engines/onnxruntime/onnxruntime-engine/README.md b/engines/onnxruntime/onnxruntime-engine/README.md index a4f8dd9666d..5908a05e547 100644 --- a/engines/onnxruntime/onnxruntime-engine/README.md +++ b/engines/onnxruntime/onnxruntime-engine/README.md @@ -80,7 +80,7 @@ Maven: Gradle: -``` +```groovy implementation("ai.djl.onnxruntime:onnxruntime-engine:0.22.1") { exclude group: "com.microsoft.onnxruntime", module: "onnxruntime" } diff --git a/engines/paddlepaddle/paddlepaddle-engine/README.md b/engines/paddlepaddle/paddlepaddle-engine/README.md index 7ad07dfd3c1..e2270862d7c 100644 --- a/engines/paddlepaddle/paddlepaddle-engine/README.md +++ b/engines/paddlepaddle/paddlepaddle-engine/README.md @@ -73,7 +73,7 @@ For macOS, you can use the following library: To use Linux packages, users are also required to set `LD_LIBRARY_PATH` to the folder: -``` +```sh LD_LIBRARY_PATH=$HOME/.djl.ai/paddle/2.2.2--linux-x86_64 ``` diff --git a/engines/paddlepaddle/paddlepaddle-native/README.md b/engines/paddlepaddle/paddlepaddle-native/README.md index 4a0ac954c65..595e51d949b 100644 --- a/engines/paddlepaddle/paddlepaddle-native/README.md +++ b/engines/paddlepaddle/paddlepaddle-native/README.md @@ -7,7 +7,7 @@ You need to install `cmake` and C++ compiler on your machine in order to build ### Linux -``` +```sh apt install cmake g++ ``` @@ -21,13 +21,13 @@ Use the following task to build PaddlePaddle JNI library: ### Mac/Linux -``` +```sh ./gradlew compileJNI ``` ### Windows -``` +```cmd gradlew compileJNI ``` diff --git a/engines/pytorch/pytorch-engine/README.md b/engines/pytorch/pytorch-engine/README.md index 210f78b1d4a..4332f7afd44 100644 --- a/engines/pytorch/pytorch-engine/README.md +++ b/engines/pytorch/pytorch-engine/README.md @@ -76,7 +76,7 @@ It will automatically determine the appropriate jars for your system based on th If you are running on an older operating system (like CentOS 7), you have to use [precxx11 build](#for-pre-cxx11-build) or set system property to auto select for precxx11 binary: -``` +```java System.setProperty("PYTORCH_PRECXX11", "true"); ``` diff --git a/engines/pytorch/pytorch-native/README.md b/engines/pytorch/pytorch-native/README.md index 4e860656cf5..607ac06531b 100644 --- a/engines/pytorch/pytorch-native/README.md +++ b/engines/pytorch/pytorch-native/README.md @@ -9,7 +9,7 @@ You need to install `cmake` and C++ compiler on your machine in order to build ### Linux -``` +```sh apt-get install -y locales cmake curl unzip software-properties-common ``` @@ -19,13 +19,13 @@ Use the following task to build PyTorch JNI library: ### Mac/Linux -``` +```sh ./gradlew compileJNI ``` ### Windows -``` +```cmd gradlew compileJNI ``` @@ 
-38,14 +38,14 @@ Use the following task to build pytorch JNI library for GPU: ### Mac/Linux -``` +```sh # compile CUDA 11.X version of JNI ./gradlew compileJNI -Pcu11 ``` ## Windows -``` +```cmd # compile CUDA 11.X version of JNI gradlew compileJNI -Pcu11 ``` @@ -53,7 +53,7 @@ gradlew compileJNI -Pcu11 ### Format C++ code It uses clang-format to format the code. -``` +```sh ./gradlew formatCpp ``` @@ -107,7 +107,7 @@ To implement a simple pytorch feature, generally you can do the following steps. 1. Find the c-api in torch library for the feature to add. This can be done by searching in the document like [this](https://pytorch.org/cppdocs/api/function_namespaceat_1a854b1b19549a17f87a69b5f6b1134e22.html?highlight=bmm) or searching in the torch cpp source code. 2. Implement the JNI and api's in Java. The JNI can then be compiled with gradle commands. Here is the commands you can use on cpu machine, to compile JNI and run it with java api. - ``` + ```sh cd engines/pytorch/pytorch-native ./gradlew cleanJNI compileJNI @@ -118,7 +118,7 @@ To implement a simple pytorch feature, generally you can do the following steps. **Note**: In case your need test with GPU, the compilation needs to be the following: - ``` + ```sh ./gradlew cleanJNI ./gradlew compileJNI -Pcu11 ``` @@ -128,7 +128,7 @@ To implement a simple pytorch feature, generally you can do the following steps. Run the following tasks - ``` + ```sh ./gradlew fJ fC checkstyleMain checkstyleTest pmdMain pmdTest ./gradlew test ``` diff --git a/examples/docs/BERT_question_and_answer.md b/examples/docs/BERT_question_and_answer.md index 63995e47783..9c65bb31742 100644 --- a/examples/docs/BERT_question_and_answer.md +++ b/examples/docs/BERT_question_and_answer.md @@ -31,7 +31,7 @@ Follow [setup](../../docs/development/setup.md) to configure your development en ### Run Inference -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.BertQaInference ``` diff --git a/examples/docs/action_recognition.md b/examples/docs/action_recognition.md index 224d7df6213..72b585de37a 100644 --- a/examples/docs/action_recognition.md +++ b/examples/docs/action_recognition.md @@ -20,7 +20,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.ActionRecognition ``` diff --git a/examples/docs/biggan.md b/examples/docs/biggan.md index 8b79f2a95c5..646b9071a46 100644 --- a/examples/docs/biggan.md +++ b/examples/docs/biggan.md @@ -28,7 +28,7 @@ int[] input = {100, 207, 971, 970, 933}; ### Build the project and run Use the following commands to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.BigGAN ``` diff --git a/examples/docs/clip_image_text.md b/examples/docs/clip_image_text.md index 815dd0e453e..68970eb09dc 100644 --- a/examples/docs/clip_image_text.md +++ b/examples/docs/clip_image_text.md @@ -20,7 +20,7 @@ We expect cats text will win based on the image. 
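For intuition about what "win" means here: the example turns the model's per-text similarity scores into probabilities. The snippet below is only a sketch with made-up logit values (it assumes a DJL engine such as PyTorch is available on the classpath); the actual example computes these scores with the CLIP model first.

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;

public class ClipScoreSketch {

    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            // hypothetical similarity logits for the two captions (cats first, dogs second)
            NDArray logits = manager.create(new float[] {24.5f, 19.1f});
            NDArray probabilities = logits.softmax(0);
            // with the cat image above, the first entry should dominate
            System.out.println(probabilities);
        }
    }
}
```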
## Run the example -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.clip.ImageTextComparison ``` diff --git a/examples/docs/face_detection.md b/examples/docs/face_detection.md index 250472b7bbd..c5c6874dd97 100644 --- a/examples/docs/face_detection.md +++ b/examples/docs/face_detection.md @@ -24,7 +24,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.face.RetinaFaceDetection ./gradlew run -Dmain=ai.djl.examples.inference.face.LightFaceDetection diff --git a/examples/docs/face_recognition.md b/examples/docs/face_recognition.md index 08c7b609ee1..4e8e0d60b11 100644 --- a/examples/docs/face_recognition.md +++ b/examples/docs/face_recognition.md @@ -25,7 +25,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.face.FeatureExtraction ``` @@ -36,7 +36,7 @@ Your output should look like the following: [INFO ] - [-0.04026184, -0.019486362, -0.09802659, 0.01700999, 0.037829027, ...] ``` -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.face.FeatureComparison ``` diff --git a/examples/docs/image_classification.md b/examples/docs/image_classification.md index 2238078cec7..1f515f9680f 100644 --- a/examples/docs/image_classification.md +++ b/examples/docs/image_classification.md @@ -30,7 +30,7 @@ You can find the following image in your project test resource folder: `src/test Run the project by using the following command: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.ImageClassification ``` diff --git a/examples/docs/instance_segmentation.md b/examples/docs/instance_segmentation.md index f325b42bad1..1c3559768f8 100644 --- a/examples/docs/instance_segmentation.md +++ b/examples/docs/instance_segmentation.md @@ -22,7 +22,7 @@ You can find the image used in this example in project test resource folder: `sr ### Build the project and run -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.InstanceSegmentation ``` diff --git a/examples/docs/mask_detection.md b/examples/docs/mask_detection.md index 43ae4152818..303ed5219d6 100644 --- a/examples/docs/mask_detection.md +++ b/examples/docs/mask_detection.md @@ -30,7 +30,7 @@ We use the following image as input: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.MaskDetection ``` diff --git a/examples/docs/neural_machine_translation.md b/examples/docs/neural_machine_translation.md index 146be906245..e508dc04802 100644 --- a/examples/docs/neural_machine_translation.md +++ b/examples/docs/neural_machine_translation.md @@ -31,7 +31,7 @@ Follow [setup](../../docs/development/setup.md) to configure your development en ### Run Inference -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.NeuralMachineTranslation ``` diff --git a/examples/docs/object_detection.md b/examples/docs/object_detection.md index 6c58a6f758d..7d0898128b9 100644 --- a/examples/docs/object_detection.md +++ b/examples/docs/object_detection.md @@ -24,7 +24,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples 
./gradlew run -Dmain=ai.djl.examples.inference.ObjectDetection ``` diff --git a/examples/docs/object_detection_with_tensorflow_saved_model.md b/examples/docs/object_detection_with_tensorflow_saved_model.md index 810e60b13b1..d02ed663c17 100644 --- a/examples/docs/object_detection_with_tensorflow_saved_model.md +++ b/examples/docs/object_detection_with_tensorflow_saved_model.md @@ -21,7 +21,7 @@ The pre-trained SSD model can be found [here](http://download.tensorflow.org/mod You'll find a folder named ```ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model```. You need to specify model name to let ModelZoo load the model from right location: -``` +```java Criteria criteria = Criteria.builder() .setTypes(Image.class, DetectedObjects.class) .optModelUrls(modelUrl) @@ -43,7 +43,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.ObjectDetectionWithTensorflowSavedModel ``` diff --git a/examples/docs/pose_estimation.md b/examples/docs/pose_estimation.md index e93ad3c8fcf..c8b4df2cd4f 100644 --- a/examples/docs/pose_estimation.md +++ b/examples/docs/pose_estimation.md @@ -20,7 +20,7 @@ You can find the image used in this example in the project test resource folder: ### Build the project and run Use the following command to run the project: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.PoseEstimation ``` diff --git a/examples/docs/semantic_segmentation.md b/examples/docs/semantic_segmentation.md index c1d242df5be..2e22d6d963d 100644 --- a/examples/docs/semantic_segmentation.md +++ b/examples/docs/semantic_segmentation.md @@ -22,7 +22,7 @@ You can find the image used in this example in project test resource folder: `sr ### Build the project and run -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.SemanticSegmentation ``` @@ -49,13 +49,13 @@ You can find the image used in this example in project test resource folder: `sr In the `SemanticSegmentation.java` file, find the `predict()` method. Change the `imageFile` path to look like this: -``` +```jav Path imageFile = Paths.get("src/test/resources/dog_bike_car.jpg"); ``` ### Build the project and run -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.InstanceSegmentation ``` diff --git a/examples/docs/sentiment_analysis.md b/examples/docs/sentiment_analysis.md index 81dbba9290a..c83e2f4c1ef 100644 --- a/examples/docs/sentiment_analysis.md +++ b/examples/docs/sentiment_analysis.md @@ -27,7 +27,7 @@ Follow [setup](../../docs/development/setup.md) to configure your development en ### Run Inference -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.SentimentAnalysis ``` diff --git a/examples/docs/stable_diffusion.md b/examples/docs/stable_diffusion.md index 98b9ee81aea..7eb544646ee 100644 --- a/examples/docs/stable_diffusion.md +++ b/examples/docs/stable_diffusion.md @@ -1,20 +1,22 @@ ## Stable Diffusion in DJL +[Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) is an open-source model +developed by Stability.ai. It aimed to produce images (artwork, pictures, etc.) based on +an input sentence and images. + +This example is a basic reimplementation of Stable Diffusion in Java. +It can be run with CPU or GPU using the PyTorch engine. 
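As a rough guide (this helper is not part of the example itself), the usual DJL pattern for choosing between CPU and GPU is to ask the PyTorch engine whether a GPU is visible and fall back to CPU otherwise:

```java
import ai.djl.Device;
import ai.djl.engine.Engine;

public class DeviceSelection {

    public static void main(String[] args) {
        Engine engine = Engine.getEngine("PyTorch");
        // prefer a GPU when the engine reports one, otherwise run on CPU
        Device device = engine.getGpuCount() > 0 ? Device.gpu() : Device.cpu();
        System.out.println("Stable Diffusion would run on: " + device);
    }
}
```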
+ Java solution Developed by: + - Tyler (Github: tosterberg) - Calvin (Github: mymagicpower) - Qing (GitHub: lanking520) -[Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) is a open-source model -developed by Stability.ai. It aimed to produce artworks based on -the input sentences and images. - -This example is a basic reimplementation of Stable Diffusion in Java. -The example can be both running in CPU/GPU with PyTorch engine. - ## Model Architecture -We took 4 components from the original Stable Diffusion models and traced them in PyTorch: +We took four components from the original Stable Diffusion models and traced them in PyTorch: + - Text Encoder: The CLIP encoder used for text embedding generation - Image Encoder: The VAE encoder to build image to embedding - Image Decoder: The VAE decoder to convert embedding to image @@ -22,10 +24,10 @@ We took 4 components from the original Stable Diffusion models and traced them i ## Getting started -We recommend to run the model on GPU devices, CPU generation is slow. +We recommend running the model on GPU devices because CPU generation is slow. To run this example, just do: -``` +```bash cd examples ./gradlew run -Dmain=ai.djl.examples.inference.stablediffusion.ImageGeneration ``` @@ -42,7 +44,7 @@ Output: ## Conversion script -Use the below script to get the exported model +Use the below script to export the model: ```python from diffusers import EulerDiscreteScheduler, UNet2DConditionModel, AutoencoderKL diff --git a/examples/docs/super_resolution.md b/examples/docs/super_resolution.md index db138104d01..1d9ea6b1d79 100644 --- a/examples/docs/super_resolution.md +++ b/examples/docs/super_resolution.md @@ -28,7 +28,7 @@ List input = Arrays.asList( ### Build the project and run Use the following commands to run the project: -``` +```bash cd examples ./gradlew run -Dmain=ai.djl.examples.inference.sr.SuperResolution ``` diff --git a/examples/docs/train_amazon_review_ranking.md b/examples/docs/train_amazon_review_ranking.md index 1875177dcc0..e9a16b30db9 100644 --- a/examples/docs/train_amazon_review_ranking.md +++ b/examples/docs/train_amazon_review_ranking.md @@ -12,11 +12,11 @@ Follow [setup](../../docs/development/setup.md) to configure your development en ## Train the model -In this example, we used [GluonNLP pretrained DistilBert](https://nlp.gluon.ai/model_zoo/bert/index.html) model followed by a simple MLP layer. -The input is the BERT formatted tokens and output is the star rating. +In this example, we used the [GluonNLP pretrained DistilBert](https://nlp.gluon.ai/model_zoo/bert/index.html) model followed by a simple MLP layer. +The input is the BERT formatted tokens and the output is the star rating. We recommend using GPU for training since CPU training is slow with this dataset. -``` +```bash cd examples ./gradlew run -Dmain=ai.djl.examples.training.transferlearning.TrainAmazonReviewRanking --args="-e 2 -b 8 -g 1" ``` diff --git a/examples/docs/train_captcha.md b/examples/docs/train_captcha.md index 93e14b37054..bc39116f5c4 100644 --- a/examples/docs/train_captcha.md +++ b/examples/docs/train_captcha.md @@ -14,7 +14,7 @@ To configure your development environment, follow [setup](../../docs/development The following command trains the model for two epochs. The trained model is saved in the `build/model` folder. 
-``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainCaptcha ``` @@ -48,7 +48,7 @@ The results show that you reached 88 percent validation accuracy at the end of t You can also run the example with your own arguments. For example, you can train for five epochs using batch size 64 and save the model to a specified folder `mlp_model` using the following command: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainCaptcha --args="-e 5 -b 64 -o mlp_model" ``` diff --git a/examples/docs/train_cifar10_resnet.md b/examples/docs/train_cifar10_resnet.md index 1bcc0ffcf92..cfaf03f8a61 100644 --- a/examples/docs/train_cifar10_resnet.md +++ b/examples/docs/train_cifar10_resnet.md @@ -35,7 +35,7 @@ For example, you can create ResNet50 using the following code: To run the example, use the following command: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.transferlearning.TrainResnetWithCifar10 --args="-e 10 -b 32 -g 1" ``` @@ -49,7 +49,7 @@ Models are trained in Python and exported to `.symbol`(model architecture) and ` To run the example using MXNet model, use the option `-s` as shown in the following command: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.transferlearning.TrainResnetWithCifar10 --args="-e 10 -b 32 -g 1 -s -p" ``` @@ -86,7 +86,7 @@ They come with powerful Nvidia GPUs, and include pre-installed drivers and all d For example, on an [p3.16xlarge](https://aws.amazon.com/ec2/instance-types/) instance with [Ubuntu Deep Learning Base AMI](https://aws.amazon.com/marketplace/pp/Amazon-Web-Services-Deep-Learning-Base-AMI-Amazon-/B077GFM7L7), run the following command to check the GPU status, driver information, and CUDA version. -``` +```sh nvidia-smi ``` @@ -140,7 +140,7 @@ Usually, you use `32*number_of_gpus`, so each GPU will get a data batch size of Run the following command to train using 4 GPUs: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.transferlearning.TrainResnetWithCifar10 --args="-e 10 -b 128 -g 4 -p" ``` diff --git a/examples/docs/train_mnist_mlp.md b/examples/docs/train_mnist_mlp.md index 700e8302af3..72b591d062a 100644 --- a/examples/docs/train_mnist_mlp.md +++ b/examples/docs/train_mnist_mlp.md @@ -19,7 +19,7 @@ To configure your development environment, follow [setup](../../docs/development The following command trains the model for two epochs. The trained model is saved in the `build/model` folder. -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainMnist ``` @@ -54,7 +54,7 @@ The results show that you reached 96.93 percent validation accuracy at the end o You can also run the example with your own arguments. For example, you can train for five epochs using batch size 64 and save the model to a specified folder `mlp_model` using the following command: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainMnist --args="-e 5 -b 64 -o mlp_model" ``` diff --git a/examples/docs/train_pikachu_ssd.md b/examples/docs/train_pikachu_ssd.md index a0daf21e485..2bf7a3af475 100644 --- a/examples/docs/train_pikachu_ssd.md +++ b/examples/docs/train_pikachu_ssd.md @@ -19,7 +19,7 @@ Follow [setup](../../docs/development/setup.md) to configure your development en ### Build the project and run it The following command trains the model for 2 epochs. The trained model is saved in the following folder: `build/model`. 
-``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainPikachu ``` @@ -51,7 +51,7 @@ Validating: 100% |████████████████████ You can also run the example with your own arguments, for example, to train 5 epochs using batch size 64, and save it to a specified folder `ssd_model`: -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.training.TrainPikachu --args="-e 5 -b 64 -o ssd_model" ``` diff --git a/examples/docs/train_transfer_fresh_fruit.md b/examples/docs/train_transfer_fresh_fruit.md index 7813968fbea..a4802c67733 100644 --- a/examples/docs/train_transfer_fresh_fruit.md +++ b/examples/docs/train_transfer_fresh_fruit.md @@ -84,7 +84,7 @@ options to configure the model. Among them, `trainParam` is an option specific f learning (or model retraining). Setting it "false" will freeze the parameter in the loaded embedding layer (or model), and "true" will be the other way around. -``` +```java String modelUrl = "/EXPORT_PATH/resnet18_embedding.pt"; Criteria criteria = Criteria.builder() .setTypes(NDList.class, NDList.class) @@ -102,7 +102,7 @@ the output dimension of which is the number of classes, i.e., 2 in this task. We block model to contain the embedding and fully connected layer. The final output is a SoftMax function to get class probability, as shown below. -``` +```java Block blocks = new SequentialBlock() .add(baseBlock) .addSingleton(nd -> nd.squeeze(new int[] {2, 3})) // squeeze the size-1 dimensions from the baseBlock @@ -117,7 +117,7 @@ function (`SoftmaxCrossEntropy` in this case), the evaluation metric (`Accuracy` training listener which is used to fetch the training monitoring data, and so on. In our task, they are specified as shown below. -``` +```java private static DefaultTrainingConfig setupTrainingConfig(Block baseBlock) { String outputDir = "build/fruits"; SaveModelTrainingListener listener = new SaveModelTrainingListener(outputDir); @@ -146,7 +146,7 @@ in the embedding layer is not changed too much. This assignment of learning rate `learningRateTracker`, which is then fed into the `learningRateTracker` option in `Optimizer`, as shown below. -``` +```java // Customized learning rate float lr = 0.001f; FixedPerVarTracker.Builder learningRateTrackerBuilder = FixedPerVarTracker.builder().setDefaultValue(lr); @@ -161,14 +161,14 @@ config.optOptimizer(optimizer); After this step, a training configuration is returned by `setupTrainingConfig` function. It is then used to set the trainer. -``` +```java Trainer trainer = model.newTrainer(config); -``` +``` Next, the trainer is initialized by the following code, where the parameters' shape and initial value in each block will be specified. The `inputShape` has to be known beforehand. -``` +```java int batchSize = 32; Shape inputShape = new Shape(batchSize, 3, 224, 224); trainer.initialize(inputShape); @@ -176,7 +176,7 @@ trainer.initialize(inputShape); **Data loading.** The data is loaded and preprocessed with the following function. -``` +```java private static RandomAccessDataset getData(String usage, int batchSize) throws TranslateException, IOException { float[] mean = {0.485f, 0.456f, 0.406f}; @@ -211,7 +211,7 @@ are called. In DJL, during the creation of `Model` and `ZooModel resources (e.g., memories in the assigned in PyTorch) are allocated. These resources are managed by `NDManager` which inherits `AutoCloseable` class. 
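Because these classes implement `AutoCloseable`, the usual pattern is to wrap them in try-with-resources so the native memory is released deterministically. The fragment below is a minimal sketch reusing the `blocks` and `config` objects defined earlier in this document:

```java
try (Model model = Model.newInstance("transferFreshFruit")) {
    model.setBlock(blocks);
    try (Trainer trainer = model.newTrainer(config)) {
        // training happens here; NDArrays created under the trainer's
        // NDManager are freed automatically when these blocks close
    }
}
```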
-``` +```java EasyTrain.fit(trainer, numEpoch, datasetTrain, datasetTest); model.save(Paths.get("SAVE_PATH"), "transferFreshFruit"); @@ -242,7 +242,7 @@ the `FreshFruit` dataset. The full experiment code is available In this experiment, the training dataset size needs to be controlled and randomly chosen. This part is implemented as below, where `cut` is the size of the training data. -``` +```java List batchIndexList = new ArrayList<>(); try (NDManager manager = NDManager.newBaseManager()) { NDArray indices = manager.randomPermutation(dataset.size()); diff --git a/examples/docs/whisper_speech_text.md b/examples/docs/whisper_speech_text.md index 245a5b8b1c5..55c7f67a3db 100644 --- a/examples/docs/whisper_speech_text.md +++ b/examples/docs/whisper_speech_text.md @@ -12,7 +12,7 @@ https://resources.djl.ai/audios/jfk.flac ## Run the example -``` +```sh cd examples ./gradlew run -Dmain=ai.djl.examples.inference.whisper.SpeechToTextGeneration ``` @@ -54,4 +54,4 @@ transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[ print("Traced: " + transcription) torch.jit.save(traced_model, "whisper_en.pt") -``` \ No newline at end of file +``` diff --git a/integration/README.md b/integration/README.md index 1874c8fdfff..1e71289065a 100644 --- a/integration/README.md +++ b/integration/README.md @@ -12,12 +12,12 @@ When running the integration tests, code coverage is also collected. The easiest ## Switch Engine for tests You can switch the engine through setting the system property `ai.djl.default_engine`: -``` +```bash ./gradlew build -Dai.djl.default_engine= ``` ### Windows PowerShell -``` +```bash ..\gradlew build "-Dai.djl.default_engine=" ``` diff --git a/jupyter/README.md b/jupyter/README.md index 3b7617ff970..17b0a9c9405 100644 --- a/jupyter/README.md +++ b/jupyter/README.md @@ -56,7 +56,7 @@ You may want to use docker for simple installation or you are using Windows. ### Run docker image -``` +```sh cd jupyter docker run -itd -p 127.0.0.1:8888:8888 -v $PWD:/home/jupyter deepjavalibrary/jupyter ``` @@ -67,14 +67,14 @@ You can open the `http://localhost:8888` to see the hosted instance on docker. You can read [Dockerfile](https://github.com/deepjavalibrary/djl/blob/master/jupyter/Dockerfile) for detail. To build docker image: -``` +```sh cd jupyter docker build -t deepjavalibrary/jupyter . ``` ### Run docker compose -``` +```sh cd jupyter docker-compose build docker-compose up -d