fdir > v3.0 closely follows the builder pattern, so you configure an instance of the crawler fluently. So instead of doing:
fdir.sync("path/to/dir", {
includeBasePath: true,
});
You will simply do:
new fdir()
.withBasePath()
.crawl("path/to/dir")
.sync();
Make sure you have Node.js installed along with npm or yarn.
Using yarn:
$ yarn add fdir
Using npm:
$ npm install fdir
You will also need to import fdir at the top of your file, like this:
ES5 Require
const { fdir } = require("fdir");
ES6 Import
import { fdir } from "fdir";
const crawler = new fdir();
const files = crawler.crawl("/path/to/dir").sync();
Easy, peasy!
The crawler options are in the form of methods. Each method returns the current instance of the crawler to enable fluency/method chaining.
Example:
const crawler = new fdir()
.withBasePath()
.withDirs()
.withMaxDepth(5);
withBasePath()
Use this to add the base path to each output path.
By default, fdir does not add the base path to the output. For example, if you crawl node_modules, the output will contain only the filenames.
Usage
const crawler = new fdir().withBasePath();
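To make the difference concrete, here is a sketch assuming a hypothetical path/to/dir containing a.js and sub/b.js:
// without withBasePath(): ["a.js", "b.js"] (filenames only)
// with withBasePath(): ["path/to/dir/a.js", "path/to/dir/sub/b.js"]
const files = new fdir().withBasePath().crawl("path/to/dir").sync();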
withDirs()
Use this to also add the directories to the output.
By default, if you are crawling node_modules, the output will contain only the files, ignoring all directories including node_modules itself.
Usage
const crawler = new fdir().withDirs();
withSymlinks()
Use this to follow all symlinks recursively.
Parameters:
resolvePaths: boolean — By default, fdir returns original paths to files irrespective of whether they are inside a symlinked directory or not. If you want the paths to be relative to the symlink, set this flag to false. (Default is true.)
NOTE: This will affect crawling performance.
Usage
// to resolve all symlinked paths to their original path
const crawler = new fdir().withSymlinks({ resolvePaths: true });
// to disable path resolution
const crawler = new fdir().withSymlinks({ resolvePaths: false });
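For illustration, assume a hypothetical symlink ./link pointing at /real/target, which contains file.txt:
// default (resolvePaths: true): output uses the resolved location,
// e.g. something like "/real/target/file.txt"
// resolvePaths: false: output keeps the path under the symlink,
// e.g. something like "link/file.txt"
const files = new fdir()
  .withSymlinks({ resolvePaths: false })
  .crawl("link")
  .sync();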
withMaxDepth()
Use this to limit the maximum depth fdir will crawl to before stopping.
By default, fdir crawls recursively until the last directory.
Usage
const crawler = new fdir().withMaxDepth(5);
withMaxFiles()
Use this to limit the maximum number of files fdir will crawl before stopping.
Usage
const crawler = new fdir().withMaxFiles(100);
withFullPaths()
Use this to get full absolute paths in the output.
By default, fdir returns filenames.
Usage
const crawler = new fdir().withFullPaths();
withRelativePaths()
Use this to get paths relative to the root directory in the output.
Usage
const crawler = new fdir().withRelativePaths();
withPathSeparator()
Use this to set the path separator used in the output, e.g. to always get forward slashes even on Windows.
Usage
const crawler = new fdir().withPathSeparator("/");
withAbortSignal()
Use this to pass an AbortSignal to the crawler.
Usage
const controller = new AbortController();
const crawler = new fdir().withAbortSignal(controller.signal);
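For example, one way to put a rough time limit on a crawl (the 5-second timeout is purely illustrative):
const controller = new AbortController();
// abort the crawl if it takes longer than 5 seconds
const timer = setTimeout(() => controller.abort(), 5000);
const files = await new fdir()
  .withAbortSignal(controller.signal)
  .crawl("path/to/dir")
  .withPromise();
clearTimeout(timer);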
withErrors()
Use this if you want to handle all errors manually.
By default, fdir handles and suppresses all errors, including permission and non-existent directory errors.
Usage
const crawler = new fdir().withErrors();
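A sketch of manual error handling, assuming errors surface as a rejected promise when using the Promise API:
try {
  const files = await new fdir()
    .withErrors()
    .crawl("path/that/does/not/exist")
    .withPromise();
} catch (err) {
  // without withErrors() this error would have been suppressed
  console.error("crawl failed:", err);
}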
onlyCounts()
Return only the number of files and directories. Might be a little faster.
Usage
const crawler = new fdir().onlyCounts();
Output
Using this will affect the output structure. In place of a simple array of file paths you will get an object containing the counts of files and directories. For example:
const { files, dirs } = new fdir().onlyCounts().crawl("path/to/dir").sync();
onlyDirs()
Ignore all files and return only the directory paths. Might be a little faster.
Usage
const crawler = new fdir().onlyDirs();
normalize()
Normalize the given directory path using path.normalize.
Since path.normalize is not always needed and is quite resource intensive (relatively speaking), fdir includes a flag for it.
Usage
const crawler = new fdir().normalize();
group()
Group all files by directory.
This does not give a tree-like output.
Usage
const crawler = new fdir().group();
Output
Using this will affect the output structure. In place of a simple array of string file paths you will get an array of Group:
type Group = { dir: string; files: string[] };
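For instance, iterating over the grouped output:
const groups = new fdir()
  .group()
  .crawl("path/to/dir")
  .sync();
for (const { dir, files } of groups) {
  console.log(`${dir}: ${files.length} file(s)`);
}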
glob()
Applies a glob filter to all files and only adds those that satisfy it.
Uses picomatch underneath. To keep fdir dependency free, it is up to the user to install picomatch manually.
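For example, with npm:
$ npm install picomatch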
Usage
// only get js and md files
const crawler = new fdir().glob("./**/*.js", "./**/*.md");
globWithOptions()
The same as glob but allows you to pass options to the matcher.
Usage
// only get js and md files
const crawler = new fdir().globWithOptions(["**/*.js", "**/*.md"], {
strictSlashes: true
});
withGlobFunction()
Uses the specified glob function to match files against the provided glob pattern.
Usage
// using picomatch or a similar library
import picomatch from 'picomatch';
const crawler = new fdir().withGlobFunction(picomatch);
// using a custom function
const customGlob = (patterns: string | string[]) => {
return (test: string): boolean => test.endsWith('.js');
};
const crawler = new fdir().withGlobFunction(customGlob);
filter()
Applies a filter to all directories and files and only adds those that satisfy the filter.
Multiple filters are joined using AND.
The function receives two parameters: the first is the path of the item, and the second is a flag that indicates whether the item is a directory or not.
Usage
// only get hidden .js files (both filters must pass)
const crawler = new fdir()
.filter((path, isDirectory) => path.startsWith("."))
.filter((path, isDirectory) => path.endsWith(".js"));
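The isDirectory flag is useful when directories are included in the output; a sketch that keeps every directory but only .js files:
const crawler = new fdir()
  .withDirs()
  .filter((path, isDirectory) => isDirectory || path.endsWith(".js"));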
exclude()
Applies an exclusion filter to all directories and only crawls those that do not satisfy the condition. Useful for speeding up crawling if you know you can ignore some directories.
The function receives two parameters: the first is the name of the directory, and the second is the path to it.
Currently, you can apply only one exclusion filter per crawler. This might change.
Usage
// do not crawl into hidden directories
const crawler = new fdir().exclude((dirName, dirPath) =>
dirName.startsWith(".")
);
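Another common use is skipping dependency folders entirely:
// do not descend into node_modules
const crawler = new fdir().exclude((dirName, dirPath) => dirName === "node_modules");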
crawl()
Prepare the crawler. This should be called at the end, after all the configuration has been done.
Parameters
dirPath: string - The path of the directory to start crawling from.
Returns
APIBuilder
Usage
const crawler = new fdir().withBasePath().crawl("path/to/dir");
fdir currently includes 3 APIs (i.e. 3 ways of crawling a directory).
- Asynchronous with Promise
- Asynchronous with callback
- Synchronous
Stream API will be added soon.
withPromise()
Crawl the directory asynchronously using a Promise.
Usage
const files = await new fdir()
.withBasePath()
.withDirs()
.crawl("/path/to/dir")
.withPromise();
withCallback()
Crawl the directory asynchronously using a callback.
Usage
new fdir()
.withBasePath()
.withDirs()
.crawl("/path/to/dir")
.withCallback((files) => {
// do something with files here
});
sync()
Crawl the directory synchronously.
Note about performance: Sync performance is much, much slower than async performance. Only use this if absolutely necessary.
Usage
const files = new fdir()
.withBasePath()
.withDirs()
.crawl("/path/to/dir")
.sync();
crawlWithOptions()
Some people have raised issues saying that method chaining is not a recommended and/or good practice, so I have added this as an alternative.
It is now possible to pass an Options object to crawlWithOptions:
new fdir()
.crawlWithOptions("path/to/dir", {
includeBasePath: true,
})
.sync();
List of supported options:
type Options = {
includeBasePath?: boolean;
includeDirs?: boolean;
normalizePath?: boolean;
maxDepth?: number;
maxFiles?: number;
resolvePaths?: boolean;
suppressErrors?: boolean;
group?: boolean;
onlyCounts?: boolean;
filters: FilterFn[];
resolveSymlinks?: boolean;
useRealPaths?: boolean;
excludeFiles?: boolean;
excludeSymlinks?: boolean;
exclude?: ExcludeFn;
relativePaths?: boolean;
pathSeparator: PathSeparator;
signal?: AbortSignal;
globFunction?: Function;
};
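The options correspond to the builder methods described above (e.g. includeBasePath to withBasePath, maxDepth to withMaxDepth). A sketch combining a few of them:
const files = await new fdir()
  .crawlWithOptions("path/to/dir", {
    includeBasePath: true,
    maxDepth: 3,
    exclude: (dirName, dirPath) => dirName.startsWith("."),
  })
  .withPromise();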