A Ruby gem for tokenizing, parsing, and transforming regular expressions.
- Multilayered
- A scanner/tokenizer based on Ragel
- A lexer that produces a "stream" of Token objects
- A parser that produces a "tree" of Expression objects (OO API)
- Runs on Ruby 2.x, 3.x and JRuby runtimes
- Recognizes Ruby 1.8, 1.9, 2.x, and 3.x regular expressions (see Supported Syntax)
For examples of regexp_parser in use, see Example Projects.
- Ruby >= 2.0
- Ragel >= 6.0, but only if you want to build the gem or work on the scanner.
Install the gem with:
gem install regexp_parser
Or, add it to your project's Gemfile:
gem 'regexp_parser', '~> X.Y.Z'
See the badge at the top of this README or rubygems for the latest version number.
The three main modules are Scanner, Lexer, and Parser. Each of them provides a single method that takes a regular expression (as a Regexp object or a string) and returns its results. The Lexer and the Parser accept an optional second argument that specifies the syntax version, like 'ruby/2.0', which defaults to the host Ruby version (using RUBY_VERSION).
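As a rough illustration of that default (a sketch of the documented fallback behavior, not the gem's internal code), a 'ruby/X.Y' syntax name can be derived from the host interpreter's version like this:

```ruby
# Build a syntax version name of the form the Lexer and Parser accept,
# e.g. "ruby/3.2", from RUBY_VERSION (e.g. "3.2.1").
# Illustrative only: the gem resolves the default internally.
major, minor = RUBY_VERSION.split('.').first(2)
default_syntax = "ruby/#{major}.#{minor}"

puts default_syntax # e.g. "ruby/3.2" on Ruby 3.2.x
```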
Here are the basic usage examples:
require 'regexp_parser'
Regexp::Scanner.scan(regexp)
Regexp::Lexer.lex(regexp)
Regexp::Parser.parse(regexp)
All three methods accept a block as the last argument, which, if given, gets called with the results as follows:
- Scanner: the block gets passed the results as they are scanned. See the example in the next section for details.
- Lexer: the block gets passed the tokens one by one as they are scanned. The result of the block is returned.
- Parser: after completion, the block gets passed the root expression. The result of the block is returned.
All three methods accept either a Regexp or a String (containing the pattern). If a String is passed, options can be supplied:
require 'regexp_parser'
Regexp::Parser.parse(
"a+ # Recognizes a and A...",
options: ::Regexp::EXTENDED | ::Regexp::IGNORECASE
)
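The options value is the same integer bitmask that core Ruby's Regexp.new accepts, so the flags can be combined with |. A small core-Ruby illustration (no gem required):

```ruby
# Regexp option flags are plain integers that can be OR-ed together.
opts = Regexp::EXTENDED | Regexp::IGNORECASE

# Building a Regexp with these flags has the same effect as /.../xi:
re = Regexp.new("a+ # Recognizes a and A...", opts)

puts re.options       # => 3 (EXTENDED (2) | IGNORECASE (1))
puts re.match?("AAA") # => true (ignorecase; x-mode ignores the comment)
```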
A Ragel-generated scanner that recognizes the cumulative syntax of all supported syntax versions. It breaks a given expression's text into the smallest parts, and identifies their type, token, text, and start/end offsets within the pattern.
The following scans the given pattern and prints out the type, token, text and start/end offsets for each token found.
require 'regexp_parser'
Regexp::Scanner.scan(/(ab?(cd)*[e-h]+)/) do |type, token, text, ts, te|
puts "type: #{type}, token: #{token}, text: '#{text}' [#{ts}..#{te}]"
end
# output
# type: group, token: capture, text: '(' [0..1]
# type: literal, token: literal, text: 'ab' [1..3]
# type: quantifier, token: zero_or_one, text: '?' [3..4]
# type: group, token: capture, text: '(' [4..5]
# type: literal, token: literal, text: 'cd' [5..7]
# type: group, token: close, text: ')' [7..8]
# type: quantifier, token: zero_or_more, text: '*' [8..9]
# type: set, token: open, text: '[' [9..10]
# type: set, token: range, text: 'e-h' [10..13]
# type: set, token: close, text: ']' [13..14]
# type: quantifier, token: one_or_more, text: '+' [14..15]
# type: group, token: close, text: ')' [15..16]
A one-liner that uses map on the result of the scan to return the textual parts of the pattern:
Regexp::Scanner.scan(/(cat?([bhm]at)){3,5}/).map { |token| token[2] }
# => ["(", "cat", "?", "(", "[", "b", "h", "m", "]", "at", ")", ")", "{3,5}"]
- The scanner performs basic syntax error checking, like detecting missing balancing punctuation and premature end of pattern. Flavor validity checks are performed in the lexer, which uses a syntax object.
- If the input is a Ruby Regexp object, the scanner calls #source on it to get its string representation. #source does not include the options of the expression (m, i, and x). To include the options in the scan, #to_s should be called on the Regexp before passing it to the scanner or the lexer. For the parser, however, this is not necessary. It automatically exposes the options of a passed Regexp in the returned root expression.
- To keep the scanner simple(r) and fairly reusable for other purposes, it does not perform lexical analysis on the tokens, sticking to the task of identifying the smallest possible tokens and leaving lexical analysis to the lexer.
- The MRI implementation may accept expressions that either conflict with the documentation or are undocumented, like `{}` and `]` (unescaped). The scanner will try to support as many of these cases as possible.
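The #source vs. #to_s distinction noted above can be observed with core Ruby alone:

```ruby
re = /ab?c/mi

# #source returns just the pattern text, without the expression's options:
puts re.source # => "ab?c"

# #to_s wraps the pattern in an options group, preserving m, i, and x:
puts re.to_s   # => "(?mi-x:ab?c)"
```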
Defines the supported tokens for a specific engine implementation (aka a flavor). Syntax classes act as lookup tables, and are layered to create flavor variations. Syntax only comes into play in the lexer.
The following fetches syntax objects for Ruby 2.0, 1.9, 1.8, and checks a few of their implementation features.
require 'regexp_parser'
ruby_20 = Regexp::Syntax.for 'ruby/2.0'
ruby_20.implements? :quantifier, :zero_or_one # => true
ruby_20.implements? :quantifier, :zero_or_one_reluctant # => true
ruby_20.implements? :quantifier, :zero_or_one_possessive # => true
ruby_20.implements? :conditional, :condition # => true
ruby_19 = Regexp::Syntax.for 'ruby/1.9'
ruby_19.implements? :quantifier, :zero_or_one # => true
ruby_19.implements? :quantifier, :zero_or_one_reluctant # => true
ruby_19.implements? :quantifier, :zero_or_one_possessive # => true
ruby_19.implements? :conditional, :condition # => false
ruby_18 = Regexp::Syntax.for 'ruby/1.8'
ruby_18.implements? :quantifier, :zero_or_one # => true
ruby_18.implements? :quantifier, :zero_or_one_reluctant # => true
ruby_18.implements? :quantifier, :zero_or_one_possessive # => false
ruby_18.implements? :conditional, :condition # => false
Syntax objects can also be queried about their complete and relative feature sets.
require 'regexp_parser'
ruby_20 = Regexp::Syntax.for 'ruby/2.0' # => Regexp::Syntax::V2_0_0
ruby_20.added_features # => { conditional: [...], ... }
ruby_20.removed_features # => { property: [:newline], ... }
ruby_20.features # => { anchor: [...], ... }
- Variations on a token, for example a named group with angle brackets (< and >) vs one with a pair of single quotes, are specified with an underscore followed by two characters appended to the base token. In the previous named group example, the tokens would be :named_ab (angle brackets) and :named_sq (single quotes). These variations are normalized by the syntax to :named.
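As a rough illustration (hypothetical code, not the gem's actual lookup tables), such a suffix-based normalization could be sketched as:

```ruby
# Illustrative sketch only: strip a two-character variation suffix
# (_ab for angle brackets, _sq for single quotes) from a token name.
# The gem performs this normalization internally via its syntax classes.
def normalize_token(token)
  token.to_s.sub(/_(?:ab|sq)\z/, '').to_sym
end

puts normalize_token(:named_ab) # => :named
puts normalize_token(:named_sq) # => :named
```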
Sits on top of the scanner and performs lexical analysis on the tokens that it emits. Among its tasks are: breaking quantified literal runs, collecting the emitted token attributes into Token objects, calculating their nesting depth, normalizing tokens for the parser, and checking if the tokens are implemented by the given syntax version.
See the Token Objects wiki page for more information on Token objects.
The following example lexes the given pattern, checks it against the Ruby 1.9 syntax, and prints the token objects' text indented to their level.
require 'regexp_parser'
Regexp::Lexer.lex(/a?(b(c))*[d]+/, 'ruby/1.9') do |token|
puts "#{' ' * token.level}#{token.text}"
end
# output
# a
# ?
# (
# b
# (
# c
# )
# )
# *
# [
# d
# ]
# +
A one-liner that returns an array of the textual parts of the given pattern. Compare the output with that of the one-liner example of the Scanner; notably how the sequence 'cat' is treated. The 't' is separated because it's followed by a quantifier that only applies to it.
Regexp::Lexer.scan(/(cat?([b]at)){3,5}/).map { |token| token.text }
# => ["(", "ca", "t", "?", "(", "[", "b", "]", "at", ")", ")", "{3,5}"]
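The reason for that split can be confirmed with core Ruby alone: a trailing quantifier binds only to the last character of a literal run, not to the whole run:

```ruby
# In /cat?/ the ? applies only to the final 't', which is why the lexer
# splits the literal run "cat" into "ca" + "t" before the quantifier.
puts "ca".match?(/\Acat?\z/)  # => true ('t' is optional)
puts "cat".match?(/\Acat?\z/) # => true
puts "c".match?(/\Acat?\z/)   # => false ('ca' is required)
```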
- The syntax argument is optional. It defaults to the version of the Ruby interpreter in use, as returned by RUBY_VERSION.
- The lexer normalizes some tokens, as noted in the Syntax section above.
Sits on top of the lexer and transforms the "stream" of Token objects emitted by it into a tree of Expression objects represented by an instance of the Expression::Root class.
See the Expression Objects wiki page for attributes and methods.
This example uses the tree traversal method #each_expression and the method #strfregexp to print each object in the tree. It parses the same pattern as the #traverse example further down, which produces the output shown:
require 'regexp_parser'
tree = Regexp::Parser.parse(/a?(b+(c)d)*(?<name>[0-9]+)/)
include_root = true
indent_offset = include_root ? 1 : 0
tree.each_expression(include_root) do |exp|
  puts exp.strfregexp("%>> %c", indent_offset)
end
# Output
# > Regexp::Expression::Root
# > Regexp::Expression::Literal
# > Regexp::Expression::Group::Capture
# > Regexp::Expression::Literal
# > Regexp::Expression::Group::Capture
# > Regexp::Expression::Literal
# > Regexp::Expression::Literal
# > Regexp::Expression::Group::Named
# > Regexp::Expression::CharacterSet
Note: quantifiers do not appear in the output because they are members of the Expression class. See the next section for details.
Another example, using #traverse for a more fine-grained tree traversal:
require 'regexp_parser'
regex = /a?(b+(c)d)*(?<name>[0-9]+)/
tree = Regexp::Parser.parse(regex, 'ruby/2.1')
tree.traverse do |event, exp|
puts "#{event}: #{exp.type} `#{exp.to_s}`"
end
# Output
# visit: literal `a?`
# enter: group `(b+(c)d)*`
# visit: literal `b+`
# enter: group `(c)`
# visit: literal `c`
# exit: group `(c)`
# visit: literal `d`
# exit: group `(b+(c)d)*`
# enter: group `(?<name>[0-9]+)`
# visit: set `[0-9]+`
# exit: group `(?<name>[0-9]+)`
See the traverse.rb and strfregexp.rb files under lib/regexp_parser/expression/methods for more information on these methods.
The three modules support all the regular expression syntax features of Ruby 1.8, 1.9, 2.x, and 3.x. Note that not all of these are available in all versions of Ruby.
| Syntax Feature | Examples | ⋯ |
|---|---|---|
| Alternation | `a\|b\|c` | ✓ |
| Anchors | `\A`, `^`, `\b` | ✓ |
| Character Classes | `[abc]`, `[^\\]`, `[a-d&&aeiou]`, `[a=e=b]` | ✓ |
| Character Types | `\d`, `\H`, `\s` | ✓ |
| Cluster Types | `\R`, `\X` | ✓ |
| Conditional Exps. | `(?(cond)yes-subexp)`, `(?(cond)yes-subexp\|no-subexp)` | ✓ |
| Escape Sequences | `\t`, `\\+`, `\?` | ✓ |
| Free Space | whitespace and `# Comments` (`x` modifier) | ✓ |
| Grouped Exps. | ⋱ | |
| &emsp;Assertions | ⋱ | |
| &emsp;&emsp;Lookahead | `(?=abc)` | ✓ |
| &emsp;&emsp;Negative Lookahead | `(?!abc)` | ✓ |
| &emsp;&emsp;Lookbehind | `(?<=abc)` | ✓ |
| &emsp;&emsp;Negative Lookbehind | `(?<!abc)` | ✓ |
| &emsp;Atomic | `(?>abc)` | ✓ |
| &emsp;Absence | `(?~abc)` | ✓ |
| &emsp;Back-references | ⋱ | |
| &emsp;&emsp;Named | `\k<name>` | ✓ |
| &emsp;&emsp;Nest Level | `\k<n-1>` | ✓ |
| &emsp;&emsp;Numbered | `\k<1>` | ✓ |
| &emsp;&emsp;Relative | `\k<-2>` | ✓ |
| &emsp;&emsp;Traditional | `\1` through `\9` | ✓ |
| &emsp;Capturing | `(abc)` | ✓ |
| &emsp;Comments | `(?# comment text)` | ✓ |
| &emsp;Named | `(?<name>abc)`, `(?'name'abc)` | ✓ |
| &emsp;Options | `(?mi-x:abc)`, `(?a:\s\w+)`, `(?i)` | ✓ |
| &emsp;Passive | `(?:abc)` | ✓ |
| &emsp;Subexp. Calls | `\g<name>`, `\g<1>` | ✓ |
| Keep | `\K`, `(ab\Kc\|d\Ke)f` | ✓ |
| Literals (utf-8) | `Ruby`, `ルビー`, `روبي` | ✓ |
| POSIX Classes | `[:alpha:]`, `[:^digit:]` | ✓ |
| Quantifiers | ⋱ | |
| &emsp;Greedy | `?`, `*`, `+`, `{m,M}` | ✓ |
| &emsp;Reluctant (Lazy) | `??`, `*?`, `+?` [1] | ✓ |
| &emsp;Possessive | `?+`, `*+`, `++` [1] | ✓ |
| String Escapes | ⋱ | |
| &emsp;Control [2] | `\C-C`, `\cD` | ✓ |
| &emsp;Hex | `\x20`, `\x{701230}` | ✓ |
| &emsp;Meta [2] | `\M-c`, `\M-\C-C`, `\M-\cC`, `\C-\M-C`, `\c\M-C` | ✓ |
| &emsp;Octal | `\0`, `\01`, `\012` | ✓ |
| &emsp;Unicode | `\uHHHH`, `\u{H+ H+}` | ✓ |
| Unicode Properties | (Unicode 15.0.0) | ⋱ |
| &emsp;Age | `\p{Age=5.2}`, `\P{age=7.0}`, `\p{^age=8.0}` | ✓ |
| &emsp;Blocks | `\p{InArmenian}`, `\P{InKhmer}`, `\p{^InThai}` | ✓ |
| &emsp;Classes | `\p{Alpha}`, `\P{Space}`, `\p{^Alnum}` | ✓ |
| &emsp;Derived | `\p{Math}`, `\P{Lowercase}`, `\p{^Cased}` | ✓ |
| &emsp;General Categories | `\p{Lu}`, `\P{Cs}`, `\p{^sc}` | ✓ |
| &emsp;Scripts | `\p{Arabic}`, `\P{Hiragana}`, `\p{^Greek}` | ✓ |
| &emsp;Simple | `\p{Dash}`, `\p{Extender}`, `\p{^Hyphen}` | ✓ |
[1]: Ruby does not support lazy or possessive interval quantifiers. Any `+` or `?` that follows an interval quantifier will be treated as another, chained quantifier. See also #3, #69.
[2]: As of Ruby 3.1, meta and control sequences are pre-processed to hex escapes when used in Regexp literals, so they will only reach the scanner and will only be emitted if a String or a Regexp that has been built with the `::new` constructor is scanned.
Some Regexp options are not relevant to parsing. The option `o` modifies how Ruby deduplicates the Regexp object and does not appear in its source or options. Other such modifiers include the encoding modifiers `e`, `n`, `s`, and `u`. These are not seen by the scanner.
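This can be observed with core Ruby: the `o` flag leaves no trace in the expression's option bits, source, or string representation:

```ruby
once = /abc/o

# The o flag is not an option bit and does not survive into #source or #to_s:
puts once.options # => 0
puts once.source  # => "abc"
puts once.to_s    # => "(?-mix:abc)"
```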
The following features are not currently enabled for Ruby by its regular expression engine (Onigmo), so they are not supported by the scanner.
See something missing? Please submit an issue.
Note: Attempting to process expressions with unsupported syntax features can raise an error, or incorrectly return tokens/objects as literals.
To run the tests, run rake from the root directory. The default task generates the scanner's code from the Ragel source files and runs all the specs, so it requires Ragel to be installed.
Note that changes to Ragel files will not be reflected when running rspec on its own, so to run individual tests you might want to run:
rake ragel:rb && rspec spec/scanner/properties_spec.rb
Building the scanner and the gem requires Ragel to be installed. The build tasks will automatically invoke the 'ragel:rb' task to generate the Ruby scanner code.
The project uses the standard rubygems package tasks, so:
To build the gem, run:
rake build
To install the gem from the cloned project, run:
rake install
Projects using regexp_parser.
- capybara is an integration testing tool that uses regexp_parser to convert Regexps to css/xpath selectors.
- js_regex converts Ruby regular expressions to JavaScript-compatible regular expressions.
- meta_re is a regular expression preprocessor with alias support.
- mutant manipulates your regular expressions (amongst others) to see if your tests cover their behavior.
- repper is a regular expression pretty-printer and formatter for Ruby.
- rubocop is a linter for Ruby that uses regexp_parser to lint Regexps.
- twitter-cldr-rb is a localization helper that uses regexp_parser to generate examples of postal codes.
Documentation and books used while working on this project.
- Mastering Regular Expressions, By Jeffrey E.F. Friedl (2nd Edition) book
- Regular Expression Flavor Comparison link
- Enumerating the strings of regular languages link
- Stack Overflow Regular Expressions FAQ link
- Unicode Explained, By Jukka K. Korpela. book
- Unicode Derived Properties link
- Unicode Property Aliases link
- Unicode Regular Expressions link
- Unicode Standard Annex #44 link
Copyright (c) 2010-2024 Ammar Ali. See LICENSE file for details.