DSL evaluation for Natural - please take a look #98

Open
dzw1999 opened this issue May 5, 2024 · 0 comments
Hello, greetings from the software engineering group at Beihang University, China. We have been working on the evaluation of Domain-Specific Modeling Languages (DSMLs); the main purpose is to find flaws in the design of a DSML. To test our evaluation framework, we evaluated your software design, mainly based on your Ecore and Xtext files, and found that even though the overall quality of your design is very good, there are still some minor problems. The details are as follows:

LLM-based evaluation

We performed some LLM-based evaluation, focusing mainly on the model part. Generally speaking, we have the LLM act as a domain expert and provide a multidimensional evaluation of your language design.

Model Completeness

We gave your Ecore design to an LLM and asked it to guess which domain your language is designed for. We then asked the LLM to add possible missing elements to your language (perhaps not taken into account in the first version of the design, but worth considering for inclusion in future versions). The results are as follows:

These metamodels seem to belong to the field of software engineering, particularly testing frameworks related to Behavior-Driven Development (BDD). In the BDD method, a specific syntax is commonly used to describe the expected behavior of software, such as the Gherkin syntax, whose elements and relationships are strongly similar to those here.

To cover this field more comprehensively, the following key metamodel elements and their relationships may need to be added:

Elements:

Parameter: name; value;
Hook: type; order;
Comment: text;

Relationships:

ScenarioOutline (1) <-parameters-> Parameter (n)
Step (1) <-parameters-> Parameter (n)
Feature (1) <-hooks-> Hook (n)
Step (1) <-comment-> Comment (1)
Scenario (1) <-hooks-> Hook (n)
AbstractScenario (1) <-comments-> Comment (n)

These supplements ensure that various aspects of the BDD framework are taken into account, such as parameterized testing (via Parameter in ScenarioOutline), hooks (which allow code to be executed before and after a unit such as a Scenario or Step), and comments (which attach additional information to specific parts).
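To make the suggestion concrete, here is a minimal, hypothetical Gherkin sketch (the feature, steps, and tag names are invented for illustration) showing where the proposed Parameter, Hook, and Comment elements would surface in a feature file:

```gherkin
# Comment: attached to the scenario outline below
@smoke
Feature: Withdraw cash

  # A Before/After Hook registered for the @smoke tag would run around this scenario
  Scenario Outline: Withdraw within balance
    Given an account with balance <balance>   # <balance> is a Parameter of the Step
    When I withdraw <amount>
    Then the remaining balance is <remaining>

    Examples:
      | balance | amount | remaining |
      | 100     | 40     | 60        |
```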

We are not sure whether these points actually constitute a problem; the decision to fix them or not is up to your team. Looking forward to your response, thanks!
