[Feature][Core] Implement a generalised evaluation script #89
Current schema for evaluation: in the template.json, we include a Solution section with the data for each question (`[qNo]: [[Answers, mark1], [Answers, mark2], ['DEFAULT', Default_marks]]`) as a general set of rules, with more details as given below.
Schema flexibility:
Note: the order of concatenations is governed by the "vals" array from template.json.
Edit: many changes in this schema. Follow the evaluation_schema.py file for the latest schema.
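The per-question rules above could be applied roughly as follows. This is only a sketch of the idea, assuming the rule shape from the comment; the function name and types are hypothetical, not the project's actual API (see evaluation_schema.py for the real schema):

```python
def evaluate_question(marked: str, rules: list) -> float:
    """Return the marks for one question.

    `rules` follows the shape described above:
    [[answers, mark1], [answers, mark2], ..., ['DEFAULT', default_marks]]
    where `answers` is a string or list of accepted responses.
    """
    default = 0.0
    for answers, marks in rules:
        if answers == 'DEFAULT':
            default = marks          # fallback when nothing matches
        elif marked in answers:
            return marks             # first matching rule wins
    return default

# Example: +4 for 'A', -1 for anything else.
rules = [['A', 4], ['DEFAULT', -1]]
```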
@Rohan-G are you available to work on this further?
Yep I can continue working on this now. |
Note: there are many changes in the above schema. I'll be populating many examples in the existing samples as a demonstration |
Is your feature request related to a problem? Please describe.
We need support for a customisable marking scheme with negative marking, section-wise marking, bonus questions, etc. Separate answer keys may exist per test booklet code. We can also support custom logic via eval scripts for handling any other cases.
Describe the solution you'd like
A basic evaluation scheme can be stored in the template.json file.
Criteria for Answer Schema:
- It should allow the simplest possible way to grade the +1/-0 case
- The +4/-1 case should also take minimal effort
- In all other cases, users should only need to modify 3-4 keys
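To illustrate the three criteria, the per-question rules could scale in complexity like this. These dicts are hypothetical examples of what a "Solution" section might contain, not the final schema:

```python
# Simplest case: +1 for the correct answer, 0 otherwise.
simple = {"q1": [["A", 1], ["DEFAULT", 0]]}

# +4/-1 case: only the two mark values change.
negative = {"q1": [["A", 4], ["DEFAULT", -1]]}

# Custom case: multiple accepted answers with different weights.
custom = {"q1": [["A", 4], ["B", 2], ["DEFAULT", -1]]}
```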
Describe alternatives you've considered
Right now we can solve this by applying Excel formulae (link to sample sheet) over the output results file. This works fine for simple cases, but it still requires knowledge of writing Excel formulae.
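The same post-processing could be scripted in Python instead of Excel. A minimal sketch, assuming a CSV layout with one column per question (the column names and key are made up for illustration; in practice the file would be opened from disk):

```python
import csv
import io

# Hypothetical +4/-1 answer key.
answer_key = {"q1": "A", "q2": "C"}

def score_row(row: dict) -> int:
    """Total a single respondent's marks under the +4/-1 scheme."""
    total = 0
    for q, correct in answer_key.items():
        total += 4 if row.get(q) == correct else -1
    return total

# Inlined sample of the output results file for demonstration.
results = io.StringIO("roll,q1,q2\n101,A,C\n102,A,B\n")
scores = {row["roll"]: score_row(row) for row in csv.DictReader(results)}
```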
Additional context
NA