Dear Author, I have a question that confuses me: in the original PURE paper, the Rel and Rel+ scores on the SciERC dataset are 50.1 and 36.8 respectively. What metric are you using in GPT-RE that reaches 68.45? Thanks!
@zeal2000 Basically, GPT-RE tackles "Relation Classification" rather than "extraction"; the task is defined in Section 2.1. The PURE and PL-Marker papers address end-to-end relation extraction, which has two steps: 1) extracting the entities and 2) classifying the potential relations between entity pairs.
The input in GPT-RE is the sentence plus gold entity information, so the numbers are not directly comparable to PURE's Rel/Rel+ scores.
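To make the difference concrete, here is a minimal toy sketch (not the code of either paper; all function names and the rule-based "models" are hypothetical) showing what each setting takes as input and is scored on:

```python
def classify_relation(sentence: str, head: str, tail: str) -> str:
    """Relation Classification (the GPT-RE setting): gold entities are
    part of the input; the model only predicts the label.
    A toy keyword rule stands in for a real model."""
    if "used for" in sentence.lower():
        return "USED-FOR"
    return "NO-RELATION"

def predict_entities(sentence: str) -> list[str]:
    """Step 1 of end-to-end RE: entity spans must be predicted.
    Toy NER: treat capitalized tokens as entities."""
    return [tok for tok in sentence.replace(".", "").split() if tok[0].isupper()]

def end_to_end(sentence: str) -> list[tuple[str, str, str]]:
    """Steps 1+2 (the PURE / PL-Marker setting): relation F1 is computed
    over *predicted* entity pairs, so NER mistakes lower Rel/Rel+
    even when the relation classifier itself is perfect."""
    ents = predict_entities(sentence)
    triples = []
    for h in ents:
        for t in ents:
            if h != t:
                label = classify_relation(sentence, h, t)
                if label != "NO-RELATION":
                    triples.append((h, t, label))
    return triples

sent = "BERT is used for NER."
print(classify_relation(sent, "BERT", "NER"))  # gold entities given as input
print(end_to_end(sent))                        # entities must be found first
```

Since the classification setting never pays for entity-recognition errors, its scores are naturally higher than end-to-end Rel/Rel+ on the same dataset.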