Which protein regulates p65 (UniProt:Q04206), and how? Show me the provenance!
The BioNLP-ST GE task has been promoting the development of fine-grained information extraction (IE) from biomedical documents since 2009. In particular, it has focused on NFkB as a model domain of biomedical IE.
In this year's fourth edition, the focus moves from IE to KB construction as an application of IE: the goal of IE is no longer IE itself, but to make IE useful for KB construction. Ultimately, GE task 2016 aims at delivering a KB on NFkB to the public.
A preliminary version of an NFkB KB has been prepared at http://bionlp.dbcls.jp/sparql/. It is a SPARQL endpoint populated with information pieces extracted from the benchmark data set of GE task 2016. The goal of GE task 2016 is to enrich it, using knowledge pieces extracted from any other resources.
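As a sketch of how such a KB might answer the opening question, the snippet below builds a SPARQL query for regulators of p65 with provenance and shows how it could be POSTed to the endpoint. The predicate names (hasTheme, hasCause, derivedFrom) and the ex: vocabulary are illustrative placeholders, since the endpoint's actual schema is not described here.

```python
# Hypothetical sketch: query the NFkB KB endpoint for proteins that
# regulate p65, together with the source document of each statement.
# The ex: predicates are placeholders, not the endpoint's real vocabulary.
import urllib.parse
import urllib.request

ENDPOINT = "http://bionlp.dbcls.jp/sparql/"

def build_regulator_query(uniprot_id):
    """Build a SPARQL query asking which proteins regulate the given
    protein, and from which document each statement was extracted."""
    return (
        "PREFIX up: <http://purl.uniprot.org/uniprot/>\n"
        "PREFIX ex: <http://example.org/vocab/>  # placeholder vocabulary\n"
        "SELECT ?regulator ?event ?source WHERE {\n"
        "  ?event ex:hasTheme up:%s ;\n"
        "         ex:hasCause ?regulator ;\n"
        "         ex:derivedFrom ?source .  # provenance: source document\n"
        "}" % uniprot_id
    )

def run_query(query):
    """POST the query to the SPARQL endpoint (requires network access)."""
    data = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    ).encode()
    with urllib.request.urlopen(ENDPOINT, data=data) as resp:
        return resp.read()

print(build_regulator_query("Q04206"))  # Q04206 = p65
```

Once the endpoint's actual vocabulary is known, only the PREFIX line and the three predicates would need to change.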
Note that GE task 2016 is organized with more emphasis on "contribution" than on "competition".
Participants can contribute to any aspect of knowledge extraction: event extraction, static relation extraction, coreference resolution, and so on.
Contributions will be evaluated in terms of how many new knowledge pieces each participating system can contribute. For example, a participant may address coreference resolution, use a publicly available event extraction system, e.g. TEE, as a baseline, and in the end report how many new knowledge pieces can be extracted by adding the coreference resolution system to TEE.
We call this mode of evaluation a "KB evaluation", to distinguish it from the "IE evaluation" performed as the official evaluation in previous editions of the GE task. In short, while an IE evaluation counts correctly predicted pieces of annotation, a KB evaluation counts the number of correct answers to pre-defined questions.
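The contrast between the two evaluation modes can be illustrated with toy data; the scoring details of the official evaluation may of course differ from this minimal sketch.

```python
# Toy illustration of IE evaluation vs. KB evaluation (invented data).

# IE evaluation: count correctly predicted annotation pieces
# (here, events as (type, cause, theme) triples).
gold_events = {("Positive_regulation", "TNF", "p65"), ("Binding", "p50", "p65")}
pred_events = {("Positive_regulation", "TNF", "p65"), ("Binding", "p50", "IkB")}
ie_correct = len(gold_events & pred_events)

# KB evaluation: count correct answers to pre-defined questions.
questions = {
    "regulators of p65": {"TNF"},
    "binding partners of p65": {"p50"},
}
kb_answers = {
    "regulators of p65": {"TNF"},
    "binding partners of p65": set(),
}
kb_correct = sum(
    len(questions[q] & kb_answers.get(q, set())) for q in questions
)

print(ie_correct, kb_correct)  # 1 correct event, 1 correct answer
```

Note that the two counts need not agree: a system can predict many correct annotation pieces yet answer few questions, or vice versa.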
For a more detailed account of the "KB evaluation", readers are referred to the paper:
Jin-Dong Kim, Jung-jae Kim, Xu Han and Dietrich Rebholz-Schuhmann, Extending the evaluation of Genia Event task toward knowledge base construction and comparison to Gene Regulation Ontology task, BMC Bioinformatics, BioMed Central, 16(Suppl 10):S3, 2015.
For more details of the task, please visit the task homepage.
Benchmark reference data set: http://pubannotation.org/projects/bionlp-st-ge-2016-reference
Benchmark test data set: http://pubannotation.org/projects/bionlp-st-ge-2016-test
The test data and evaluation service have been released.
Using the online evaluation system, participants can immediately obtain an evaluation of their submissions.
In the end, the final results and evaluation scores need to be submitted together with a paper for the shared task workshop.
We are sorry for the change of schedule.
Considering the current state of resource preparation and the remaining time, we have determined that the best solution is to deliver all of the participants' efforts to the community.
We hope this is acceptable to you, and thank you very much for your cooperation.