Evaluation code for various unsupervised automated metrics for Natural Language Generation.
The nlg-eval tool evaluates Natural Language Generation (NLG) output using unsupervised automated metrics. Users supply a hypothesis file and one or more reference files, and the tool computes a score for each metric. Supported metrics include BLEU, METEOR, ROUGE, CIDEr, SPICE, SkipThought cosine similarity, Embedding Average cosine similarity, Vector Extrema cosine similarity, and Greedy Matching score.
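To make the embedding-based metrics concrete, here is a minimal sketch of Embedding Average cosine similarity: each sentence is represented by the element-wise mean of its word vectors, and the score is the cosine of the angle between the two averages. The toy embedding table below is purely illustrative; a real setup would use pretrained vectors such as GloVe or word2vec.

```python
import math

# Toy 2-dimensional word embeddings (illustrative only).
EMB = {
    "the": [0.1, 0.3], "cat": [0.9, 0.2], "sat": [0.4, 0.7],
    "dog": [0.8, 0.3], "ran": [0.5, 0.6],
}

def avg_embedding(tokens):
    """Element-wise mean of the word vectors in a sentence."""
    dim = len(next(iter(EMB.values())))
    vecs = [EMB[t] for t in tokens if t in EMB]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

hyp = "the cat sat".split()
ref = "the dog ran".split()
score = cosine(avg_embedding(hyp), avg_embedding(ref))
```

Vector Extrema works the same way except that, instead of the mean, each dimension takes the most extreme value across the sentence's word vectors.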
Metrics can be computed either through the Python API or from the command line. Setup requires a specific environment and a set of dependencies, detailed in the installation instructions, to ensure all metrics function properly. Once installed, nlg-eval makes it straightforward to score NLG models and compare their performance across the supported metrics.
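The general evaluation flow, independent of any particular metric, is: read the hypothesis and reference files line by line (line i of every file refers to the same example), score each hypothesis against its references, taking the best score across reference files, and average over the corpus. The sketch below illustrates this loop with a stand-in unigram-F1 metric; the function names and data layout here are illustrative, not nlg-eval's actual API.

```python
def token_f1(hyp, ref):
    """Illustrative stand-in for a real metric: unigram overlap F1."""
    h, r = hyp.split(), ref.split()
    common = sum(min(h.count(w), r.count(w)) for w in set(h))
    if not common:
        return 0.0
    p, rec = common / len(h), common / len(r)
    return 2 * p * rec / (p + rec)

def evaluate(hyp_lines, ref_files, metrics):
    """Average each metric over aligned hypothesis/reference lines,
    taking the best score across reference files for each line."""
    results = {}
    for name, fn in metrics.items():
        scores = [
            max(fn(hyp, refs[i]) for refs in ref_files)
            for i, hyp in enumerate(hyp_lines)
        ]
        results[name] = sum(scores) / len(scores)
    return results

hyps = ["the cat sat on the mat"]
# Two reference files, each aligned line by line with the hypothesis file.
refs = [["a cat sat on the mat"], ["the cat is on the mat"]]
result = evaluate(hyps, refs, {"token_f1": token_f1})
```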
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is an evaluation metric originally developed for machine translation. It aligns hypothesis and reference unigrams using exact, stemmed, and synonym matches, then combines unigram precision and recall, weighted toward recall, with a fragmentation penalty that rewards hypotheses whose matched words appear in long, correctly ordered chunks.
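A simplified sketch of the METEOR score, using exact unigram matches only (real METEOR also matches stems and WordNet synonyms, and uses a more sophisticated alignment than the greedy one below):

```python
def meteor_exact(hyp_tokens, ref_tokens):
    """Simplified METEOR with exact-match alignment only."""
    # Greedy left-to-right alignment of hypothesis words to reference words.
    used = [False] * len(ref_tokens)
    align = []  # (hyp_pos, ref_pos) pairs in hypothesis order
    for i, w in enumerate(hyp_tokens):
        for j, r in enumerate(ref_tokens):
            if not used[j] and w == r:
                used[j] = True
                align.append((i, j))
                break
    m = len(align)
    if m == 0:
        return 0.0
    p = m / len(hyp_tokens)
    r = m / len(ref_tokens)
    fmean = 10 * p * r / (r + 9 * p)  # recall weighted 9x over precision
    # Chunks: maximal runs of matches contiguous in both sentences.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(align, align[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return fmean * (1 - penalty)
```

Note that even a hypothesis identical to its reference scores slightly below 1.0 here, because the single-chunk case still incurs a small fragmentation penalty.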