ISSN 2063-5346



Dr. K. Gowsic M.E., Ph.D., T. Arundhathi, S. Monesha, N. Karthick, S. Manoj Kumar
doi: 10.48047/ecb/2023.12.12.192


At HumRRO, we have successfully used natural language processing (NLP) to generate test items for a variety of assessment types. Building on this expertise, we have developed an innovative interface for on-demand automated generation of test items using fine-tuned natural language understanding and generation (NLU/NLG) models. The NLU side of NLP allows computers to understand the nuance inherent in human speech, which feeds into NLG models that can write natural-sounding language, in this case varied test items that reflect such nuance. The interface is user-friendly, designed to be usable by item or test developers without prior experience in machine learning or natural language processing. It fine-tunes a natural language model on human-written test items, automatically generates new items from this model, and programmatically evaluates the quality of the generated items.

When developing content for high-stakes, high-volume testing programs, the circumstances are quite different. Developers must routinely amass and maintain a large bank of items to feed multiple forms, each administered for a finite period before being replaced by other forms. In addition, content overlap and redundancy in the item bank become more salient as hundreds, if not thousands, of items must be developed to measure the same set of competencies or knowledge domains. The sheer volume of unique content needed, coupled with typically aggressive development timelines, necessitates a more strategic development process focused on operational efficiency, standardization, and waste reduction.
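The generate-then-evaluate workflow described above can be sketched in miniature. The sketch below is purely illustrative and not the authors' implementation: `generate_items` is a hypothetical stand-in for a fine-tuned NLG model (here it returns canned candidates), and the quality check shown is a simple redundancy filter using string similarity, one plausible way to address the content-overlap problem the abstract raises.

```python
# Hypothetical sketch of an item-generation pipeline: produce candidate
# items, then flag near-duplicates before they enter the item bank.
# NOTE: generate_items is a mock; a real system would query a fine-tuned model.
from difflib import SequenceMatcher


def generate_items(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for a fine-tuned NLG model (returns canned candidates)."""
    return [
        "Which statement best describes photosynthesis?",
        "Which statement best describes photosynthesis in plants?",
        "What is the primary product of cellular respiration?",
        "Name the primary product of cellular respiration.",
    ][:n]


def filter_redundant(items: list[str], threshold: float = 0.85) -> list[str]:
    """Keep an item only if it is not too similar to any already-kept item."""
    kept: list[str] = []
    for item in items:
        if all(
            SequenceMatcher(None, item.lower(), k.lower()).ratio() < threshold
            for k in kept
        ):
            kept.append(item)
    return kept


candidates = generate_items("biology: energy processes")
unique_items = filter_redundant(candidates)
```

In a production setting the similarity check would likely use embedding-based semantic similarity rather than character overlap, but the shape of the pipeline (generate many, filter programmatically, retain the distinct items) is the same.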
