Retrieving text assessment results

Assessment results can be retrieved by calling the /results endpoint. See Waiting for results below for information on polling this endpoint. An alternative approach using webhooks is also available if a client would prefer to be notified when assessment has completed, rather than calling the /results endpoint to find out.

A sample results retrieval call is shown below. See the subsequent sections for details of the request and response formats.

curl -H "Authorization: Token token=YOUR_TOKEN" https://api.englishlanguageitutoring.com/v2.3.0/account/YOUR_ACCOUNT_ID/text/test-id-1/results

# response
# {
#   "type": "results_not_ready",
#   "code": 200
# }
#
# or
#
# {
#  "type": "success",
#  "code": 200,
#  "overall_score": 2.64069,
#  "sentence_scores": [
#    [0, 44, -1],
#    [46, 61, -0.457825],
#    [62, 75, -1]
#  ],
#  "suspect_tokens": [],
#  "textual_errors": [
#    [15, 20, "famous", "S"],
#    [62, 64, "I'm", "MP"],
#    [69, 74, "sure", "S"]
#  ]
# }
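
The same request can be built with Python's standard library. This is only an illustrative sketch: the account ID, text ID, and token below are the same placeholders used in the curl example, and the commented-out line shows where the actual HTTP call would go.

```python
# Sketch: constructing the results request with Python's urllib.
import urllib.request

API_ROOT = "https://api.englishlanguageitutoring.com/v2.3.0"

def build_results_request(account_id, text_id, token):
    """Return a urllib Request for GET /account/ACCOUNT_ID/text/ID/results."""
    url = f"{API_ROOT}/account/{account_id}/text/{text_id}/results"
    return urllib.request.Request(
        url, headers={"Authorization": f"Token token={token}"}
    )

req = build_results_request("YOUR_ACCOUNT_ID", "test-id-1", "YOUR_TOKEN")
print(req.full_url)
# To actually issue the call:
# with urllib.request.urlopen(req) as resp:
#     body = resp.read()
```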

Request

GET /VERSION/account/ACCOUNT_ID/text/ID/results

Parameters:

Parameter Required? Description
version Yes The desired API version.
account_id Yes Your API account ID.
id Yes The unique ID specified in the original submission using the PUT /account/1234/text/abc123 API call (abc123 in this example).

Response

Results successfully retrieved

HTTP status code: 200

Example response body JSON:

{
 "type": "success", "code": 200, "overall_score": 7.3,
 "score_dimensions": {"prompt_relevance": 3.0},
 "sentence_scores": [[0, 5, -0.23], [6, 42, 0.56]],
 "suspect_tokens": [[0, 5], [40, 42]],
 "textual_errors": [[0, 5, "Greetings", "S"], [32, 35, "the", "MD+"]],
 "text_stats": {"r1": 0.333333, "r2": 0.103448, "r3": 0.0, "lcs": 7.0, "feature_count": 344.0, "word_count": 36.0}
}

Some of the attributes can be absent or empty, depending on the assessment pipeline in use. This is indicated in the attribute's description in the following table.

Attribute name Format Description
type String Always "success" for this response.
code Integer Always 200 for this response.
overall_score Floating-point number The overall score for the piece of text. The range varies depending on the scoring model being used, for example, the default CEFR-based scale is 0.0 to 13.0; the IELTS scale is 0.0 to 9.0. See Scoring Scales for further details.
score_dimensions JSON object This attribute may not be present, depending on the assessment pipeline being used. If present, the only possible attribute is currently prompt_relevance (a number between 0.0 and 5.0 indicating how well the answer text relates to the question text, where 0.0 is the lowest relevance and 5.0 is the highest).
sentence_scores Array A score for each sentence within the piece of text. The array may be empty. If not empty, it contains a further array for each sentence in which the 3 elements are: the integer index of the sentence start, the integer index of the sentence end, and a floating-point score between -1.0 and 1.0.
suspect_tokens Array Tokens (generally words) which have been identified as possibly incorrect/sub-optimal but for which the system has no suggested correction. The array may be empty. If not empty, it contains an array for each suspect token in which the 2 elements are: the integer index of the start of the token and the integer index of the end of the token.
textual_errors Array Errors identified within the piece of text for which the system can suggest a correction. The array may be empty. If not empty, it contains an array for each error in which the 4 elements are: the integer index of the start of the error, the integer index of the end of the error, the suggested correction and the error code. Refer to the appendix for a list of error codes.
text_stats JSON object This attribute may not be present, depending on the assessment pipeline being used. If present, please note that each of the attributes within may or may not be present. The attributes are:
r1 (floating-point number): The word overlap between the question and answer text, as a proportion of the answer text
r2 (floating-point number): The bigram overlap between the question and answer text, as a proportion of the answer text
r3 (floating-point number): The trigram overlap between the question and answer text, as a proportion of the answer text
lcs (integer): The length of the longest common subsequence shared by the question and answer text
feature_count (integer): A count of the features found in the answer
word_count (integer): A count of the words found in the answer
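
The attributes above can be unpacked as plain JSON. The sketch below parses the example response body from this section; note that the precise semantics of the character offsets (for instance, whether the end index is inclusive or exclusive) are not specified here, so the code only reports the raw indices.

```python
# Sketch: unpacking a successful /results response body.
import json

body = json.loads("""
{
 "type": "success", "code": 200, "overall_score": 7.3,
 "score_dimensions": {"prompt_relevance": 3.0},
 "sentence_scores": [[0, 5, -0.23], [6, 42, 0.56]],
 "suspect_tokens": [[0, 5], [40, 42]],
 "textual_errors": [[0, 5, "Greetings", "S"], [32, 35, "the", "MD+"]]
}
""")

if body["type"] == "success":
    print("overall score:", body["overall_score"])
    for start, end, score in body["sentence_scores"]:
        print(f"sentence at [{start}, {end}] scored {score}")
    for start, end, suggestion, code in body["textual_errors"]:
        print(f"error at [{start}, {end}]: suggest {suggestion!r} (code {code})")
```

Remember that score_dimensions and text_stats may be absent entirely, so a robust client should use body.get(...) rather than direct indexing for those attributes.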

Results not retrieved

In addition to the general possible responses outlined earlier in this document, there are a few specific reasons why results may not be retrieved.

Reason HTTP status code JSON response
Results are not yet ready. Wait at least 1 second and try again. See also Waiting for results below. 200 {"type": "results_not_ready", "estimated_seconds_to_completion": 5.7, "code": 200}
There was insufficient English text in the answer to assign a score 200 {"type": "failure", "message": "insufficient_english_text", "code": 200}
A sentence in the answer was so long that assessment was unable to be completed 200 {"type": "failure", "message": "sentence_too_long", "code": 200}
A token (word) in the answer was so long that assessment was unable to be completed 200 {"type": "failure", "message": "token_too_long", "code": 200}
An unspecified error meant assessment of the answer was unable to be completed 200 {"type": "failure", "message": "unspecified_error", "code": 200}
No submission found with the specified id 404 {"type": "error", "code": 404, "message":"id not found"}
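
A minimal sketch of dispatching on the response type attribute follows. The outcome labels ("ready", "pending", "failed") are this example's own, not part of the API.

```python
# Sketch: classifying a /results response body into a polling outcome.
def classify_results(body):
    """Map a /results response body to ready / pending / failed."""
    if body["type"] == "success":
        return "ready"
    if body["type"] == "results_not_ready":
        return "pending"
    # "failure" (with a message attribute) or "error" (e.g. 404 id not found)
    return "failed"

for body in (
    {"type": "results_not_ready", "estimated_seconds_to_completion": 5.7, "code": 200},
    {"type": "failure", "message": "sentence_too_long", "code": 200},
):
    print(body["type"], "->", classify_results(body))
```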

Waiting for results

The system generally takes a few seconds to assess a piece of text. If results for a piece of text are not available when this API endpoint is called, the anticipated time remaining until the results will be available is returned in the estimated_seconds_to_completion response attribute.

A client which wants to receive results as soon as possible (for example, because it needs to return results to its users as quickly as possible) should not poll in a tight loop, but must wait at least 1 second before requesting results again. A client which does not need results as quickly as possible can of course choose to wait an arbitrary amount of time before requesting results again. In either case, a more sophisticated approach might take into account the estimated seconds to completion, instead of polling at a fixed time interval.

However, the estimated seconds to completion is only a guide. Assessment of a particular piece of text may be faster or slower than expected, depending on its characteristics and whether there are already other texts awaiting assessment. If it is slower than expected, the estimated seconds to completion could reach 0 and remain there until the text has completed assessment.

An alternative approach using webhooks is also available if a client would prefer to be notified when assessment has completed, rather than calling the /results endpoint.
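
A polling loop along these lines can be sketched as follows. This is an illustration only: fetch() stands in for the real HTTP call (issuing the GET shown earlier and decoding the JSON body) and is injected as a parameter so the loop can be exercised without a live API; the max_attempts cap is this example's own safeguard.

```python
# Sketch: polling for results, honouring estimated_seconds_to_completion
# as a guide while never waiting less than 1 second between requests.
import time

def poll_results(fetch, max_attempts=30, sleep=time.sleep):
    """Call fetch() until the response is no longer results_not_ready."""
    for _ in range(max_attempts):
        body = fetch()
        if body["type"] != "results_not_ready":
            return body  # success, failure, or error: stop polling
        # Use the server's estimate, with a 1-second floor between polls.
        estimate = body.get("estimated_seconds_to_completion", 1.0)
        sleep(max(1.0, estimate))
    raise TimeoutError("results not ready after max_attempts polls")
```

Because the estimate is only a guide, a production client might also add a small margin on top of it, or fall back to a fixed interval once the estimate reaches 0.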
