Get Report Data by Question

GET /report/:report_id/model/:question_id



Retrieves the report data for a specific question in a report (each report is associated with a single survey).

Resource URL

https://api.ideasystem.org/v1/report/:report_id/model/:question_id

Parameters

report_id
required
Example: /v1/report/1234/model/54321
A unique report identifier in the IDEA system, used to select which report data to retrieve.
question_id
required
Example: /v1/report/1234/model/54321
A unique question identifier in the IDEA system, used to select which question data to retrieve.
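
The following is a minimal sketch of calling this resource with the two path parameters, using Python and the requests library. The base URL is taken from the Resource URL above; the Authorization header is an assumption for illustration, since authentication is not described in this section.

import requests

BASE_URL = "https://api.ideasystem.org/v1"  # from the Resource URL above

def get_report_data(report_id, question_id, token):
    # Builds /report/:report_id/model/:question_id, e.g. /report/1234/model/54321
    url = f"{BASE_URL}/report/{report_id}/model/{question_id}"
    # The bearer token header is a placeholder, not part of this specification
    return requests.get(url, headers={"Authorization": f"Bearer {token}"})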

Query Parameters

demographic_group_id
optional
Example: /v1/report/1234/model/54321?demographic_group_id=9
An optional filter used to narrow results to responses from a single demographic group. It is only applicable to surveys using the Feedback System for Administrators, and only valid when demographic sub-groups were selected.

Response

The response will be an HTTP 200 along with a JSON body that contains the report data for the given question in the given report. If the report (report_id) or question (question_id) cannot be found, an HTTP 404 (Not Found) will be returned along with an error message in a JSON body. All other errors will return an HTTP 500 (Internal Server Error).
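
Building on the sketch above, this shows how a client might pass the optional demographic_group_id query parameter and branch on the documented status codes. The shape of the error body is only described as "an error message in a JSON body", so the handling here is intentionally generic.

def get_question_data(report_id, question_id, token, demographic_group_id=None):
    url = f"{BASE_URL}/report/{report_id}/model/{question_id}"
    params = {}
    if demographic_group_id is not None:
        # Only applicable to Feedback System for Administrators surveys
        # when demographic sub-groups were selected
        params["demographic_group_id"] = demographic_group_id
    resp = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {token}"})
    if resp.status_code == 200:
        return resp.json()              # report data for the question
    if resp.status_code == 404:
        raise LookupError(resp.json())  # report or question not found
    resp.raise_for_status()             # all other errors return HTTP 500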

Response Parameters (JSON Body)

answers
Example: "answers": [...]
An array of String values that are answers to the question. This is only provided when the data is associated with an open-ended question.
self_rating
Example: "self_rating": 1
If this survey uses gap analysis, this represents the rating that the survey subject (information form respondent) gave to themselves. If gap analysis was not used, this value will not be included.

tally
Example: "tally": { ... }
A collection of counted data.
tally.omit
Example: "omit": 1
The number of respondents who omitted a response to the question.
tally.cannot_judge
Example: "cannot_judge": 1
The number of respondents that chose 'Cannot Judge' as their answer to the question.
tally.response
Example: "response": 34
The number of respondents that answered this question; this will not include those that chose 'Cannot Judge'.

results
Example: "results": { ... }
A collection of aggregated, calculated, and comparison data, including:
  • results for this survey (result)
  • comparative results for discipline (discipline_result)
  • comparative results for institution (institution_result)
  • comparative results for the IDEA database (idea_result)

results.result
Example: "result": { ... }
Calculated values for all responses to the associated survey question.
results.result.raw
Example: "raw": { ... }
Raw calculated data.
results.result.raw.mean
Example: "mean": 3.5
Raw mean (average selected value).
results.result.raw.tscore
Example: "tscore": 78.0
Raw mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.raw.mean, idea_result.raw.mean, and the IDEA standard deviation (an illustrative sketch follows this group of fields).
results.result.raw.standard_deviation
Example: "standard_deviation": 0.7
Raw standard deviation. In some cases this will not be provided (it will be null or absent); in particular, when there is only one respondent or only one respondent answered the question associated with this response data point.
results.result.raw.method_comp
Example: "method_comp": -0.12345678
This value is calculated for teaching method questions on surveys using the diagnostic rater form by subtracting a comparison group average from the raw mean value. The comparison group average is calculated using classes of similar size and student motivation.
results.result.raw.percent_positive
Example: "percent_positive": 78.12
Percent of respondents selecting a positive response.
results.result.raw.percent_negative
Example: "percent_negative": 5.07
Percent of respondents selecting a negative response.
results.result.adjusted
Example: "adjusted": { ... }
Adjusted calculated data; this attempts to separate the teacher's contribution to student learning from the contribution of extraneous factors. For more information about adjusted scores, see IDEAedu.org/AdjustedScores.
results.result.adjusted.mean
Example: "mean": 3.5
Adjusted mean.
results.result.adjusted.tscore
Example: "tscore": 78.0
Adjusted mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.adjusted.mean, idea_result.raw.mean, and IDEA standard deviation.
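
To make the tscore fields above concrete, here is one plausible form of the transformation they describe: a standard T-score with a mean of 50 and standard deviation of 10, computed from the survey mean, the comparison group mean, and the comparison group's standard deviation. The exact formula IDEA uses is not given in this document, so treat this as an illustration rather than the official calculation; the same caveat applies to the method_comp line. The same pattern applies to the discipline, institution, and IDEA t-scores described below.

def t_score(survey_mean, comparison_mean, comparison_sd):
    # Standard T-score: rescale to mean 50, standard deviation 10 (illustrative only)
    return 50 + 10 * (survey_mean - comparison_mean) / comparison_sd

def method_comp(raw_mean, comparison_group_mean):
    # Per the method_comp description: raw mean minus the comparison group average
    return raw_mean - comparison_group_mean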

results.discipline_result
Example: "discipline_result": { ... }
Calculated values to compare this survey to other surveys in the same discipline.
results.discipline_result.raw
Example: "raw": { ... }
Raw discipline data.
results.discipline_result.raw.mean
Example: "mean": 3.5
Raw discipline group average.
results.discipline_result.raw.tscore
Example: "tscore": 78.0
Raw mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.raw.mean, discipline_result.raw.mean, and Discipline standard deviation.
results.discipline_result.adjusted
Example: "adjusted": { ... }
Adjusted calculated data; this attempts to separate the teacher's contribution to student learning from the contribution of extraneous factors. For more information about adjusted scores, see IDEAedu.org/AdjustedScores.
results.discipline_result.adjusted.mean
Example: "mean": 3.5
Adjusted discipline group average.
results.discipline_result.adjusted.tscore
Example: "tscore": 78.0
Adjusted mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.adjusted.mean, discipline_result.adjusted.mean, and Discipline adjusted standard deviation.

results.institution_result
Example: "institution_result": { ... }
Calculated values to compare this survey to other surveys in the same institution.
results.institution_result.raw
Example: "raw": { ... }
Raw institution calculated data.
results.institution_result.raw.mean
Example: "mean": 3.5
Raw institution group average.
results.institution_result.raw.tscore
Example: "tscore": 78.0
Raw mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.raw.mean, institution_result.raw.mean, and Institution standard deviation.
results.institution_result.adjusted
Example: "adjusted": { ... }
Adjusted calculated data; this attempts to separate the teacher's contribution to student learning from the contribution of extraneous factors. For more information about adjusted scores, see IDEAedu.org/AdjustedScores.
results.institution_result.adjusted.mean
Example: "mean": 3.5
Adjusted institution group average.
results.institution_result.adjusted.tscore
Example: "tscore": 78.0
Adjusted mean score statistically transformed from the 5-point scale to a standard scale with a mean of 50 and standard deviation of 10. Calculated using result.adjusted.mean, institution_result.adjusted.mean, and Institution adjusted standard deviation.

results.idea_result
Example: "idea_result": { ... }
Calculated values to compare this survey to all other IDEA surveys.
results.idea_result.raw
Example: "raw": { ... }
Raw IDEA calculated data.
results.idea_result.raw.mean
Example: "mean": 3.5
Raw IDEA group average (grand mean).

formative
Example: "formative": { ... }
Formative data for a teaching method. This object will only appear for teaching method questions.
formative.suggested_action
Example: "suggested_action": "Strength to retain"
Suggested action based on comparisons with ratings for classes of similar size and level of student motivation.
  • "Consider increasing use" means you employed the method less frequently than those teaching similar classes.
  • "Retain current use or consider increasing" means you employed the method with typical frequency.
  • "Strength to retain" means you employed the method more frequently than those teaching similar classes.
More detailed suggestions are in the Interpretive Guide, POD-IDEA Notes, and POD-IDEA Learning Notes.
formative.related_objectives
Example: "related_objectives": [4563, 6894, 535]
A list of unique IDEA question identifiers for the objectives (Important or Essential) most related to this teaching method.

response_option_data_map
Example: "response_option_data_map": [...]
A map of response options to response option values. This is only included for scaled questions (not for open-ended questions).
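
The sketch below shows one way a client might walk this map, assuming the shape used in the examples that follow: each key is a response option and each value holds a count plus, for scaled questions, calculated values such as rate. No field names beyond count and rate are assumed.

def summarize_options(report_data):
    # report_data is the parsed JSON body returned for a scaled or MCMA question
    option_map = report_data.get("response_option_data_map", {})
    for option, values in option_map.items():
        count = values.get("count", 0)
        rate = values.get("rate")  # present for scaled questions in the examples below
        if rate is not None:
            print(f"{option}: {count} responses ({rate}%)")
        else:
            print(f"{option}: {count} responses")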


Example - Get Response Data for an Open Question

This request will retrieve the answers to an open question that has the question_id of 54321 in a report that has a report_id of 1234. The response will contain an array of answers that are String values.

Request

GET /v1/report/1234/model/54321

Response (as JSON)

HTTP 200
{
  "answers": [
    "This is the answer to an open question.",
    "This is a longer answer to an open question but it isn't too long.",
    "There are times that this will likely get very, very, very long-winded so we need to be able to handle large String values."
  ]
}


Example - Get Response Data for a Scaled Question

This request will retrieve the response data for a scaled question that has the question_id of 54321 in a report that has a report_id of 1234. The response will contain a collection of aggregated and calculated data.

Request

GET /v1/report/1234/model/54321

Response (as JSON)

HTTP 200
{
    "self_rating": 1, //Optional self-rating value; Only used when gap analysis has been selected.
    "tally": {
        "omit" : 12,
        "cannot_judge": 2,
        "response": 34
    },
    "results": {
        "result": {
            "raw": {
                "mean": 4.2,
                "tscore": 78.0,
                "standard_deviation": 0.7,
                "method_comp": -0.23456,
                "percent_positive": 69,
                "percent_negative": 12
            },
            "adjusted": {
                "mean": 4.4,
                "tscore": 79.0
            }
        },
        "discipline_result": {
            "raw": {
                "mean": 4.2,
                "tscore": 78.0
            },
            "adjusted": {
                "mean": 4.4,
                "tscore": 79.0
            }
        },
        "institution_result": {
            "raw": {
                "mean": 4.2,
                "tscore": 78.0
            },
            "adjusted": {
                "mean": 4.4,
                "tscore": 79.0
            }
        },
        "idea_result": {
            "raw": {
                "mean": 3.8,
            }
        }
    },
    "formative": { //Only appears when this item/question is a teaching method.
        "suggested_action": "Strength to retain",
        "related_objectives": [4563, 6894, 535]
    },
    "response_option_data_map": [
    //Contains scaled response options.
    //Each contains a count and calculated values (response rate, frequency, ...)
		"option1": {
			"count": 32,
			"rate": 98.7
		},
		"option2": {
			"count": 1,
			"rate": 0.1
		},
		"option3": {
			"count": 2,
			"rate": 0.2
		}, ...
	]
}


Example - Get Response Data for a Multiple Choice Multiple Answer Question

This request will retrieve the answers to a Multiple Choice Multiple Answer (MCMA) question that has the question_id of 54321 in a report that has a report_id of 1234. The response will contain a collection of response counts. This result differs from a scaled question in that it lacks aggregate data and response rates, as these values are not meaningful for MCMA questions. Also note: the tally numbers will not necessarily correspond to the response counts because each respondent can provide multiple responses for questions of this type (in the example below, the option counts sum to 28 while only 7 respondents answered).

Request

GET /v1/report/1234/model/54321

Response (as JSON)

HTTP 200
{
    "tally": {
        "response": 7,
        "omit": 3,
        "cannot_judge": 0
    },
    "response_option_data_map": {
        "0": {
            "count": 2
        },
        "1": {
            "count": 1
        },
        "2": {
            "count": 2
        },
        "3": {
            "count": 2
        },
        "4": {
            "count": 3
        },
        "5": {
            "count": 3
        },
        "6": {
            "count": 3
        },
        "7": {
            "count": 1
        },
        "8": {
            "count": 3
        },
        "9": {
            "count": 8
        }
    }
}
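

To illustrate the note above that the tally numbers need not match the option counts, this small sketch sums the counts from the MCMA example and compares the total to tally.response (28 selections made by only 7 respondents).

mcma = {
    "tally": {"response": 7, "omit": 3, "cannot_judge": 0},
    "response_option_data_map": {
        str(i): {"count": c}
        for i, c in enumerate([2, 1, 2, 2, 3, 3, 3, 1, 3, 8])
    },
}

total_selections = sum(v["count"] for v in mcma["response_option_data_map"].values())
print(total_selections)           # 28 option selections...
print(mcma["tally"]["response"])  # ...made by only 7 respondents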