How do you know if you are succeeding without measuring the right outcomes?

You don’t.

That’s why we are always refining the way we collect data to be as accurate as possible. For years, we’ve evaluated which parameters provide the most meaningful data, and which metrics only serve to paint an incomplete picture.

The short answer

Collecting high-quality educational data is difficult. We follow a series of guidelines within our course and our data collection to gather numerous points of information across a wide range of students, at different time intervals and at different times of the year. We combine this with qualitative data collected through feedback, testimonials and informal discussion to draw conclusions about specific key outcomes.

While most companies select data to support the best marketing message, our data is cleaned according to the general standards of statistical analysis used in scientific research. There is no need for us to “play” with data so strategically, since our results speak for themselves. The data we collect is geared towards eventual academic publication.

The long answer

Challenges with educational data

The challenge of collecting high-quality data is an obstacle faced by every organisation working in the educational space.

Even in established fields such as secondary school education, many of the outcome measurements are heavily confounded and of questionable meaning.

For example, when reviewing the educational literature, the majority of secondary school research uses student achievement against the curriculum as a standard outcome. Interventions or practices that produce higher pass or excellence rates are regarded as more effective. However, if we compare achievement within the secondary school curriculum with achievement in tertiary education or later in life, the relationship becomes much less clear.

While a strong trend exists for basic outcomes such as attendance, pass/fail marks and basic numeracy and literacy, it is harder to find statistically significant trends within the higher achievement brackets. A student who achieves in the top 20% may go on to achieve in the top 10% in tertiary education, while someone in the top 10% may just as easily fall to the top 20%.

Furthermore, funding for research that differentiates between high levels of achievement is lower, as high achievers present less of a societal problem. As a result, such research is much sparser, and many of our current assumptions about the link between secondary school achievement and later-life success are extrapolations.

Unsurprisingly, research over the last couple of decades shows that higher achievement in earlier education can precede lower outcomes later in life. This “early-peaking” phenomenon is called the fade-out effect (Abenavoli, 2019; Bai et al., 2020). It is seen most prominently in early childhood education, but also in secondary schools where students achieve high grades through dependency on tutoring. When that high level of support is no longer available, tutoring-dependent students struggle disproportionately because they lack self-sustainable learning skills.

This problem is even greater in fields like academic skills development, where there are no established norms or best-practice guidelines.

Our conclusions

We have identified, through both experience and review of the literature, that the following guidelines must be followed for an educational program to create long-term student success (in secondary school, tertiary education and beyond):

  • External dependency, such as subject tutoring, should be avoided where access to educational resources is adequate
  • Metacognition should be facilitated, especially with regard to self-awareness and meta-learning
  • Approaches and skills should create consistent results at different levels of education and baseline academic achievement (i.e. the skills are adaptive, foundational and transferable)
  • Student intrinsic motivation should be strongly considered and facilitated sustainably where possible
  • Expert “mimicry” (i.e. the superficial appearance of knowledge mastery through memorisation) should be avoided wherever possible
  • Fixed mindsets, especially around learning, should be actively discouraged while growth mindsets should be encouraged

Due to the lack of current best practice and the numerous flaws in educational data collection, metrics should also be (see the illustrative sketch after this list):

  • Numerous (to reduce random and non-random error)
  • Longitudinally measured (to reduce sampling error and the confounding inherent in single point-in-time measurements, and to improve temporality in causal analysis)
  • Deliberately redundant (to increase the accuracy of overlapping domains of measurement and partially single-blind the responding student through obfuscation)
  • Compulsory for students (to reduce volunteer bias)
  • Both process-based and outcome-based (to allow more accurate extrapolation of the specificity of association between process and effect)
  • Qualitative as well as quantitative or semi-qualitative (to allow free expression where objective metrics would fall short)
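
As an illustration only, a single record satisfying several of these properties might look like the sketch below. This is not our actual schema; every field name here is hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProgressCheckpoint:
    """One longitudinal measurement point for one student.

    Some fields deliberately overlap (e.g. efficiency vs. speed) so that
    inconsistent answers can be cross-checked during analysis.
    """
    student_id: str                           # anonymised identifier
    measured_on: date                         # enables longitudinal analysis
    # Outcome-based metric
    recent_assessment_score: Optional[float]  # may be absent outside exam season
    # Process-based, semi-subjective ratings (e.g. 1-5)
    efficiency_change: int
    speed_change: int                         # deliberately redundant with efficiency
    confidence_change: int
    section_helpfulness: int
    # Qualitative metric, analysed separately from the numeric fields
    free_text_feedback: str = ""
```

Storing each checkpoint as its own dated record, rather than a single summary score per student, is what makes the longitudinal and process-vs-outcome comparisons above possible.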

Finally, most companies use data strategically as a marketing tool. Especially in the educational space, where most data is not peer-reviewed or even verifiable, many companies will illegally falsify data to claim impressive achievement statistics. Indeed, the list of popular educational companies that have been sued for this reason is disturbingly long.

Therefore, we endeavour, where possible, to collect, process and analyse our data in line with the standards of statistical analysis accepted by the global scientific community.
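
As a hedged sketch of what such standard cleaning can look like in practice (this is not our exact pipeline, and the column names are hypothetical), one common convention is to exclude duplicates and missing responses, and to flag statistical outliers rather than silently deleting them:

```python
import pandas as pd

def clean_ratings(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Conservative cleaning of one numeric rating column."""
    out = df.drop_duplicates(subset=["student_id", "measured_on"]).copy()
    out = out.dropna(subset=[col])                 # exclude, rather than impute, missing ratings
    q1, q3 = out[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr        # Tukey's 1.5 x IQR fences
    out["is_outlier"] = ~out[col].between(lo, hi)  # flag outliers; keep them auditable
    return out
```

Keeping flagged outliers in the dataset, with the exclusion rule documented, is what distinguishes transparent cleaning from the strategic “playing” with data described above.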

Our metrics

We collect metrics on students in several ways:

  • Surveys and questionnaires during free events for non-members
    • Hours spent studying per week
    • Hours of sleep per night
    • Hours of time spent on non-academic activities per week
    • All of the above for exam vs non-exam season
  • Surveys and questionnaires before membership commencement
  • Compulsory screening survey upon membership commencement, involving an examination of:
    • Deep processing ability
    • Metacognition with regard to meta-learning
    • Active learning skills
    • Growth mindset
  • Compulsory screening quiz in the first stage of course progress, involving an examination of:
    • Fixed mindset with regard to meta-learning
    • Active learning skills that facilitate deep processing
    • Note-taking with regard to optimisation of intrinsic cognitive load and reduction of extraneous cognitive load
    • Independent learning skills and adaptive resource usage
  • Progress quizzes at four separate time points throughout the course, each approximately two to four weeks apart (see the analysis sketch after this list), involving an examination of:
    • Results on any recent assessments
    • Semi-subjective rating of the helpfulness of the most recent course section
    • Duration of work through the most recent course section
    • Semi-subjective rating of change in overall studying efficiency
    • Semi-subjective rating of change in overall academic confidence
    • Semi-subjective rating of change in overall studying speed
    • Willingness to recommend the course to others
    • Subjective free-text entry on any other feedback, comments or testimonial
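
To illustrate why the repeated measurement matters analytically, the sketch below pairs each student’s first and last checkpoint rating and applies a standard non-parametric paired test. The column names are assumptions for illustration, not our production code.

```python
import pandas as pd
from scipy.stats import wilcoxon

def first_vs_last(df: pd.DataFrame, metric: str):
    """Paired comparison of checkpoint 1 vs. checkpoint 4 for one metric.

    Expects one row per student per checkpoint, with columns
    'student_id', 'checkpoint' (1-4) and the metric itself.
    """
    wide = df.pivot(index="student_id", columns="checkpoint", values=metric)
    paired = wide[[1, 4]].dropna()            # keep students measured at both endpoints
    stat, p = wilcoxon(paired[1], paired[4])  # Wilcoxon signed-rank test for paired ratings
    mean_change = (paired[4] - paired[1]).mean()
    return mean_change, p
```

Because each student serves as their own baseline, this kind of paired design is far less vulnerable to the point-measurement confounding discussed earlier than a single end-of-course survey would be.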

We also perform in-depth analysis of informal conversations, noting that students’ language and terminology usage can serve as a proxy measure of knowledge mastery, as per the field of semiotics. This data is collected from:

  • Phone discussions with members and non-members during consultation
  • Messages on public channels and private messages in the iCanStudy community’s private Discord server
  • Email interactions between students and iCanStudy staff

Though it is difficult to quantify, a significant portion of our data is extracted from the thousands of informal interactions we have with our students through the above modes.
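
As a toy illustration of the semiotics idea (the glossary below is hypothetical, and in practice this judgement is made by a human reviewer rather than a script), counting how often a student accurately deploys course terminology is one crude proxy:

```python
import re
from collections import Counter

# Hypothetical glossary of course terminology; accurate, unprompted use of
# such terms is weak evidence of knowledge mastery.
COURSE_TERMS = {"encoding", "retrieval", "interleaving", "chunking", "metacognition"}

def term_usage(message: str) -> Counter:
    """Count occurrences of known course terms in one informal message."""
    words = re.findall(r"[a-z]+", message.lower())
    return Counter(word for word in words if word in COURSE_TERMS)

print(term_usage("I tried interleaving my retrieval practice this week"))
# Counter({'interleaving': 1, 'retrieval': 1})
```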

References

Abenavoli, R. M. (2019). The Mechanisms and Moderators of “Fade-Out”: Towards Understanding Why the Skills of Early Childhood Program Participants Converge Over Time With the Skills of Other Children. Psychological Bulletin, 145(12), 1103–1127. doi: 10.1037/bul0000212

Bai, Y., Ladd, H. F., Muschkin, C. G., & Dodge, K. A. (2020). Long-term effects of early childhood programs through eighth grade: Do the effects fade out or grow? Children and Youth Services Review, 112, 104890. doi: 10.1016/j.childyouth.2020.104890

Cooper, E. (2010). Tutoring center effectiveness: The effect of drop-in tutoring. Journal of College Reading and Learning, 40(2), 21–34. doi: 10.1080/10790195.2010.10850328

Carrell, S. E., & West, J. E. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3), 409–432. doi: 10.1086/653808

Hendriksen, S. I., Yang, L., Love, B., & Hall, M. C. (2005). Assessing academic support: The effects of tutoring on student learning outcomes. Journal of College Reading and Learning, 35(2), 56–65. doi: 10.1080/10790195.2005.10850173
