After a much-needed and long-planned break for a sightseeing weekend in Hamburg, it’s back to DALMOOC and a look back at week 6. Thanks to Ryan’s comments/solution for the problematic “logistic regression” question, I went on with the math tutor assignment of week 5 and was able to finish it before going to Hamburg. Week 6 has another one of those external-resource math tutor things… and I haven’t completed it yet: Questions 1 to 4 were doable with Excel, but for Question 5 I have no idea how to start (maybe some time later).
Week 6 seemed a little easier to understand than week 5, partly because its videos explained some of week 5’s topics. Still, I can’t describe the topics of week 6 in detail, although I spent many hours with the 8 videos. So for now, it’s just an overview of the main aspects.
For me, MOOCs that go on for longer than 4-6 weeks are hard to manage, because I would like to have some leisure time again… In my opinion, you have to be very motivated to stay on a MOOC with semester-like duration – either you really need the content / certificate, or you do it because you like it very much. One of my main MOOC incentives is lecturers / facilitators who really care about what they are doing and really have something to say – like in this MOOC.
“There’s no perfect way to get indicators of student behavior that you can completely trust. It’s not truth, it’s ground truth” – that sums it up pretty well. Sources of ground truth are self-report (although not common for labeling behavior), field observations, text replays and video coding.
In week 6 we heard about feature engineering, which “is the art of creating predictor variables” and the “least well-studied but most important part” of developing prediction models – without it they won’t be any good. You can draw on papers by other researchers (there is a lot of literature about features that worked or didn’t work) and take a set of pre-existing variables (this is faster), but thinking about your own variables is likely to lead to better models. The steps are: 1) brainstorming features, 2) deciding what features to create, 3) creating them, 4) studying their impact on model goodness, 5) iterating on features if useful, and 6) going back to step 3 or 1. I’ve sketched what step 3 might look like below.
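Just to make step 3 (“creating them”) concrete for myself, here is a minimal sketch in Python/pandas of turning raw tutor-log rows into per-student predictor variables. The column names and the toy data are my own assumptions, not from the course materials:

```python
# A minimal sketch (not from the course) of "creating features":
# aggregating hypothetical tutor-log columns into per-student predictors.
import pandas as pd

# Assumed log format: one row per transaction with these hypothetical columns.
log = pd.DataFrame({
    "student_id":   ["s1", "s1", "s1", "s2", "s2"],
    "correct":      [1, 0, 1, 0, 0],            # first-attempt correctness
    "hints_used":   [0, 2, 0, 1, 3],            # hints requested on the step
    "duration_sec": [12.0, 55.0, 8.0, 70.0, 95.0],
})

# Brainstormed features: overall correctness, hint reliance, and pacing.
features = log.groupby("student_id").agg(
    pct_correct=("correct", "mean"),
    hints_per_step=("hints_used", "mean"),
    median_step_time=("duration_sec", "median"),
)
print(features)
```

Whether these particular features help is exactly what steps 4-6 (studying impact and iterating) are for.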
A big part of week 6 was about metrics for classifiers: Accuracy, Kappa (I’ll keep in mind that for data-mined models, a Kappa of 0.3-0.5 is typically considered “good enough to call the model better than chance and publishable”), ROC (= Receiver Operating Characteristic curve), A′, Precision and Recall. For each of these we got additional information, formulas, examples and details which might come in handy in the future – I’ve put a small worked example below.
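As a note to my future self, these classifier metrics are all available in scikit-learn. The labels and probabilities here are made-up toy numbers, and I’m using ROC AUC as a stand-in for A′ (they coincide for the two-class case, as far as I understand):

```python
# Hedged illustration: computing the week-6 classifier metrics on toy data.
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             roc_auc_score, precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]                   # hard predictions
y_prob = [0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1]   # predicted probabilities

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Kappa    :", cohen_kappa_score(y_true, y_pred))  # 0.3-0.5 often "good enough"
print("ROC AUC  :", roc_auc_score(y_true, y_prob))      # equivalent to A' for 2 classes
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
```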
The metrics for regressors include linear correlation (= Pearson’s correlation), MAD/MAE (= Mean Absolute Deviation/Error), RMSE (= Root Mean Squared Error) and information criteria like BIC (Bayesian Information Criterion) and AIC (Akaike Information Criterion).
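The same quick-reference idea for the regressor metrics, again with toy numbers I made up; the BIC line uses one common formula for Gaussian linear models (n·ln(RSS/n) + k·ln(n)), which is my assumption rather than something stated in the videos:

```python
# Small sketch of the regressor metrics on toy data (not course data).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([3.0, 1.5, 4.0, 2.0, 5.0])
y_pred = np.array([2.5, 1.8, 3.6, 2.4, 4.4])

r    = np.corrcoef(y_true, y_pred)[0, 1]               # Pearson's correlation
mae  = mean_absolute_error(y_true, y_pred)             # MAD / MAE
rmse = np.sqrt(mean_squared_error(y_true, y_pred))     # RMSE

n, k = len(y_true), 2                                  # observations, parameters (assumed)
rss  = np.sum((y_true - y_pred) ** 2)
bic  = n * np.log(rss / n) + k * np.log(n)             # one common BIC form

print("r =", r, "MAE =", mae, "RMSE =", rmse, "BIC =", bic)
```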
Which metrics to use?
There is a saying that looking for a single best measure to choose between classifiers is wrong-headed, and you could say the same for regressors. The advice: try to understand your model across multiple dimensions, which means using multiple metrics.
Another aspect of week 6 was Knowledge Engineering (= rational modeling, cognitive modeling). Knowledge Engineering is where your model is created by a smart human being (rather than a computer, as with a data-mined model) who carefully studies the data, becomes deeply familiar with the target construct, and understands the relevant theory and how it applies. Knowledge Engineering can even achieve higher construct validity than data mining. A good example is Aleven’s model of students’ help-seeking. On the other hand, unfortunately, there are cases where people didn’t do it carefully and just wanted a quick result – this has a negative influence on science and maybe even on student outcomes, because of wrong interventions. It is hard to know whether the knowledge engineering was done well, because the work happens in the researcher’s brain and the process is usually invisible; it is easier to tell with data-mined models.
Feature Engineering is very closely related to Knowledge Engineering; it’s not an either-or.
There are many types of validity and it is important to address them all: Generalizability (= Does your model remain predictive when used on a new data set?) / Ecological validity (= Do your findings apply to real-life situations outside of research settings?) / Construct validity (= Does your model actually measure what it was intended to measure?) / Predictive validity (= Does your model predict not just the present but the future as well?) / Substantive validity (= Do your results matter? Are you modelling a construct that matters?) / Content validity (= Does the test cover the full domain it is meant to cover?) / Conclusion validity (= Are your conclusions justified based on the evidence?). Generalizability is the one that is easiest to probe directly with code; I’ve sketched one way below.
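For generalizability, one common check is to cross-validate at the student level, so the model is always tested on students it never saw during training. This is a hedged sketch with random toy data; the classifier choice, the group layout and all numbers are my assumptions, not something prescribed in the course:

```python
# Hedged sketch: probing generalizability via student-level cross-validation,
# so every test fold contains only students unseen during training.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # toy feature matrix
y = rng.integers(0, 2, size=100)           # toy labels
students = np.repeat(np.arange(20), 5)     # 20 students, 5 rows each (assumed layout)

scores = cross_val_score(
    LogisticRegression(), X, y,
    cv=GroupKFold(n_splits=5), groups=students, scoring="roc_auc",
)
print("Per-fold AUC on held-out students:", scores.round(2))
```

If performance drops sharply under this kind of split compared to a plain random split, the model probably doesn’t generalize to new students.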
The videos for week 6 can be found at https://www.youtube.com/user/dalmooc and as the MOOT (Massive Online Open Textbook) “Big Data and Education” on Ryan Baker’s website: http://www.columbia.edu/~rsb2162/bigdataeducation.html
That’s it for week 6, and we are already in week 7 with the topic “Text Mining”. The Google Hangout of week 7 is on Thursday at 2 a.m. my local time – so, again, it will be the archived version for me. Sadly, that’s one disadvantage of an international MOOC; on the other hand, I immensely enjoy the diversity and internationality of the students 🙂