

Publishable places: conferences

Below is a list of some of the more prestigious conferences at which our research may be published.

The list was derived from an Australian collection entitled Computing Research and Education (CORE):

http://portal.core.edu.au/

Accordingly, many of the conferences have a specific focus on computer science education or cognate academic areas, although some are more general. They are ranked by category (A to C) and listed alphabetically within each category, as shown below.

These are well-established conferences, predominantly under the ACM or IEEE banner. Because MOOCs are a relatively new phenomenon, specialist conferences in this area do not have a track record and do not appear in this rating.

Conferences such as EC-TEL and the e-groups conference have established themselves as being important.

In addition, there are various conferences related to the study of higher education, in particular the annual SRHE conference, which are also of value.

Category A

ACM Special Interest Group on Computer Science Education Conference
Annual Conference on Innovation and Technology in Computer Science Education
International Conference on Artificial Intelligence in Education

Category B

Educational Data Mining
Information Systems Education Conference
International Computing Education Research Workshop
International Conference on Computers in Education
International Conference on Informatics Education and Research
World Conference on Educational Multimedia, Hypermedia and Telecommunications
Frontiers in Education
International Conference on Software Engineering: Education and Practice

Category C

ACM Information Technology Education
Asia-Pacific Forum on Engineering and Technology Education
Computing in Education Group of Victoria Conference
Conference on Software Engineering Education and Training
Functional and Declarative Programming in Education
Global Chinese Conference on Computers in Education
Informatics Education Europe
Informing Science and Information Technology Education
International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet
International Conference on Cybercrime Forensics Education and Training
International Conference on Engineering Education
Software Education Conference
Software Engineering Education and Training Conference
Visualization In Science and Education
Workshop on Computer Architecture Education
International Academy for Information Management International Conference on Informatics Education & Research
IASTED International Conference on Computers and Advanced Technology in Education
Conference on Software Engineering Education
International Conference on IT Based Higher Education and Training
IEEE International Conference on Multimedia in Education
Society for Information Technology and Teacher Education Conference
International Conference on Technology Education
World Conference on Computers in Education
Informatics in Education

Australasian

Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education
Australasian Computers in Education Conference
Australasian Computing Education Conference (ACE)

MOOC Viz Interns Week 5 Update

Lubo:

This week I found a new tool for building the dashboard – Shiny.

Shiny is a web framework for R that allows users to easily create good-looking analytics dashboards directly in R, without the need for any HTML or back-end programming. A great benefit of Shiny is that it is built on top of HTML, so if you wish you can add HTML elements directly or apply a CSS theme. Additionally, the framework offers a good range of filtering controls and supports dynamic content. As far as visualisations go, Shiny works with some of the most popular JavaScript charting libraries, including Highcharts, D3, dygraphs and more.

Given all of its features and the fact that it can easily be understood and shared amongst the R community, we have decided to use Shiny for the final dashboard.

Most of my work this week consisted of researching and learning Shiny. Apart from that, I received access to the course survey data and was given the task of producing different visualisations by filtering learners based on their survey responses. To accomplish this, I produced two scripts: one that filters learners based on their survey responses, and another that runs different analyses on the learner ids returned by the first script's methods, as sketched below.
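For illustration, here is a minimal sketch of how such a pair of scripts could fit together using pandas; the file names and column names (learner_id, question, response, author_id, step) are placeholders rather than the real survey and comments schema.

```python
import pandas as pd

def learners_with_response(survey_csv, question, response):
    """Return the ids of learners who gave a particular answer to a survey question."""
    survey = pd.read_csv(survey_csv)
    match = survey[(survey["question"] == question) & (survey["response"] == response)]
    return match["learner_id"].unique()

def comments_per_step(comments_csv, learner_ids):
    """Count comments per course step, restricted to the given learners."""
    comments = pd.read_csv(comments_csv)
    subset = comments[comments["author_id"].isin(learner_ids)]
    return subset.groupby("step").size()

if __name__ == "__main__":
    ids = learners_with_response("survey.csv", "employment_status", "Working full time")
    print(comments_per_step("comments.csv", ids))
```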

The next step is to use the scripts I have made to build an initial dashboard using Shiny.

Lin:

This week I completed two tasks: importing the MySQL data to a remote server, and fetching course data from FutureLearn.

The first one wasn't very difficult because I already had a script that imports the data into localhost; I just needed to change its arguments so that it imports into the remote server instead. However, the script didn't work at first because I had some misunderstandings about how MySQL connections work. The MySQL server only accepts local connections, so if I want to connect to the remote server, I have to create an SSH tunnel and bind it to localhost. Fortunately, there is an external module, sshtunnel, which makes this easy; so far it works without error.
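A minimal sketch of this set-up, assuming the sshtunnel and PyMySQL packages; the host names, credentials and table name below are placeholders.

```python
import pymysql
from sshtunnel import SSHTunnelForwarder

# Open an SSH tunnel so that the remote MySQL server appears on a local port.
with SSHTunnelForwarder(
        ("remote.example.org", 22),               # SSH server (placeholder)
        ssh_username="ssh_user",
        ssh_password="ssh_password",
        remote_bind_address=("127.0.0.1", 3306),  # MySQL as seen from the remote host
) as tunnel:
    conn = pymysql.connect(host="127.0.0.1",
                           port=tunnel.local_bind_port,  # forwarded local port
                           user="db_user",
                           password="db_password",
                           database="mooc")
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM enrolments")
        print(cur.fetchone())
    conn.close()
```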

The second task was harder for me because of my lack of experience. The goal was to create a script that automatically downloads the course data from FutureLearn and uploads it to the MOOC Observatory at regular intervals. To accomplish this I had to write HTTP requests in Python. Given that I had never learned anything related to HTTP before, it took me a few days to build up some basic knowledge. Currently, I am waiting for an admin account because I need to analyse the admin webpage. Additionally, I need to decide on a suitable update period, depending on the web server.
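The real endpoints need an authenticated admin session, so the sketch below only shows the overall shape with the requests library: the login and export URLs are placeholders, and the update interval is still to be decided.

```python
import time
import requests

LOGIN_URL = "https://example.org/sign-in"           # placeholder, not the real endpoint
EXPORT_URL = "https://example.org/course-data.csv"  # placeholder, not the real endpoint

def fetch_course_data(email, password, out_path):
    """Log in once, then download the latest course data csv."""
    with requests.Session() as session:
        session.post(LOGIN_URL, data={"email": email, "password": password})
        response = session.get(EXPORT_URL)
        response.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(response.content)

if __name__ == "__main__":
    while True:
        fetch_course_data("admin@example.org", "secret", "course-data.csv")
        time.sleep(24 * 60 * 60)  # re-download once a day; interval still to be decided
```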

I think our current progress is good and I believe we will be able to finish our project on time. Hopefully nothing will go wrong in the near future. I will also try my best on this project in the following weeks.

MOOC Visualisation Interns Week 4 Update

Lubo:

Last week I wrote a summary blog post of the first four weeks of the internship but it never got used so I am going to use it for this week’s post.

First four weeks summary:

My development work can be divided into two categories: data analysis with Python scripts and data visualisation with Google Charts.

Data analysis scripts

At the beginning of the second week we were provided with a set of csv files containing the latest data, at the time, from the Developing Your Research Project MOOC. Based on the analysis tasks I was given, I started work on Python scripts that filter the raw data and produce basic visualisations. To help me figure out the data manipulation and filtering process, I first implemented it in LibreOffice Calc and then tried to recreate it in code. I came to realise that the analysis mostly required pivoting the data in some way, so I researched the best tools for doing that in Python. In the end I decided to use the pandas library, as it seemed to be the standard across the data science community and provides R-like functionality in Python. The easiest way of installing it was through the Anaconda Python distribution, which comes with a set of essential libraries for data analysis, including matplotlib, which I used for simple visualisations.
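As an example of the kind of pivot involved, the sketch below counts distinct learners per step and week with pandas; the file name and column names are assumptions about the csv files rather than the actual FutureLearn schema.

```python
import pandas as pd

activity = pd.read_csv("step-activity.csv")  # placeholder file name

# One row per step, one column per week, counting distinct learners who visited that step.
pivot = pd.pivot_table(activity,
                       index="step",
                       columns="week",
                       values="learner_id",
                       aggfunc=pd.Series.nunique,
                       fill_value=0)
print(pivot.head())
```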

The following is a list of the scripts I have developed paired with a short description of their functionality:

day_analysis.py – for each day of the course, plots the number of unique learners who have visited a step for the first time (see the sketch after this list)
time_analysis.py – same as the day analysis but plots the data by hour of the day
reply_analysis.py – for each step of the course, plots the percentage of replies to comments
enrollment.py – for each day of the course, plots the total number of enrolled students
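For illustration, here is a stripped-down sketch in the spirit of day_analysis.py; the column names (learner_id, step, first_visited_at) are placeholders for whatever the real csv files use.

```python
import pandas as pd
import matplotlib.pyplot as plt

activity = pd.read_csv("step-activity.csv", parse_dates=["first_visited_at"])

# Keep each learner's first visit to each step, then count learners per day.
first_visits = (activity.sort_values("first_visited_at")
                        .drop_duplicates(subset=["learner_id", "step"]))
per_day = first_visits.groupby(first_visits["first_visited_at"].dt.date)["learner_id"].nunique()

per_day.plot(kind="bar", title="Learners visiting a step for the first time, by day")
plt.xlabel("Day of course")
plt.ylabel("Unique learners")
plt.tight_layout()
plt.show()
```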

All of these scripts can be found on our GitHub repo at https://github.com/Lubo-93/MOOC-Viz-Scripts (note that they will not work without the data sets, which are not provided in the repo).

As long as the data format of the original csv files doesn't change, these scripts will be able to filter and visualise new data as it is supplied. Since most of the csv files are similar in structure and producing the visualisations requires pivoting, not many changes to the code are needed to adapt the scripts to different analysis scenarios. However, in future work the scripts could be generalised into a single script that manipulates the data according to user-supplied parameter values. This would be beneficial for the final dashboard as well. Additionally, all the scripts can export their pivots to JSON, but further work is needed on correct formatting.
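One possible direction for that JSON export: pandas can serialise a pivot directly, and the orient argument controls the layout, which is probably where the remaining formatting work lies. The toy values below are made up.

```python
import pandas as pd

# Toy pivot standing in for the ones produced by the scripts.
pivot = pd.DataFrame({"visits": [10, 25, 40]},
                     index=["step 1.1", "step 1.2", "step 1.3"])

pivot.to_json("pivot_split.json", orient="split")      # index, columns and data kept separate
pivot.to_json("pivot_records.json", orient="records")  # one JSON object per row (index dropped)
```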

Data visualisation

As far as data visualisation goes, I decided to use Google Charts because of its simple API and its dashboard feature, which allows you to bind multiple charts and filter controls so that they all respond together when the data or a filter is modified. I learned how to develop with Google Charts during WAIS Fest, for which I made a dashboard with different chart types for UK road traffic data. Although it was not completely finished, the experience taught me how to work with Google Charts and I also became aware of some of its limitations. For example, it doesn't let you specify the regions displayed on its geo maps (e.g. the UK map only shows England, Wales, Scotland and Northern Ireland; you can't include any more specific administrative regions). However, I discovered a workaround using Highmaps: it allows you to specify regions with GeoJSON-encoded strings, or you can make completely custom maps (I successfully tried both, although using GeoJSON proved to be really slow). With the skills I gathered from WAIS Fest I developed a dashboard that visualises course activity and enrolment data with multiple filtering options.

Lin:

This week I continued with the jobs I had left unfinished from last week. I changed the table structure and used other ways to import csv files into MySQL. Currently, it seems to work well and takes less time. After a discussion with Lubo and Manuel, I decided to use this version for the time being.

Besides importing efficiency, fetching data quickly is another factor I need to consider. MySQL allows us to set up an index to accelerate searching, but inserting data then takes more time because MySQL has to update the index for each row. So there is a balance we need to decide on, as illustrated below.
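A small, hypothetical illustration of that trade-off using PyMySQL; the table and column names are placeholders, not our actual schema.

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="db_user",
                       password="db_password", database="mooc")
with conn.cursor() as cur:
    # The index speeds up lookups such as "all activity for one learner"...
    cur.execute("CREATE INDEX idx_activity_learner ON step_activity (learner_id)")
    # ...but every subsequent INSERT also has to update the index, so bulk imports get slower.
    cur.execute("INSERT INTO step_activity (learner_id, step) VALUES (%s, %s)",
                ("learner-123", "1.4"))
conn.commit()
conn.close()
```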

After dealing with MySQL, we started to learn a new programming language for data analysis: R. R is easy to learn and use, and compared with Python it takes less time to work through the same data. I studied all the chapters in the R online tutorial, and I am now familiar with the syntax and have learned about some quite useful and interesting features of R. I also tried converting my Python scripts into R and compared the two; I think R works better. During the following week, I will continue my research on R.