I find this question fascinating. I believe we have to get to a place where learning interfaces are more natural and encourage more dynamic interaction. For me, this means more video and audio. The process of typing and clicking is simply too clunky and adds too much extraneous cognitive load to be dynamic. We already have fairly good voice recognition systems that can convert spoken words into text, and this type of technology is going to be important in creating learning environments where the learner can concentrate more on learning and less on operating the technology. For example, right now my brain wants to move much faster than my fingers, and because my fingers are cold, I keep tripping over my words. It makes me want to give up and stop typing.
David Kelley (Kelley, David: Changes in Interaction Design, 2003) talks about developing technologies that interact with us, using sensors and affecters, to create experiences and services. We have come a long way since 2003, but there is still a lot of development to do. For example, for me to access this course in Moodle, I have to turn on my computer, open a browser, find the site, log in, and then click a few times to open the page. If I am studying and need to go back and reference something in the course, this is extraneous overhead. An interface that really works with us as humans should be as simple as saying "Open my HCI course," and having the course open.
So, how do we develop these interfaces? Bill Atkinson talked about how user interfaces get better when designers watch people use them (Bill Atkinson: Pull Down Menus, 2003). In his demonstration of early pull-down menus, he showed how designers could see users struggle with menus that overlapped or got lost at the bottom of the page. I also think we need to watch how people interact in real life. If, in the classroom, we sit around a table and have a discussion, then we need to design an online discussion space that functions like sitting around that table. One possibility could be a video camera that follows you within a range, so that when you sit back or lean to the right, you don't fall off the screen. Having the ability to autosense interaction could also be important. For example, if one person is talking, the display could automatically enlarge their video window for all other participants to see. Of course, this kind of automation could create problems with one person dominating the discussion.

There are still many limitations to technology, and in particular to video and audio technology. One of the major limitations is insufficient bandwidth to handle multiple streams of video. But technologies are improving: bandwidth is increasing, and compression algorithms are getting better. It will be interesting to see how we can incorporate these technologies into a dynamic learning environment.
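To make the autosense idea a little more concrete, here is a minimal sketch of how such a display might decide which video window to enlarge. Everything here is hypothetical (the `Participant` class, the function names, the 0.2 noise threshold are all my own illustration, not any real conferencing API): the display picks whichever participant currently has the loudest smoothed audio level, as long as it is above a noise floor, and gives that person the large tile.

```python
# Hypothetical sketch of "autosense" active-speaker layout.
# Participant, pick_active_speaker, and layout are illustrative names,
# not part of any real video-conferencing library.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    audio_level: float  # smoothed microphone energy, 0.0 to 1.0

def pick_active_speaker(participants, threshold=0.2):
    """Return the participant with the loudest audio above a noise
    threshold, or None if everyone is quiet."""
    loudest = max(participants, key=lambda p: p.audio_level, default=None)
    if loudest is None or loudest.audio_level < threshold:
        return None
    return loudest

def layout(participants, threshold=0.2):
    """Map each participant's name to a tile size: 'large' for the
    active speaker, 'small' for everyone else."""
    speaker = pick_active_speaker(participants, threshold)
    return {p.name: ("large" if p is speaker else "small")
            for p in participants}
```

For example, `layout([Participant("Ana", 0.7), Participant("Ben", 0.1)])` would enlarge Ana's window. The domination problem mentioned above could be addressed in a sketch like this by capping how long any one participant can hold the large tile before the layout rotates.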
Atkinson, Bill: Pull Down Menus. (2003).
Kelley, David: Changes in Interaction Design. (2003).