ASWEC Day 3 (SE Education Track), Keynote, “Teaching Gap: Where’s the Product Gene?” (#aswec2014 #AdelED @jmwind)
Posted: April 9, 2014
Today’s speaker is Jean-Michel Lemieux, the VP of Engineering for Atlassian, opening the Education track for the ASWEC Conference. (I’m the track chair so I can’t promise an unbiased report of the day’s activities.) Atlassian has a post-induction reprogramming idea where they take in graduates and then get people to value products over software – it’s not about what’s in the software, it’s about who is going to be using it. The next thing is to value experiences over functionality.
What is the product “gene” and can we teach it? Atlassian has struggled with this, despite having hired good graduates, because those graduates were a bit narrow and focused on individual features rather than the whole product. Jean-Michel spoke about the “Ship-it” event where you have to write a product in 24 hours and then a customer comes and picks what they would buy.
Jean-Michel is proposing the addition of a new degree – to add a product engineering course or degree. Whether it’s a 1 year or 4 year is pretty much up to the implementers – i.e. us. EE is about curvy waves, Computer Engineering is about square waves, CS is about programs, SE is about processes and systems, and PE is about product engineering. PE still requires programming and overlaps with SE. Atlassian’s Vietnam experience indicates that teaching the basics earlier will be very helpful: algorithms, data structures, systems admin, programming languages, compilers, storage and so on. Atlassian wants the basics in earlier here as well (regular readers will be aware of the new digital technologies curriculum but Jean-Michel may not be aware of this).
What is Product Engineering about? Customers, and desirable software built by a team as part of an ecosystem that functions for years. This gets away from the individual, mark-oriented, short-term focus that so many of our existing courses have (and of which I am not a great fan). From a systems thinking perspective, we can look at the customer journey. If people are using your product then they’re going through a lifecycle with your product.
Atlassian have a strong culture of exposure and presentation: engineers are regularly explaining problems, existing solutions and demonstrating understanding before they can throw new things on top. Demoing is a very important part of Atlassian culture: you have to be able to sell it with passion. Define the problem. Tell a story. Make it work. Sell with passion.
There’s a hypothesis-driven development approach starting from hypothesis generation and experimental design, leading to cohort selection, experiment development, measurement and analysis, and then the publishing of results. Ideally, a short experiment is going to give you a prediction of behaviour over a longer-term timeframe with a larger number of people. The results themselves have to be clearly communicated and, from what was demonstrated, associated with the experiment itself.
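To make the measurement-and-analysis step of that pipeline concrete, here’s a minimal sketch in Go – not Atlassian’s tooling, and the cohort numbers are invented – of a two-proportion z-test, the sort of calculation that decides whether an experiment’s cohorts actually behaved differently:

```go
package main

import (
	"fmt"
	"math"
)

// Cohort holds the results of one experimental arm: how many users
// were exposed, and how many completed the behaviour being measured.
type Cohort struct {
	Exposed   int
	Converted int
}

// zScore computes a two-proportion z-statistic comparing a control
// cohort against a variant cohort. |z| > 1.96 is the conventional
// threshold for significance at roughly the 95% level.
func zScore(control, variant Cohort) float64 {
	p1 := float64(control.Converted) / float64(control.Exposed)
	p2 := float64(variant.Converted) / float64(variant.Exposed)
	// Pooled proportion under the null hypothesis of no difference.
	pooled := float64(control.Converted+variant.Converted) /
		float64(control.Exposed+variant.Exposed)
	se := math.Sqrt(pooled * (1 - pooled) *
		(1/float64(control.Exposed) + 1/float64(variant.Exposed)))
	return (p2 - p1) / se
}

func main() {
	control := Cohort{Exposed: 1000, Converted: 100} // 10% baseline
	variant := Cohort{Exposed: 1000, Converted: 130} // 13% with the change
	fmt.Printf("z = %.2f\n", zScore(control, variant))
}
```

With these made-up numbers the z-score lands just above 1.96, which is the point of running the short experiment: the small cohort gives you a defensible prediction before you roll the change out to everyone.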
Atlassian have a UI review process using peer review. This has two parts: “Learn to See” and “Learn to Seek”. For “Learning to See”, the important principles are consistency, alignment, contrast and simplicity. How much can you remove, reuse and set up properly so the UI does exactly what it needs to do and no more? For “Learning to Seek”, the key aspect is “bring it forward”: bring your data forward to make things easier – you can see the date even when your calendar app is closed. (This is based on work in Microinteractions, a book that I haven’t read.) The use of language in text and error messages is also very important and part of product thinking.
No-one works alone at Atlassian and teamwork is the default. There’s a lot of team archaeology: looking at what a team has been doing for the past few years and learning from it. The Team Fingerprint shows you how a team works by looking at their commit history and bug tracking. If they reject commits, when do they do it and why? Where’s the supporting documentation and discussion? Which files are being committed or changed together? If two files are always worked on together, can we simplify this?
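The co-change part of that fingerprint is easy to sketch. This is my illustration, not Atlassian’s actual analysis, and the file names and history are invented – but counting how often pairs of files appear in the same commit is only a few lines of Go:

```go
package main

import (
	"fmt"
	"sort"
)

// coChangeCounts takes a commit history, where each commit is the
// list of files it touched, and counts how often each pair of files
// changed together -- a crude signal for hidden coupling.
func coChangeCounts(commits [][]string) map[string]int {
	counts := make(map[string]int)
	for _, files := range commits {
		// Sort a copy so each pair gets a stable, canonical key.
		sorted := append([]string(nil), files...)
		sort.Strings(sorted)
		for i := 0; i < len(sorted); i++ {
			for j := i + 1; j < len(sorted); j++ {
				counts[sorted[i]+" + "+sorted[j]]++
			}
		}
	}
	return counts
}

func main() {
	history := [][]string{
		{"parser.go", "parser_test.go"},
		{"parser.go", "parser_test.go", "lexer.go"},
		{"lexer.go"},
	}
	for pair, n := range coChangeCounts(history) {
		if n > 1 { // repeat offenders may want merging or a shared interface
			fmt.Printf("%s changed together %d times\n", pair, n)
		}
	}
}
```

A real fingerprint would fold in review rejections, bug links and discussion threads, but even this toy version surfaces the “if two files are always worked on together, can we simplify this?” question directly from the history.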
In terms of the ecosystem, Atlassian also have an API focus (as Google did yesterday) and they design for extensibility. They also believe in making tools available with a focus on determining whether the product will be open source or licensed and how the IP is going to be handled. Extensibility can be very hard because it’s a commitment over time and your changes today have to support tomorrow’s changes. It’s important to remember that extending something requires you to build a community who will use the extensions – again, communication is very important. An Atlassian platform team is done when their product has been adopted by another team, preferably without any meetings. If you’re open source then you live and die by the number of people who are actually using your product. Atlassian have a no-meeting clause: you can’t have a meeting to explain to someone why they should adopt your product.
When things last for years you have to prepare for it. You need to learn from your running code, rather than just trusting your test data. You need to validate assumptions in production and think like an “ops” person. This includes things like building in consistency checks across the board.
Where’s the innovation in this? The Atlassian approach is a little more prescriptive in some ways but it’s not mandating tools so there’s still room for the innovative approaches that Alan mentioned yesterday.
Question time was interesting, with as many (if not more) comments than questions, but there was a question as to whether the idea for such a course should be at a higher level than an individual University: such as CORE, ACDICT, EA, or ACS. It will be interesting to see what comes out of this.
Today’s keynote was given by Alan Noble, Engineering Director for Google Australia and long-term adjunct at the University of Adelaide, who was mildly delayed by Sydney traffic but this is hardly surprising. (Sorry, Sydney!) When asked to talk about Google’s Software Engineering (SE) processes, Alan thought “Wow, where do I begin?” Alan describes Google’s processes as “organic” and “changing over time” but no one label can describe an organisation that has over 30,000 employees.
So what does Alan mean by “organic”? Each team in Google is empowered to use the tools and processes that work best for them – there is no one true way (with some caveats). The process encouraged is “launch and iterate” and “release early, release often”, which many of us have seen in practice! You launch a bit, you iterate a bit, so you’re growing it piece by piece. As Alan noted, you might think that sounds random, so how does it work? There are some very important underlying commonalities. In the context of SE, you have an underlying platform and underlying common principles.
Everything is built on Google Three – Google’s third iteration of their production codebase, which also enforces certain approaches to the codebase. At the heart of Google Three is something called a package, which encapsulates a group of source files, and this is associated with a build file. Not exciting, but standard. Open Source projects are often outside: Chrome and Android are not in Google Three. Coming to grips with Google Three takes months, and can be frustrating for new hires, who can spend weeks doing code labs to get a feeling for the codebase. It can take months before an engineer can navigate Google Three easily. There are common tools that operate on this, but not that many of them and for a loose definition of “common”. There’s more than one source code control system, for example. (As a note, any third party packages used inside Google have the heck audited out of them for security purposes, unsurprisingly.) The source code system used to be Perforce by itself but it’s a highly centralised server architecture that hasn’t scaled for how Google is now. Google has a lot of employees spread around the world and this presents problems. (As a note, Sydney is the 10th largest engineering centre for Google outside of Mountain View.) In response to this scaling problem, Google have tried working with the vendor (which didn’t pan out) and have now started to produce their own source control system. Currently, the two source control systems co-exist while migration takes place – but there’s no mandated move. Teams will move based on their needs.
Another tool is a tracking tool called Buganizer which does more than track bugs. What’s interesting is that there are tools that Google use internally that we will never see, to go along with their tools that are developed for public release.
There’s a really strong emphasis on making sure that the tools have well-defined, well-documented and robust APIs. They want to support customisation, which means documentation is really important so that sound extensions and new front ends can be built. By providing a strong API, engineering teams can build a sensible front end for their team – although complete reinvention of the wheel is frowned upon and controlled. Some of the front ends get adopted by other teams, such as the Mondrian UI front-end for Buganizer. Another front end, for Google Spreadsheets, is Maestro. The API philosophy is carried from the internal tools to the external products.
Google makes heavy use of the external products they produce, such as Docs, Spreadsheets and Analytics. (See: dog food, the eating thereof.) This also allows the internal testing of pre-release and just-released products. Google engineers are slightly allergic to GANTT charts, but you can support them by writing an extension to Spreadsheets. There is a spreadsheet tool called Smartsheet that has been approved for internal use but is not widely used. Scripting over existing tools is far more common.
And now we move onto programming languages. Or should I say that we Go onto programming languages. There are four major languages in use at Google: Java, C++, Python, and Go (the Google language). Alan’s a big fan of Go and recommends it for distributed and concurrent systems. (I’ve used it a bit and it’s quite interesting but I haven’t written enough in it to make much comment.) There are some custom languages as well, including scripting languages for production tasks. Teams can use their own language of choice, although it’s unlikely to be Ruby on Rails anytime soon.
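For a flavour of why Go gets recommended for distributed and concurrent systems, here’s a minimal and entirely generic sketch (nothing Google-specific; the worker function is a stand-in for real per-item work) of the fan-out/fan-in pattern that goroutines and channels make compact:

```go
package main

import (
	"fmt"
	"sync"
)

// square stands in for whatever per-item work a concurrent system
// would do; the coordination pattern, not the arithmetic, is the point.
func square(n int) int { return n * n }

// fanOut spreads jobs across a pool of worker goroutines and gathers
// the results over a channel.
func fanOut(jobs []int, workers int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup
	// Start the worker pool: each goroutine drains the shared input
	// channel until it is closed.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- square(n)
			}
		}()
	}
	// Feed the jobs in, then close the input so workers terminate.
	go func() {
		for _, n := range jobs {
			in <- n
		}
		close(in)
	}()
	// Close the output once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()
	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(fanOut([]int{1, 2, 3, 4}, 2)) // order may vary between runs
}
```

The same shape with threads and locks in C++ or Java takes noticeably more scaffolding, which is presumably part of the appeal Alan was pointing at.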
Is letting engineers pick their language the key to Google’s success? Is it the common platform? The common tools? No. The platforms, tools and languages won’t matter if your organisational culture isn’t right. If the soil is toxic, the tree won’t grow. Google is in a highly competitive space and have to be continually innovating and improving or users will go elsewhere. The drive for innovation is the need to keep the users insanely happy. Getting the organisational settings right is essential: how do you foster innovation?
Well, how do they do it? First and foremost, it’s about producing a culture of innovation. The wrong culture and you won’t get interesting or exciting software. Hiring matters a LOT. Try to hire people who are smarter than you, are passionate and are quick learners – look for this when you’re interviewing. Senior people at Google need to have technical skills, yes, but they have to be a cultural fit. Will this person be a great addition to the team? (Culture Fit is actually something they assess for – it’s on the form.) Passion is essential: not just for software but for other things as well. If people are passionate about something, then you’d expect that this passion would flow over into other things in their lives.
Second ingredient: instead of managing, you’re unmanaging. This is why Alan is able to talk today – he’s hired great people and can leave the office without things falling apart. You need to hire technical managers as well: people who have forgotten their technical skills won’t work at Google, because managers have to provide a sounding board and be able to mentor members of the team.
The third aspect is being open to sharing information: share, share, share. The free exchange of information is essential in a collaborative environment, based on trust.
“Info sharing is power, info hoarding is impotence.” (Alan Noble)
The fourth thing is to recognise merit. It’s cool to do geeky things. Success is celebrated generously.
Finally, it’s important to empower teams to be agile and to break big projects into smaller, more manageable things. The unit of work at Google is about 3-4 engineers. Have 8 engineers? That’s two 4 person teams. What about meetings? Is face-to-face still important? Yes, despite all the tech. (I spoke about this recently.) Having a rich conversation is very high bandwidth and when you’re in the same room, body language will tell you if things aren’t going across. The 15 minute “stand up” meeting is a common form of meeting: stand up in the workplace and have a quick discussion, then break. There’s also often a more regular weekly meeting which is held in a “fun” space. Google wants you to be within 150m of coffee, food and fuel at all times to allow you to get what you need to keep going, so weekly meetings will be there. There’s also the project kick-off meeting, where the whole team of 20-30 will come together in order to break it down to autonomous smaller units.
People matter and people drive innovation. Googlers are supposed to adapt to fast-paced change and are encouraged to pursue their passions: taking their interests and applying them in new ways to get products that may excite other people. Another thing that happens is TGIF – which is now on Thursday, rather than Friday, where there is an open Q and A session with the senior people at Google. But you also need strong principles underlying all of this people power.
The common guiding principles that bring it all together need to be well understood and communicated. Here’s Alan’s list of guiding principles (the number varies by speaker, apparently.)
- Focus on the user. This keeps you honest and provides you with a source of innovation. Users may not be able to articulate what they want but this, of course, is one of our jobs: working out what the user actually wants and working out how many users want a particular feature.
- Start with problems. Problems are a fantastic source of innovation. We want to be solving real, important and big problems. There are problems everywhere!
- Experiment Often. Try things, try a lot of things, work out what works, detect your failures and don’t expose your users to any more failures than you have to.
- Fail Fast. You need to be able to tolerate failure: it’s the flip side of experimenting often. (A brief mention of Google Wave, *sniff*)
- Paying Attention to the Data. Listen to the data to find out what is and what is not working. Don’t survey, don’t hire marketing people, look at the data to find out what people are actually doing!
- Passion. Let engineers find their passion – people are always more productive when they can follow their passion. Google engineers can self-initiate a transfer to encourage them to follow their passion, and there is always the famous Google 20% time.
- Dogfood. Eat your own dogfood! Testing your own product in house and making sure that you want to use it is an essential step.
The Google approach to failure has benefited from the Silicon Valley origins of the company, with the approach to entrepreneurship and failure tolerance. Being associated with a failed start-up is not a bad thing: failure doesn’t have to be permanent. As long as you didn’t lie, cheat or steal, then you’ve gained experience. It’s not making the mistake, it’s how you recover from it and how you carry yourself through that process (hence being ethical even as the company is winding down).
To wind it all up, Google doesn’t have standard SE processes across the company: they focus on getting their organisation culture right with common principles that foster innovation. People want to do exciting things and follow new ideas so every team is empowered to make their own choices, select their own tools and processes. Launch, iterate, get it out, and don’t hold it back. Grow your software like a tree rather than dropping a monolith. Did it work? No? Wind it back. Yes? Build on it! Take the big bets sometimes because some big problems need big leaps forward: the moon shot is a part of the Google culture.
Embrace failure, learn from your mistakes and then move on.
Well, it’s the day after CSEDU and the remaining attendees are all checking out and leaving. All that remains now is lunch (which is not a minor thing in Spain) and heading to the airport. In this increasingly on-line age, the question is often asked “Why do you still go to conferences?”, meaning “Why do you still transport yourself to conferences rather than participating on-line?” It’s a pretty simple reason and it comes down to how well we can be somewhere using telepresence or electronic representations of ourselves in other places. Over the time of this conference, I’ve listened to a number of talks and spoken to a number of people, as you can see from my blog and (if you could see my wallet) the number of business cards I’ve collected. However, some of the most fruitful discussions took place over simple human rituals such as coffee, lunch, drinks and dinner. Some might think that a travelling academic’s life is some non-stop whirl of dining and fun but what is actually happening is a pretty constant round of discussion, academic argument and networking. When we are on the road, we are generally doing a fair portion of our job back home and are going to talks and, in between all of this, we are taking advantage of being surrounded by like-minded people to run into each other and build up our knowledge networks, in the hope of being able to do more and to be able to talk with people who understand what we’re doing. Right now, telepresence can let me view lectures and even participate to an extent, but it cannot give me those accidental meetings with people where we can chat for 5 minutes and work out if we should be trying to work together. 
Let’s face it, if we could efficiently send all of the signals that we need to know if another human is someone we want to work with or associate with, we’d have solved this problem for computer dating and, as I understand it, people are still meeting for dinners and lunch to see if what was represented on line had any basis in reality. (I don’t know about modern computer dating – I’ve been married for over 15 years – so please correct me if I’m wrong.)
Of course, for dating, most people choose to associate with someone who is already in their geographical locale but academics don’t have that luxury because we don’t tend to have incredible concentrations of similar universities and research groups in one place (although some concentrations do exist), and a conference provides us with a valuable opportunity to walk our raw ideas out into company and see what happens. There is also a lot to be said for the “defusing” nature of a face-to-face meeting, when e-mail can be so abrupt and video conferencing can provide quite jagged and harsh interactions, made more difficult by network issues and timezone problems. That is another good reason for conferences: everyone is away and everyone is in the same timezone. The worst conference to attend is one that is in your home town, because you will probably not take time off work, you’ll duck into the conference when you have a chance – and this reduces the chances of all of the good things we’ve talked about. It’s because you’re separated from your routine that you can have dinner with academic strangers or hang around after coffee to spend the time to talk about academic ideas. Being in the same timezone also makes it a lot easier, as multi-continent video conferences often select times based on what is least awful for everyone, so Americans are up too early, Australians are up too late, and the Europeans are missing their lunches. (Again, don’t mess with lunch.)
It’s funny that the longer I stay an academic, the harder I work at conferences but it’s such a good type of hard work. It’s productive, it’s exciting, it’s engaging and it allows us to all make more progress together. I’ve met some great people here and run into some friends, both of which make me very happy. It’s almost time to jump back on a plane and head home (where I turn around in less than 14 hours to go and run another conference) but I feel that we’ve done some good things here and that will lead to better things in the future.
It’s been a blast, CSEDU, let’s do it again. ¡Buenos días!
CSEDU, Day 3, Final Keynote, “Digital Age Learning – The Changing Face of Online Education”, (#csedu14 #AdelED @timbuckteeth)
Posted: April 4, 2014
Now, I should warn you all that I’ve been spending time with Steve Wheeler (@timbuckteeth) and we agree on many things, so I’m either going to be in furious agreement with him or I will be in shock because he suddenly reveals himself to be a stern traditionalist who thinks blended learning is putting a textbook in the Magimix. Only time will tell, dear reader, so let’s crack on, shall we? Steve is from the Plymouth Institute of Education, conveniently located in Plymouth University, and is a ferocious blogger and tweeter (see his handle above).
Erik introduced Steve by saying that Steve didn’t need much introduction and noted that Steve was probably one of the reasons that we had so many people here on the last day! (This is probably true, the afternoon on the last day of a European conference is normally notable due to the almost negative number of participants.)
“When you’re a distance educator, the back of the classroom can be thousands of miles away” (Steve Wheeler)
Steve started with the idea that on-line learning is changing and that his presentation was going to be based on the idea that the future will be richly social and intensely personal. Paradoxical? Possibly but let’s find out. Oh, look, an Einstein quote – we should have had Einstein bingo cards. It’s a good one and it came with an anecdote (which was a little Upstairs Downstairs) so I shall reproduce it here.
“I never teach my students. I only provide the conditions in which they can learn.” Albert Einstein
There are two types of learning: shallow (rote) learning that we see when cramming, where understanding, if present at all, is negligible, and then there is fluid intelligence, the deeper kind of learning that draws on your previous learning and your knowledge structures. But what about strategic learning, where we switch quickly between the two? Poor pedagogy can suppress these transitions and lock people into one spot.
There are three approaches here: knowledge (knowing that, which is declarative), wisdom (knowing how, which is procedural) and transformation (knowing why, which is critical). I’ve written whole papers about the missing critical layer so I’m very happy to see Steve saying that the critical layer is the one that we often do the worst with. This ties back into Bloom’s taxonomy, where knowledge is cognitive, wisdom is application and transformation is analysis and evaluation. Learning can be messy but it’s transformative and it can be intrinsically hard to define. Learning is many things – sorry, Steve, not going to summarise that whole sentence.
We want to move through to the transformational stage of learning.
What was the first attempt at distance learning? St Paul’s name was tossed out, as was Moses’, but St Paul was credited with the first correspondence course offered. (What was the assessment model, I wonder, for Epistola?) More seriously, it was highly didactic and one-way, and it was Pitman who established a two-way correspondence course that was both laborious and asynchronous – but it worked. Then we had television and, in 1968, the Stanford Instructional Television Network popped up. In 1970, Steve saw an example of video conferencing that had previously been confined to Star Trek. I was around in the early 70s and we were all agog about the potential of the future – where is my moon base, by the way? But the tools were big and bulky – old video cameras were incredibly large and ridiculously short-lived in their battery life… but it worked! Then people saw uses for the relationship between this new technology and pedagogy. Reel-to-reel, copiers, projectors, videos: all of these technologies were effective for their teaching uses at the time.
Of course, we moved on to computer technology, including the BBC Model B (hooray!) and the reliable but hellishly noisy dot matrix printer. The learning from these systems was very instructional, using text and a very simplistic multiple-choice question approach. Highly behaviouristic, but this is how things were done and the teaching approach matched the technology. Now, of course, we’ve moved to tablet-based, on-line gaming environments with non-touch technologies such as Kinect, but the principle remains the same: over the years we’ve adapted technology to pedagogy.
But it’s only now that, thanks to Sir Tim Berners-Lee, we have the World Wide Web and on-line learning is available to everybody, where before it was sort-of available but nowhere near as scalable. Now, for our sins, we have Learning Management Systems, the most mixed of blessings, and we still have to ask what are we using them for and how are we using them? Is our pedagogy changing? Is our connection with our students changing? Illich (1972) criticised educational funnels that had a one-directional approach and instead advocated educational webs that allow the transformation of each moment of living into one of learning, sharing and caring.
What about the Personal Learning Environment (PLE)? This is the interaction of tools such as blogs, Twitter and e-portfolios; then add in the people we interact with, and then the other tools that we use – and this would be strongly personal to an individual. If you’ve ever tried to use your partner’s iPad, you know how quickly personalisation changes your perception of a tool! Wheeler and Malik (2010) discuss the PLE that comprises the personal learning network and personal web tools, with an eye on more than the classroom, but as a part of life-long learning. Steve notes (as Stephen Heppell did) that you may as well get students to use their PLEs in the open because they’ll be using them covertly otherwise: the dreaded phone under the table becomes a learning tool when it’s on top of the table. Steve discussed the embedded MOOC that Hugh discussed yesterday to see how on-line and f2f students can benefit from each other.
In the late ’80s, the future was “multi-media” and everything had every other medium jammed into it (and they don’t like it up ’em), and then the future was going to converge on the web. Internet take-up is increasing: social, political and economic systems change incrementally, but technology changes exponentially. Steve thinks the future is smart, mobile and pervasive, due to the miniaturisation and capability of new devices. If you have WiFi then you have the world.
“Change is not linear, it’s exponential.” Kurzweil
Looking at the data, there are now more people in the world with mobile phones than people without, although some people have more than one. (Someone in the audience had four, perhaps he was a Telco?) Of course, some reasons for this are because mobile phones replace infrastructure: there are entire African banks that run over mobile networks, as an example. Given that we always have a computer in our pocket, how can we promote learning everywhere? We are using these all the time, everywhere, and this changes what we can do because we can mix leisure and learning without having to move to fixed spaces.
Steve then displayed the Intel infographic “What Happens In an Internet Minute”, and it’s scary to see how much paper is lagging these days. What will the future look like? What will future learning look like? If we think exponentially then things are changing fast. There is so much content being generated, there must be something that we can use (DOGE photos and Justin Bieber videos excepted) for our teaching and learning. Given that 70% of what we learn is informal and outside of the institution, this is great! But we need to be able to capture this, and this means that we should build a personal learning network, because trying to drink down all that content by yourself exceeds anyone’s ability! By building a network, we build a collection of filters and aggregators that are going to help us to bring sense out of the chaos. Given that nobody can learn everything, we can store our knowledge in other people and know where to go when we need that knowledge. This is a plank of connectivist theory, leading into paragogy, where we learn from each other. This also leads us to distributed cognition, where we think across the group (a hive mind, if you will) but, more simply, you learn from one person, then another, and it becomes highly social.
Steve showed us a video on “How have you used your own technology to enhance your learning”, which you can watch on YouTube. Lucky old 21st Century you! This is a recording of some of Steve’s students answering the question and sharing their personal learning networks with us. There’s an interesting range of ideas and technologies in use so it’s well worth a look. Steve runs a Twitter wall in his classroom and advertises the hashtag for a given session, so questions, challenges and comments go out on to that board, which allows Steve to see them but also retweet them to his followers, enabling the exponential spread that we would want in a personal learning network. Students excel when they harness the tools they need to solve their problems.
Steve showed us a picture of about 10,000 Germans taking pictures of then President-elect Barack Obama, who was speaking in Berlin – a historical moment that people wanted to share with other people. This is an example of the ubiquitous connection that we now enjoy and, in many ways, take for granted. It is a new way of thinking and it causes a lot of concern for people who want to stick to previous methods. (There will come a time when a paper exam for memorised definitions will make no sense because people have computers connected to their eyes – so let’s look at asking questions in ways that always require people to actually use their brains, shall we?) Steve then showed us a picture of students “taking notes” by taking pictures of the whiteboard: something that we are all very accustomed to now. Yes, some teachers are bothered by this, but why? What is wrong with instantaneous capture versus turning a student into a slow organic photocopying machine? Let’s go to a Papert quote!
“I am convinced that the best learning takes place when the learner takes charge,” Seymour Papert
“We learn by doing“, Piaget, 1960
“We learn by making“, Papert, 1960.
Steve alluded to constructionist theory and pointed out how much we have to learn about learning by making. He, like many of us, doesn’t subscribe to generational or digital native/immigrant theory. It’s an easy way of thinking but it really gets in the way, especially when it makes teachers fearful of weighing in because they feel that their students know more than they do. Yes, they might, but there is no grand generational guarantee. It’s not about your age, it’s about your context. It’s about how we use the technology, it’s not about who we are and some immutable characteristics that define us as in or out. (WTF does not, for the record, mean “Welcome to Facebook”. Sorry, people.) There will be cultural differences but we are, very much, all in this together.
Steve showed us a second video, on the Future of Publishing, which you can watch again! Some of you will find it confronting that Gaga beats Gandhi but cultures change and evolve - and you need to watch to the end of the video because it’s really rather clever. Don’t stop halfway through! As Steve notes, it’s about perception and, as I’ve noted before, I’m pretty sure that people put people into the categories that they were already thinking about – it’s one of the reasons I have such a strong interest in grounded theory. If you have a “Young bad” idea in your head then everything you see will tend to confirm this. Perception and preconception can heavily interfere with each other but using perception, and being open to change, is almost always a better idea.
Steve talked about Csíkszentmihályi’s Flow, the zone you’re in when the level of challenge roughly matches your level of skill and you balance anxiety and boredom. Then, for maximum Nick points, he got onto Vygotsky’s Zone of Proximal Development, where we build knowledge better and make leaps when we do it with other people, using the knowledgeable other to scaffold the learning. Steve also talked about mashing them up, and I draw the reader back to something I wrote on this a while ago on Repenning’s work.
We can do a lot of things with computers but we don’t have to do all the things that we used to do and slavishly translate them across to the new platform. Waters (2011) talks about new learners: learners who are more self-directed and able to make more and hence learn more.
There are many digital literacies: social networking, privacy management, identity management, creating content, organising content, reusing and repurposing, filtering and selection, self presentation, transliteracy (using any platform to get your ideas across). We build skills, that become competencies, that become literacies and, finally, potentially become masteries.
Steve finished by discussing the transportability of skills, using driving in the UK and the US as an example. The skill is pretty much the same but safe driving requires a new literacy when you make a large contextual change. Digital environments can be alien environments so you need to be able to take the skills that you have now and be able to put them into the new contexts. How do you know that THIS IS SHOUTING? It’s a digital literacy.
Steve presented a quote from Socrates, no, Socrates, no, Plato:
“Knowledge that is acquired under compulsion obtains no hold on the mind.“
and used the rather delightful neologism “Darwikianism” to illustrate the evolving improvement of on-line materials over time. (And illustrated it with humour and pictures.) Great talk with a lot of content! Now I have to go and work on my personal learning network!
CSEDU, Day 3, “Through the Lens of Third Space Theory: Possibilities For Research Methodologies in Educational Technologies”, (#csedu14 #AdelEd)Posted: April 3, 2014
This talk was presented by Kathy Jordan and Jennifer Elsden-Clifton, both from RMIT University. They discussed educational technologies through another framework that they have borrowed from another area: third space theory. This allows us to describe how teachers and students take on complex roles in their activities.
A lot of educational research is focused on the use of technology and can be rather theory light (no arguments from me), leading to technological evangelism that is highly determinist. (I’m assuming that the speakers mean technological determinism, which is the belief that it’s a society’s technology that drives its culture and social structures, after Veblen.) The MOOC argument was discussed again. Today, the speakers were planning to offer an alternative way to think about technology and use of technology. As always, don’t just plunk technology down in the classroom and expect it to achieve your learning and teaching goals. Old is not always bad and new is not always good, in effect. (I often say this and then present the reverse as well. Binary thinking is for circuits.)
“The real voyage of discovery consists not in seeing new landscapes, but in having new eyes.” (Proust, cited in Canfield, Hanson and Zlkman, 2002)
“With whose eyes were my eyes crafted?” (Castor, 1991)
Basically, we bring ourselves to the landscape and have to think about why we see what we’re seeing. The new methodology proposed moves away from a simplistic, techno-centric approach and towards Third Space Theory. Third Space Theory is used to explore and understand the spaces in between two or more discourses, conceptualisations or binaries (Bhabha, 1994). Thirdspace is thus a “come together” space (Soja, 1996) to combine the first and second spaces and then enmesh the binaries that characterise these spaces. This also reduces the implicit privileging of one conceptual space over another.
Conceptualisations of the third space include bridges, navigational spaces and transformative spaces. Interestingly, from an editorial perspective, I find that the binary notion of MOOC good/MOOC bad, to which we often devolve, is one of the key problems in discussing MOOCs because it often forces people into responding to a straw man, and I think that this work on Thirdspaces is quite strong without having to refer to a perceived necessity for MOOCs.
Thirdspace theory is used across a variety of disciplines at the moment. Firstspace in our context could be face-to-face learning, the second space is “on-line learning”, and the speakers argue that this binary classification is inherently divisive. Well, yes it is, but this assumes that you are not perceiving these as naturally overlapping when we consider blended learning, which we’ve really had as a concept since 1999. There are definitely problems when people move through f2f and on-line as if they are exclusive binary aspects of some educational Janus but I wonder how much of that is lack of experience and exposure rather than a strict philosophical structure – but there is no doubt that thinking about these things as a continuum is beneficial and if Thirdspace theory brings people to it – then hooray!
(As Hugh noted yesterday, MOOC got people interested in on-line learning, which made it worth running MOOCs. Then, hooray!)
A lot of the discussion of technology in education is a collection of Shibboleths and “top of the head” solutions that have little maturity or strategy behind them, so a new philosophical approach to this is most definitely welcome and I need to read up more on Thirdspace, obviously.
The speakers provided some examples, including some learning fusion around Blackboard Collaborate and the perceived inability of pre-service teachers to be able to move personal technology literacy into their new workplace, due to fear. So, in the latter case, Thirdspace allowed an analysis of the tensions involved and to assist the pre-service teachers in negotiating the “unfamiliar terrain” (Bhabha, 1992) of sanctioned technology frameworks in schools. (An interesting example was having to hand-write an e-mail first before being allowed to enter it electronically – which is an extreme sanctioning of the digital space.)
I like the idea of the lens that Thirdspace provides but wonder whether we are seeing the liminal state that we would normally associate with a threshold concept. Rather than a binary model, we are seeing a layered model, where the between is neither stable nor clearly understood as it is heavily personalised. There is, of course, no guarantee that having a skill in one area makes it transferable to another because of inherent contextual differences (hang on, have we gone neo-Piagetian?!).
Anything that removes the potential classification of any category as a lower value or undesirable other is a highly desirable thing for me. The notion that transitional states, however we define them, are a necessary space that occurs between two extremes, whether they are dependent or opposing concepts, strongly reduces the perceived privilege of the certainty that so many people confuse with being knowledgeable and informed. Our students, delightful dualists that they are, often seek black/white dichotomies and it is part of our job to teach them that grey is not only a colour but an acceptable colour.
I think that labelling the MOOC discussion as a techno-determinist and shallow argument doesn’t really reflect the maturity of the discussion in the contemporary MOOC space and is a bit of a dismissive binary, if I can be so bold. We did discuss this in the questions and the speakers agreed that the discussion of MOOCs has matured and was definitely in advance of the rather binary and outmoded description presented in the first keynote that I railed against. Yes, MOOCs have been presented by evangelists and profit makers as something but the educational community has done a lot of work to refine this and very few of the practitioners I know who are still involved in MOOCs are what I would call techno-determinists. Techno-utopians, maybe, techno-optimists, often, but techno-skeptics and serious, serious educational theorists who are also techno-optional, just as often.
The other potential of Third Space Theory is that it “provides a framework for destabilisation” and moving beyond past patterns rather than relying on old binary conceptualisations of new/old good/bad updated/outmoded. Projecting any single method to everything is always challenging and I suspect it’s a little bit of a hay hominid but the resulting questions clarified that the potential of Thirdspace is in being capable of deliberately rejecting staid and binary thinking, without introducing a new mode of privilege on to the new Thirdspace model. I’m not sure that I agree with all of the points here but I certainly have a lot to think about.
CSEDU, Day 2, Invited Talk, “How are MOOCs Disrupting the Educational Landscape?”, (#CSEDU14 #AdelEd)Posted: April 2, 2014
I’ve already spent some time with Professor Hugh Davis, from Southampton, and we’ve had a number of discussions already around some of the matters we’re discussing today, including the issue when you make your slides available before a talk and people react to the content of the slides without having the context of the talk! (This is a much longer post for another time.) Hugh’s slides are available at http://www.slideshare.net/hcd99.
As Hugh noted, this is a very timely topic but he’s planning to go through the slides at speed so I may not be able to capture all of it. He tweeted his slides earlier, as I noted, and his comment that he was going to be debunking things earned him a minor firestorm. But, to summarise, his answer to the question is “not really, probably” but we’ll come back to this. For those who don’t know, Southampton is about 25,000 students, Russell Group and Top 20 in the UK, with a focus on engineering and oceanography.
Back in 2012, the VC came back infused with the desire to put together a MOOC (apparently, Australians talked them into it – sorry, Hugh) and in December, 2012, Hugh was called in and asked to do MOOCs. Those who are keeping track will know that there was a lot of uncertainty about MOOCs in 2012 (and there still is) so the meeting called for staff to talk about this was packed – in a very big room. But this reflected excitement on the part of people – which waving around “giant wodges” of money to do blended learning had failed to engender, interestingly enough. Suddenly, people wanted to do blended learning, as long as you called it a MOOC. FutureLearn was formed and things went from there. (FutureLearn now has a lot of courses in it but I’ve mentioned this before. Interestingly, Monash is in this group so it’s not just a UK thing. Nice one, Monash!)
In this talk, Hugh’s planning to intro MOOCs, discuss the criticism, look at Higher Ed, ask why we are investing in MOOCs, what we can get out of it and then review the criticisms again. Hugh then defined what the term MOOC means: he defined it as a 10,000+, free and open registration, on-line course, where a course runs at a given time with a given cohort, without any guarantee of accreditation. (We may argue about this last bit later on.) MOOCs are getting shorter – with 4-6 weeks being the average for a MOOC, mostly due to fears of audience attrition over time.
The dreaded cMOOC/xMOOC timeline popped up from Florida Institute of Technology’s History of MOOCs:
and then we went into the discussion of the stepped xMOOC with instructor led and a well-defined and assessable journey and the connectivist cMOOC where the network holds the knowledge and the learning comes from connections. Can we really actually truly separate MOOCs into such distinct categories? A lot of xMOOC forums show cMOOC characteristics and you have to wonder how much structure you can add to a cMOOC without it getting “x”-y. So what can we say about the definition of courses? How do we separate courses you can do any time from the cohort structure of the MOOC? The synchronicity of human collision is a very connectivisty idea which is embedded implicitly in every xMOOC because of the cohort.
What do you share? Content or the whole course? In MOOCs, the whole experience is available to you rather than just bits and pieces. And students tend to dip in and out when they can, rather than just eating what is doled out, which suggests that they are engaging. There are a lot of providers, who I won’t list here, but many of them are doing pretty much the same thing.
What makes a MOOC? Short videos, on-line papers, on-line activities, links to external resources, discussions and off-platform activity – but we can no longer depend upon students being physical campus students and thus we can’t guarantee that they share our (often privileged) access to resources such as published journals. So Southampton often offer précis of things that aren’t publicly available. Off-platform activity is an issue for people who are purely on-line.
If you have 13,000 people you can’t really offer to mark all their essays so assessment has to depend upon the self-motivated students and they have to want to understand what is going on – self evaluation and peer review have to be used. This is great, according to Hugh, because we will have a great opportunity to find out more about peer review than we ever have before.
What are the criticisms? Well, they’re demographically pants – most of the students are UK (77%) and then a long way down US (2%), with some minor representation from everywhere else. This isn’t isolated to this MOOC. 70% of MOOC users come from the home country, regardless of where it’s run. Of course, we also know that the people who do MOOCs also tend to have degrees – roughly 70% from the MOOCS@Edinburgh2013 Report #1. These are serial learners (philomaths) who just love to learn things but don’t necessarily have the time or inclination (or resources) to go back to Uni. But for those who register, many don’t do anything, and those who do drop out at about 20% a week – more weeks, more drop-out. Why didn’t people continue? We’ll talk about this later. (See http://moocmoocher.wordpress.com) But is drop-out a bad thing? We’ll come back to this.
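To see what that 20%-a-week figure does over a typical course, here’s a back-of-the-envelope sketch (my own illustration, not Hugh’s model – it assumes the drop-out rate stays constant, which real cohorts won’t):

```python
def remaining(cohort, weeks, weekly_dropout=0.20):
    """Learners still active after `weeks` weeks of constant attrition."""
    return cohort * (1 - weekly_dropout) ** weeks

# A 6-week course starting with 10,000 active learners:
for week in range(7):
    print(week, round(remaining(10_000, week)))
# By week 6, roughly a quarter of the starting cohort is still active.
```

Which also illustrates why MOOCs are getting shorter: every extra week compounds the loss.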
Then we have the pedagogy, where we attempt to put learning design into our structure in order to achieve learning outcomes – but this isn’t leading edge pedagogy and there is no real interaction between educators and learners. There are many discussions, and they happen in volume, but this discussion is only over 10% of the community, with 1% making the leading and original contributions. 1% of 10-100,000 can be a big number compared to a standard class room.
What about the current Higher Ed context – let’s look at “The Avalanche Report“. Basically, the education business is doomed!!! DOOOMED, I tell you! which is hardly surprising for a report that mostly originates from a publishing house who wants to be a financially successful disruptor. Our business model is going to collapse! We are going to have our Napster moment! Cats lying down with dogs! In the HE context, fees are going up faster than the value of degree (across most of the developed world, apparently). There is an increased demand for flexibility of study, especially for professional development, in the time that they have. The alternative educational providers are also cashing up and growing. With all of this in mind, on-line education should be a huge growing market and this is what the Avalanche report uses to argue that the old model is doomed. To survive, Unis will have to either globalise or specialise – no room in the middle. MOOCs appear to be the vanguard of the on-line program revolution, which explains why there is so much focus.
Is this the end of the campus? It’s not the end of the pithy slogan, that’s for sure. So let’s look at business models. How do we make money on MOOCs? Freemium, where there are free bits and value-added bits. The value-adds can be statements of achievement or tutoring. There are also sponsored MOOCs where someone pays us to make a MOOC (for their purposes) or someone pays us to make a MOOC they want (that we can then use elsewhere.) Of course there’s also just the old “having access to student data” which is a very tasty dish for some providers.
What does this mean to Southampton? Well it’s a kind of branding and advertising for Southampton to extend their reputation. It might also generate new markets, bring them in via Informal Learning, move to Non-Formal Learning, then up to the Modules of Formal Learning and then doing whole programmes under more Formal learning. Hugh thinks this is optimistic, not least because not many people have commodified their product into individual modules for starters. Hugh thinks it’s about 60,000 Pounds to make a MOOC, which is a lot of money, and so you need a good business model to justify dropping this wad of cash. But you can get 60K back from enough people with a small fee. Maybe on-line learning is another way to get students than the traditional UK “boarding school” degrees. But the biggest thing is when people accept on-line certification as this is when the product becomes valuable to the people who want the credentials. Dear to my heart, is of course that this also assists in the democratisation of education – which is a fantastic thing.
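To make the “get 60K back from enough people with a small fee” point concrete, a quick calculation (the £30 fee and 5% conversion rate below are my own illustrative assumptions, not Hugh’s figures):

```python
import math

def breakeven_learners(production_cost, fee, conversion_rate=1.0):
    """Paying sign-ups needed to recoup a fixed production cost.
    conversion_rate: the fraction of registrants who actually pay."""
    return math.ceil(production_cost / (fee * conversion_rate))

# A £60,000 MOOC with a £30 certificate fee:
print(breakeven_learners(60_000, 30))        # paying learners needed
# If only 5% of registrants pay, registrations needed:
print(breakeven_learners(60_000, 30, 0.05))
```

At MOOC scales of 10,000+ registrations per run, even a small fee and a modest conversion rate make the numbers plausible – which is presumably why the certification question matters so much.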
What can we gain from MOOCs? Well, we can have a chunk of a running course for face-to-face students that runs as a MOOC and the paying students have benefited from interacting with the “free attendees” on the MOOC but we have managed to derive value from it. It also allows us to test things quickly and at scale, for rapid assessment of material quality and revision – it’s hard not to see the win-win here. This automatically drives the quality up as it’s for all of your customers, not just the scraps that you can feed to people who can’t afford to pay for it. Again, hooray for democratisation.
Is this the End of the Lecture? Possibly, especially as we can use the MOOC for content and flip to use the face-to-face for much more valuable things.
There are on-line degrees and there is a lot of money floating around looking for brands that are willing to go on-line (and by brand, we mean the University of X.) Venture capitalists, publishers and start-ups are sniffing around on-line so there’s a lot of temptation out there and a good brand will mean a lot to the right market. What about fusing this and articulating the degree programme, combining F2F modules, on-line, MOOC, and other aspects?
Ah, the Georgia Tech On-line Masters in Computer Science has been mentioned. This was going to be a full MOOC with free and paying but it’s not fully open, for reasons that I need to put into another post. So it’s called a MOOC but it’s really an on-line course. You may or may not care about this – I do, but I’m in agreement with Hugh.
The other thing about MOOC is that we are looking at big, big data sets where these massive cohorts can be used to study educational approaches and what happens when we change learning and assessment at the big scale.
So let’s address the criticisms:
- Pedagogically Simplistic! Really, as simple as a lecture? Is it worse – no, not really and we have space to innovate!
- No support and feedback! There could be, we’d just have to pay for it.
- Poor completion rates! Retention is not the aim, satisfaction is. We are not dealing with paying students.
- No accreditation! There could be but, again, you’d have to pay for someone to mark and accredit.
- This is going to kill Universities! Hugh doesn’t think so but we’ll have to get a bit nimble. So only those who are not agile and responsive to new business models may have problems – and we may have to do some unbundling.
Who is actually doing MOOCs? The life-long learner crowd (25-65, 50/50 M/F and nearly always have a degree). People who are after a skill (PD and CPD). Those with poor access to Higher education, unsurprisingly. There’s also a tiny fourth cohort who are those who are dipping a toe in Uni and are so small as to be insignificant. (The statistics source was questioned, somewhat abruptly, in the middle of Hugh’s flow, so you should refer to the Edinburgh report.)
The patterns of engagement were identified as auditing, completing and sampling, from the Coursera “Emerging Student Patterns in Open-Enrollment MOOCs”.
To finish up, MOOCs can give us more choice and more flexibility. Hugh’s happy because people want to do online learning and this helps to develop capacity to develop high quality on-line courses. This does lead to challenges for institutional strategy: changing beliefs, changing curriculum design, working with the right academic staff (and who pays them), growing teams of learning designers and multimedia producers, legal matters, speed and agility, budget and marketing. These are commercial operations so you have a lot of commercial issues to worry about! (For our approach, going Creative Commons was one of the best things we ever did.)
Is it the end of the campus? … No, not really, Hugh thinks that the campus will keep going and there’ll just be more on-line learning. You don’t stop going to see good music because you’ve got a recording, for example.
And now for the conclusions! MOOCs are a great marketing device and have a good reach for people who were out of reach before. But we can take high quality content and re-embed it back into blended learning, use it to drive teaching practice change, get some big data and build capacity for online learning.
This may be the vanguard of on-line disruption but if we’re ready for it, we can live for it!
Well, that was a great talk but goodness, does Hugh speak quickly! Have a look at his slides in the context of this because I think he’s balanced an optimistic view of the benefits with a sufficiently cynical eye on the weasels who would have us do this for their own purposes.
This is an extension of the position paper that was presented this morning. I must be honest and say that I have a knee-jerk reaction when I run across titles like this. There’s always the spectre of Rand or Gene Ray in compact phrases of slightly obscure terminology. (You should probably ignore me, I also twitch every time I run across digital hermeneutics and that’s perfectly legitimate.) The speaker is Larissa Fradkin who is trying to improve the quality of mathematics teaching and overall interest in mathematics – which is a good thing and so I should probably be far more generous about “syncretic”. Let’s review the definition of syncretic:
Again, from Wikipedia, Syncretism /ˈsɪŋkrətɪzəm/ is the combining of different, often seemingly contradictory beliefs, while melding practices of various schools of thought. (The speaker specified this to religious and philosophical schools of thought.)
There’s a reference in the talk to gnosticism, which combined oriental mysticism, Judaism and Christianity. Apparently, in this talk we are going to have myths from the Math Wars debunked, including traditionalist myths and constructivist myths, and then discuss the realities in the classroom.
Two fundamental theories of learning were introduced: traditionalist and constructivist. Apparently, these are drummed into poor schoolteachers and yet we academics are sadly ignorant of these. Urm. You have to be pretty confident to have a go at Piaget: “Piaget studied urchins and then tried to apply it to kids.” I’m really not sure what is being said here but the speaker has tried to tell two jokes which have fallen very flat and, regrettably, is making me think that she doesn’t quite grasp what discovery learning is. Now we are into Guided Teaching and scaffolding with Vygotsky, who apparently, as a language teacher, was slightly better than a teacher of urchins.
The first traditionalist myth is that intelligence = implicit memory (no conscious awareness) + basic pattern recognition. Oh, how nice, the speaker did a lot of IQ tests and went from 70 to 150 in 5 tests. I don’t think many people in the serious educational community place much weight on the assessment of intelligence through these sorts of tests – and the objection to standardised testing is coming from the educational research community for exactly these reasons. I commented on this speaker earlier and noted that I felt that she was having an argument that was no longer contemporary. Sadly, my opinion is being reinforced. The next traditionalist myth is that mathematics should be taught using poetry, other mnemonics and coercion.
What? If the speaker is referring to the memorisation of the multiplication tables, we are talking about a definitional basis for further development that occupies a very short time in the learning phase. We are discussing a type of education that is already widely identified as negative, given the realisation that mindless repetition and extrinsic motivational factors are counter-productive. Yes, coercion is an old method but let’s get to what you’re proposing as an alternative.
Now we move on to the constructivist myths. I’m on the edge of my seat. We have a couple of cartoons which don’t do anything except recycle some old stereotypes. So, the first myth is “Only what students discover for themselves is truly learned.” The problem here is based on a Rebar, 2007, meta-study. Revelation: child-centred, cognitively focused and open classroom approaches tend to perform poorly.
Hmm, not our experience.
The second myth is both advanced and debunked by a single paper, that there are only two separate and distinct ways to teach mathematics: conceptual understanding and drills. Revelation: conceptual advances are invariably built on the bedrock of technique.
Myth 3: Math concepts are best understood and mastered when presented in context, in that way the underlying math concept will follow automatically. The speaker used to teach with engineering examples but abandoned them because of the problem of having to explain engineering problems, engineering language and then the problem. Ah, another paper from Hung-Hsi Wu, UCB, “The Mathematician and Mathematics Education Reform.” No, I really can’t agree with this as a myth. Situated learning is valid and it works, providing that the context used is authentic and selected carefully.
Ok, I must confess that I have some red flags going up now – while I don’t know the work of Hung-Hsi Wu, depending on a single author, especially one whose revelatory heresy is close to 20 years old, is not the best basis for a complicated argument such as this. Any readers with knowledge in this should jump on to the comments and get us informed!
Looking at all of these myths, I don’t see myths, I see straw men. (A straw man is a deliberately weak argument chosen because it is easy to attack and based on a simplified or weaker version of the problem.)
I’m in agreement with many of the outcomes that Professor Fradkin is advocating. I want teachers to guide but believe that they can do it in the construction of learning environments that support constructivist approaches. Yes, we should limit jargon. Yes, we should move away from death-by-test. Yes, Socratic dialogue is a great way to go.
However, as always, if someone says “Socratic dialogue is the way to go but I am not doing it now” then I have to ask “Why not?” Anyone who has been to one of my sessions knows that when I talk about collaboration methods and student value generation, you will be collaborating before your seat has had a chance to warm up. It’s the cornerstone of authentic teaching that we use the methods that we advocate or explain why they are not suitable – cognitive apprenticeship requires us to expose ourselves as we go through the process we’re trying to teach!
Regrettably, I think my initial reaction of cautious mistrust of the title may have been accurate. (Or I am just hopelessly biassed by an initial reaction although I have been trying to be positive.) I am trying very hard to reinterpret what has been said. But there is a lot of anecdote and dependency upon one or two “visionary debunkers” to support a series of strawmen presented as giant barriers to sensible teaching.
Yes, listening to students and adapting is essential but this does not actually require one to abandon constructivist or traditionalist approaches because we are not talking about the pedagogy here, we’re talking about support systems. (Your take on that may be different.)
There is some evidence presented at the end which is, I’m sorry to say, a little confusing, although there has obviously been a great deal of success for an unlisted, uncounted number and unknown level of course – success rates improved from 30% passing to 70% passing and no-one had to be trained for the exam. I would very much like to get some more detail on this, as claiming that the syncretic approach is the only way to reach 70% is a big claim. Also, a 70% pass rate is not all that good – I would get called on to the carpet if I did that for a couple of offerings. (And, no, we don’t dumb down the course to improve pass rate – we try to teach better.)
Now we move into on-line techniques. Is the flipped classroom a viable approach? Can technology “humanise” the classroom? (These two statements are not connected, for me, so I’m hoping that this is not an attempt to entail one by the other.) We then moved on to a discussion of Khan, who Professor Fradkin is not a fan of, and while her criticisms of Khan are semi-valid (he’s not a teacher and it shows), her final statement and dismissal of Khan as a cram-preparer is more than a little unfair and very much in keeping with the sweeping statements that we have been assailed by for the past 45 minutes.
I really feel that Professor Fradkin is conflating other mechanisms with blended and flipped learning – flipped learning is all about “me time” to allow students to learn at their own pace (as she notes) but then she notes a “Con” of the Khan method of an absence of “me time”. What if students don’t understand the recorded lectures at all? Well… how about we improve the material? The in-class activities will immediately expose faulty concept delivery and we adapt and try again (as the speaker has already noted). We most certainly don’t need IT for flipped learning (although it’s both “Con” point 3 AND 4 as to why Khan doesn’t work), we just need to have learning occur before we have the face-to-face sessions where we work through the concepts in a more applied manner.
Now we move onto MOOCs. Yes, we’re all cautious about MOOCs. Yes, there are a lot of issues. MOOCs will get rid of teachers? That particular strawman has been set on fire, pushed out to sea, brought back, set on fire again and then shot into orbit. Where they set it on fire again. Next point? Ok, Sebastian Thrun made an overclaim that the future will have only 10 higher ed institutions in 50 years. Yup. Fire that second strawman into orbit. We’ve addressed Professor Thrun before and, after all, he was trying to excite and engage a community over something new and, to his credit, he’s been stepping back from that ever since.
Ah, a Coursera course that came from a “high-quality” US University. It is full of imprecise language, saying How and not Why, with a Monster generator approach. A quick ad hominem attack on the lecturer in the video (he looked like he had been on drugs for 10 years). Apparently, and with no evidence, Professor Fradkin can guarantee that no student picked up any idea of what a function was from this course.
Apparently some Universities are becoming more cautious about MOOCs. Really.
I’m sorry to have editorialised so badly during this session but this has been a very challenging talk to listen to as so much of the underlying material has been, to my understanding, misrepresented at least. A very disappointing talk over all and one that could have been so much better - I agree with a lot of the outcomes but I don’t really think that this is the way to lead towards them.
Sadly, already someone has asked to translate the speaker’s slides into German so that they can send it to the government! Yes, text books are often bad and a lack of sequencing is a serious problem. Once again I agree with the conclusion but not the argument… Heresy is an important part of our development of thought, and stagnation is death, but I think that we always need to be cautious that we don’t sensationalise and seek strawmen in our desire to find new truths that we have to reach through heresy.