
August 26 2010

Tracking the signal of emerging technologies

Last week the words of science fiction writer William Gibson ran rampant over the Twitter back channel at the inaugural NASA IT Summit when a speaker quoted his observation that "The future is here. It's just not evenly distributed yet." That's a familiar idea to readers of the O'Reilly Radar, given its focus on picking up the weak signals that provide insight into what's coming next. So what does the future of technology hold for humanity and space flight? I've been reading the fiction of Jules Verne, Isaac Asimov, David Brin, Neal Stephenson, Bruce Sterling and many other great authors since I was a boy, and thinking and dreaming of what's to come. I'm not alone in that; Tim O'Reilly is also dreaming of augmented reality fiction these days.

Last week I interviewed NASA's CIO and CTO at the NASA IT Summit about some of that fiction made real. We discussed open source, cloud computing, virtualization, and Climate@Home, a distributed supercomputer for climate modeling. Those all represent substantive, current implementations of enterprise IT that enable the agency to support mission-critical systems. (If you haven't read about the state of space IT, it's worth circling back.)

Three speakers at the Summit offered perspectives on emerging technologies that were compelling enough to report on:

  • Former DISA CTO Lewis Shepherd
  • Gartner VP David Cearley
  • Father of the Internet Vint Cerf

You can watch Cerf speak in the embedded video below. (As a bonus, Jack Blitch's presentation on Disney's "Imagineers" follows.) For more on the technologies they discuss, and Shepherd's insight into a "revolution in scientific computing," read on.


Building an Internet in space

Even a cursory look at the NASA IT Summit Agenda reveals the breadth of topics discussed. You could find workshops on everything from infrastructure to interactivity, security in the cloud to open government, space medicine to ITIL, as well as social media and virtual worlds. The moment that was clearly a highlight for many attendees, however, came when Vint Cerf talked about the evolution of the Internet. His perspective on building resilient IT systems that last clearly resonated with this crowd, especially his description of the mission as "a term of art." Cerf said that "designing communications and architectures must be from a multi-mission point of view." This has particular relevance for an agency that builds IT systems for space, where maintenance isn't a matter of a stroll to the server room.

Cerf's talk was similar to the one he delivered at "Palantir Night Live" earlier this summer, which you can watch on YouTube or read about from Rob Pegoraro at the Washington Post.

Cerf highlighted the more than 1.8 billion people on the IP network worldwide at the end of 2009, as well as the 4.5 billion mobile devices that are increasingly stressing it. "The growth in the global Internet has almost exhausted IPv4 address space," he said. "And that's my fault." Time for everyone to learn IPv6.
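
To make the scale gap concrete, here is a minimal sketch using Python's standard ipaddress module (my illustration, not something from Cerf's talk), comparing the two address spaces and showing that parsing is uniform across address families:

    import ipaddress

    # Compare the size of the two address spaces.
    v4 = ipaddress.ip_network("0.0.0.0/0")   # all of IPv4
    v6 = ipaddress.ip_network("::/0")        # all of IPv6
    print(f"IPv4 addresses: {v4.num_addresses:,}")    # about 4.3 billion
    print(f"IPv6 addresses: {v6.num_addresses:.2e}")  # about 3.4e38

    # Parsing works the same way for either family, which is most of what
    # dual-stack application code needs to get right.
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.version, addr.compressed)      # 6 2001:db8::1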

Looking ahead to the future growth of the Internet, Cerf noted both the coming influx of Asian users and the addition of non-Latin characters, including Cyrillic, Chinese, and Arabic. "If your systems are unprepared to deal with non-Latin character sets, you need to correct that deficiency," he said.
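
One common deficiency is handling internationalized domain names. The short Python sketch below is my own illustration (not from the talk) of converting a Cyrillic domain to the ASCII-compatible form that legacy resolvers expect:

    # Cyrillic domain labels must be converted to an ASCII-compatible
    # ("punycode") form before older systems can resolve them.
    domain = "пример.испытание"               # Cyrillic for "example.test"
    ascii_form = domain.encode("idna")         # built-in IDNA (2003) codec
    print(ascii_form)                          # b'xn--...' punycode labels
    print(ascii_form.decode("idna"))           # round-trips back to Cyrillic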

Cerf also considered the growth of the "Real World Web" as computers are increasingly embedded in "human space." In the past, humans have adapted to computer interfaces, he said, but computers are increasingly adapting to human interfaces, operating by speech, vision, touch and gestures.

Cerf pointed to the continued development of Google Goggles, an app that allows Android users to take a picture of an object and send it to Google to find out what it is. As CNET reported yesterday, Goggles is headed to iPhones this year. Cerf elicited chuckles from the audience when describing the potential for his wife's cochlear implant to be reprogrammed with TCP/IP, thereby allowing her to ask questions over a VoIP network, essentially putting his wife on the Internet. To date, as far as we know, she is not online.

Cerf also described the growing "Internet of Things." That network will include an InterPlaNetary Internet, said Cerf, or IPN. Work has been going forward on the IPN since 1998, including the development of more fault-tolerant networking that stores and forwards packets as connections become available in a "variably delayed and disrupted environment."

"TCP/IP is not going to work," he said, "as the distance between planets is literally astronomical. TCP doesn't do well with that. The other problem is celestial motion, with planets rotating. We haven't figured out how to stop that."

The "Bundle Protocol" is the key to an interplanetary Internet, said Cerf. The open source, publicly available Bundle protocol was first tested in space on the UK-DMC satellite in 2008. This method allows three to five times more data throughput than standard TCP/IP, addressing the challenge of packetized communications by hopping and storing the data. Cerf said we'll need more sensors in space, including self-documenting instruments for meta-data and calibration, in order to improve remote networking capabilities. "I'm deeply concerned that we don't know how to do many of these things," he observed.

Another issue Cerf raised is the lack of standards for cloud interoperability: "We need a virtual cloud to allow more interoperability."

Government 2.0 and the Revolution in Scientific Computing

Lewis Shepherd, former CTO at the Defense Information Systems Agency and current Director of Microsoft’s Institute for Advanced Technology in Governments, focused his talk on whether humanity is on the cusp of a fourth research paradigm as the "scale and expansion of storage and computational power continues unabated."

Shepherd put that prediction in the context of the evolution of science from experimental to theoretical to computational. Over time, scientists have moved beyond describing natural phenomena and formulating laws like Newton's to simulating complex phenomena, a shift symbolized by the move from lens-based microscopes to electron microscopes. That computational capability has allowed scientists, for example, to run nuclear simulations.

Shepherd now sees the emergence of a fourth paradigm, or "eScience," where a set of tools and technologies support data federation and collaboration to address the explosion of exabytes of data. As an example he referenced imagery of the Pleiades star cluster from the Digitized Sky Survey synthesized within the WorldWide Telescope.

"When data becomes ubiquitous, when we become immersed in a sea of data, what are the implications?" asked Shepherd. "We need to be able to derive meaning and information that wasn't predicted when the data sets were constructed. No longer will we have to be constrained by databases that are purpose-built for a system that we design with a certain set of requirements. We can do free-form science against unconstrained sets of data, or modeling on the fly because of the power of the cloud."

His presentation from the event is embedded below.

In particular, Shepherd looked at the growth of cloud computing and data ubiquity as an enabler for collaboration and distributed research worldwide. In the past, the difficulty of replicating scientific experiments was a hindrance. He doesn't see that as a fundamental truth anymore. Another liberating factor, in his view, is the evolution of programming into modeling.

"Many of the new programming tools are not just visual but hyper-visual, with drag and drop modeling. Consider that in the context of continuous networking," he said. "Always-on systems offer you the ability to program against data sets in the cloud, where you can see the emergence of real-time interactive simulations."

What could this allow? "NASA can design systems that appear to be far simpler than the computation going on behind the scenes," he suggested. "This could enable pervasive, accurate, and timely modeling of reality."

Much of this revolution is enabled by open data protocols and open data sets, posited Shepherd, including a growing set of interactions -- government-to-government, government-to-citizen, citizen-to-citizen -- that are leading to the evolution of so-called "citizen science." Shepherd referenced the Be A Martian Project, where the NASA Jet Propulsion Laboratory crowdsourced images from Mars.

He was less optimistic about the position of the United States in research and development, including basic science. Even with President Obama's promise to put science back in its rightful place during his inaugural address, and some $24 billion in new spending in the Recovery Act, Shepherd placed total research and development as a percentage of GDP at only 0.8%.

"If we don't perform fundamental research and development here, it can be performed elsewhere," said Shepherd. "If we don't productize here, technology will be productized elsewhere. Some areas are more important than others; there are some areas we would not like to see an overseas label on. The creation of was NASA based on that. Remember Sputnik?" His observations were in parallel with those made by Intel CEO Paul Otelinni at the Aspen Forum this Monday, who sees the U.S. facing a looming tech decline.

"Government has the ability to recognize long time lines," said Shepherd, "and then make long term investment decisions on funding of basic science." The inclusion of Web 2.0 into government, a trend evidenced in the upcoming Gov 2.0 Summit, is crucial for revealing that potential. "We should be thinking of tech tools that would underlay Gov 3.0 or Gov 4.0," he said, "like the simulation of data science and investment in STEM education."

Gartner's Top Strategic Technologies

Every year, Gartner releases its list of the top 10 strategic technologies and trends. Their picks for 2010 included cloud computing, mobile applications (Cearley used the term "apptrepreneurship" to describe the mobile application economy powered by the iTunes and Android marketplaces, a useful coinage I wanted to pass along), flash memory, activity monitoring for security, social computing, pod-based data centers, green IT, client computing, advanced analytics, and virtualization for availability. Important trends all, and choices that have been borne out since the analysis was issued last October.

What caught my eye at the NASA IT Summit were other emerging technologies, several of which showed up on Gartner's list of emerging technologies in 2008. Several of these are likely more familiar to fans of science fiction than to data center operators, though to be fair I've found there tends to be considerable crossover between the two.

Context-aware Computing
There's been a lot of hype around the "real-time Web" over the past two years. What's coming next is the so-called "right-time Web," where users can find information or access services when and where they need them. This trend is enabled by the emergence of pervasive connectivity, smartphones, and the cloud.

"It will be collaborative, predictive, real-time, and embedded," said Clearey," adding to everyday human beings' daily processes." He also pointed to projects using Hadoop, the open source implementation of MapReduce that Mike Loukides wrote about in What is Data Science? Context-aware computing that features a thin client, perhaps a tablet, powered by massive stores of data and predictive analytics could change the way we work, live, and play. By 2015-2020 there will be a "much more robust context-delivery architecture," Cearley said. "We'll need a structured way of bring together information, including APIs."

Real World Web
Our experiences in the physical world are increasingly integrated with virtual layers and glyphs, a phenomenon that blogger Chris Brogan described in 2008 in his Secrets of the Annotated World. Cyberspace is disappearing into everyday experience. That unification is enabled by geotagging, QR codes, RFID chips, and sensor networks. There's a good chance many more of us will be shopping with QR codes or making our own maps in real-time soon.

Augmented Reality
Context-aware computing and the Real World Web both relate to the emergence of augmented reality, which has the potential to put names to faces and much more. Augmented reality can "put information in context at the point of interaction," said Cearley, "including emerging wearable and 'glanceable' interfaces. There's a large, long term opportunity. In the long term, there's a 'human augmentation' trend."

Features currently available in most mobile devices, such as GPS, cellphone cameras, and accelerometers, have started to make augmented reality available to cutting-edge users. For instance, the ARMAR project shows the potential of augmented reality for learning, and augmented reality without the phone is on its way. For a practical guide to augmented reality, look back to 2008 on Radar. Nokia served up a video last year that shows what AR glasses might offer:

Future User Interfaces
While the success of the iPad has many people thinking about touchscreens, Cearley went far beyond touch, pointing to emerging gestural interfaces like the SixthSense wearable computer at MIT. "Consider the Z-factor," he suggested, "or computing in three dimensions." Cearley pointed out that there's also a lot happening in the development of 3D design tools, and he wouldn't count virtual worlds out, though they're mired "deep in the trough of disillusionment." According to Cearley, the problem with current virtual worlds is that they're "mired in a proprietary model, versus an open, standards-driven approach." For a vision of a "spatial operating system" that's familiar to people who have seen "Minority Report," watch the video of g-speak from oblong below:

Fluid User Interface
This idea focuses on taking the user beyond interacting with information through a touchscreen or gesture-based system and into contextual user interfaces, where an ensemble of technologies allows a human to experience emotionally aware interactions. "Some are implemented in toys and games now," said Cearley, "with sensors and controls." The model would include interactions across multiple devices, including building out a mind-computer interface. "The environment is the computer." For a glimpse into that future, consider the following video from the H+ Summit at Harvard's Science Center with Heather Knight, social roboticist and founder of marilynmonrobot.com:


User Experience Platforms
Cearley contended that user experience design is more important than a user experience platform. While a UXP isn't a market yet, Cearley said that he anticipated news of its emergence later in 2010. For more on the importance and context of user experience, check out UX Week, which is happening as I write this in San Francisco. A conceptual video of "Mag+" is embedded below:

Mag+ from Bonnier on Vimeo.

3D Printing
If you're not following the path of make-offs and DIY indie innovations, 3D printing may be novel. In 2010, the 3D printing revolution is well underway at places like MakerBot Industries. In the future, DARPA's programmable matter program could go even further, said Cearley, though there will need to be breakthroughs in materials science. You can watch a MakerBot in action below:

Mobile robotics driving mobile infrastructure
I experienced a vision of this future myself at the NASA IT Summit when I interviewed NASA's CTO using a telerobot. Cearley sees many applications coming for this technology, from mobile video conferencing to healthcare and telemedicine. A video from the University of Louisville shows how that future is developing:

Fabric Computing
Cearley's final emerging technology, fabric computing, is truly straight out of science fiction. Storage and networking could be distributed through a garment or shelter, along with displays or interfaces. A Stanford lecture on "computational textiles" is embedded below:

August 05 2010

Teachers become senseis while tech handles drills

Can kids really learn from computers and mobile devices? And if so, should they? When we talk about children learning from software instead of teachers it conjures up a sterile picture of kids staring at computer screens with no human contact. It triggers an automatic aversion to losing the human touch and warm insight we associate with great teaching. We suspect the only thing a computer has to offer is rote learning at the lowest possible common denominator. So when representatives from High Tech High, a San Diego school where teaching is centered around collaborative projects and ensuring every student is known, told me about their proposals to use intelligent tutoring systems, I was more than intrigued.

Last year, High Tech High performed an extensive search for a computer-based system for learning math, in particular to drill students in areas where they needed more practice. They found that the ALEKS intelligent assessment and tutoring system was the best fit for their particular needs, but budget cuts kept them from obtaining the software for more than three or four classrooms. Ben Daley, COO and chief academic officer of High Tech High, explained how ALEKS captivates students by giving them simple feedback in the form of pie charts that represent how thoroughly they have mastered a given topic. It's just a report, but there is a serendipitous magic in smart experimentation -- in this case, presenting information in a certain way changed students and inspired teachers.

It turns out that, simple as it is, students get pretty serious about getting their pie charts filled out. The smart design of the ALEKS math programs also does a good job of giving students math drills, feedback, and help at the right level for what they know, helping them make progress largely on their own. Although they are still in the preliminary stages of analyzing the data, the teachers at High Tech High were surprised by the increase in student achievement when kids were turned loose on the ALEKS system. The administrators were surprised to learn that although only a handful of licenses had been purchased, ALEKS had spread through the school like a virus as teachers talked to each other about what they were seeing in the classroom and found creative ways to finance additional licenses.
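
A small sketch makes the "right level" idea concrete. This is my own illustration of an adaptive drill selector, not how ALEKS actually works: track per-topic mastery and always drill the weakest topic whose prerequisites are already mastered.

    # Hypothetical data: mastery fraction per topic and prerequisite links.
    mastery = {"fractions": 0.9, "ratios": 0.4, "linear equations": 0.2}
    prerequisites = {"fractions": [], "ratios": ["fractions"],
                     "linear equations": ["ratios"]}
    MASTERED = 0.8

    def next_topic():
        """Pick the weakest unmastered topic whose prerequisites are all mastered."""
        ready = [t for t in mastery
                 if mastery[t] < MASTERED
                 and all(mastery[p] >= MASTERED for p in prerequisites[t])]
        return min(ready, key=mastery.get) if ready else None

    print(next_topic())   # -> 'ratios' (linear equations must wait for ratios)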

This story presents an interesting counterpoint to the recent emphasis on using technology primarily in developing 21st century skills such as collaboration, creativity, critical thinking, and communication. These "higher order" skills are associated in the education dialogue both with students taking more ownership of their learning through access to rich original source material and collaboration via the Internet, and with students learning through solving authentic problems or working on long-term cross-disciplinary projects that more resemble the work of professionals than traditional lecture/worksheet/multiple-choice-test schooling. The notion of "computer tutors" echoes back to the unrealized ideals of Artificial Intelligence from the 1980s, and the success of intelligent tutoring systems suggests that there is something important in traditional, time-intensive individual practice. Yet, it also evokes a distasteful undertone that the complex role of a teacher can be reduced to a set of algorithms impersonally enacted by a machine -- the absolute antithesis of the teaching environment at High Tech High.

Drills, chunking, and attention to spare

Perhaps there is some reconciliation of these ideas in the nature of expertise-building and the quirks of the human brain. When it comes to logical problem solving, the human brain is brutally slow, linear, and limited. Our minds can only reason with the building blocks we are able to hold in working memory, which for most people is about seven items (not coincidentally, the number of digits in a phone number). When we are novices in an area, such as when learning to drive a car, it requires all our attention to take our foot off the gas, push in the clutch, push on the brake, put one hand on the gear shift, move the gear shift up and to the right, and turn the wheel with the other hand. By the time we add in glancing in the rearview mirror and watching out for pedestrians, our working memory may well overflow, causing us to stall the car or get into an accident. With practice, through repetition, all of these separate actions get "chunked" together in long-term memory, and making a right turn gets simplified to a single integrated action. Eventually, the complex actions of driving become so automatic that we sometimes bypass working memory altogether and find ourselves waking up at our destination with no real memory of having driven there.

This chunking and automatizing frees up our cognitive resources when performing mundane tasks. We now have attention to spare for other things. An experienced driver might choose to focus this attention on listening to the radio, talking with passengers, or thinking about work. An expert driver, however, uses those resources to become a better driver -- having mastered the art of the right turn he or she begins to master the art of defensive driving or perhaps race-car driving as a true professional, putting all his or her attention on increasingly sophisticated nuances of expert driving. Similarly, as an expert in any field gains experience, elementary ideas get chunked together into a single concept. As the concepts held in working memory become increasingly complex, the expert can address increasingly complex problems. The more information working memory can hold, the more room there is for multiple constraints and real-world variables and the less a problem has to be simplified to be tractable.

Athletes and musicians drill endlessly on simple tasks that are fundamental to their field. Coaches ensure that the drills are performed with proper form since it takes far longer to unlearn a bad habit than to learn a good one. Martial artists practice katas for years that eventually become the subroutines they automatically execute during competition and sparring. Does drill play a similar role in math and other learning? Is it necessary to free up working memory from the mechanics of addition and multiplication in order to solve problems in algebra? Do the patterns of algebraic manipulation need to be chunked into long-term memory to free up attention for a problem in calculus? What is the role of drill in math, and is it one a computer can provide better than a teacher?

If we view a teacher as a coach or a sensei, then software takes on its proper role as one of many tools available to teachers and students. With human guidance to ensure students are gaining understanding and with software tools to drill that understanding into automaticity, it is possible to structure learning so that every student can advance at his or her own pace. Individualized learning is far more efficient than when students are required to learn in lockstep, listening to the same lectures or completing the same assignments regardless of whether they have already mastered the material or are hopelessly behind. Self-paced intelligent tutors can help students learn more independently, more quickly, and more deeply.

How will mobile and 24/7 connectivity change learning?

At High Tech High, the reason for turning to intelligent tutoring systems is simple: if the systems help teachers take on a coaching role that supports kids in learning the basics more independently, teachers gain far more flexibility in how they spend precious classroom minutes. High Tech High has applied for grants to provide students and teachers with mobile devices that are connected to the Internet 24/7 via mobile broadband. In large part, High Tech High is exploring how technology can support anytime, anywhere collaboration within communities of learning, but they will also experiment to see how intelligent tutoring systems have an impact in the snippets of time available to mobile device users -- beyond the results they see with students using the software only in the classroom. They suspect that 24/7 connectivity will support and enhance the human connection in learning. If it also lets kids move through curriculum basics more quickly or more independently outside the classroom, it gives back something the High Tech High community never has enough of: more time for cross-disciplinary, collaborative projects that build higher-order skills and ground the basic curriculum.

The High Tech High programs will also give the education community something it doesn't yet have enough of: concrete data. Does anytime, anywhere learning with technology help teachers and students be more efficient in what they already do? Does it enable new ways of learning? How should those two different goals be balanced to leverage great teachers? What approaches increase community versus fostering isolation? There are countless opinions and plausible theories. Leadership like that at High Tech High will provide the data to ground the debate.


May 18 2010

Educational technology needs to grow like a weed

Why do so many well-conceived education reform designs fail in implementation? For the same reason that old-school top-down software development fails in today's rapidly evolving Internet-based marketplaces.

In both cases there is an implicit false assumption that the designers can accurately predict what users will need in perpetuity and develop a static one-size-fits-all product. In response to that fallacy, both software development and education reform have developed agile models of adapting to unpredictable environments. Independently, these have failed to scale to their potential in the real-world trenches of the U.S. educational system. Interdependently, could they achieve the results that have so far eluded each?

Traditional education reform, like traditional engineering development, invests heavily in up-front design. In engineering, this makes sense when dealing with deliverables that are hard to change, like silicon, or when mistakes are not an option, as with space flight or medical technology. However, when the deliverable is malleable, as with consumer software, once the market starts to change the implementer is trapped between the choice of piling modification upon modification until the initial design is completely obscured, or plowing ahead unswervingly only to deliver a product that is obsolete on delivery. The software developer is destined to be outperformed by more nimble developers who can adapt effectively to changing market needs, new information, and an evolving industry.

Similarly, education reform interventions are rigidly constrained. To prove a treatment's effectiveness, research needs to demonstrate that one particular variable in a messy, dynamic human environment is responsible for a change in student outcomes. This means that an educator and his/her students must behave precisely as designed in order for the research to be valid. Tremendous resources are spent in these kinds of trials to ensure "fidelity of implementation." In this situation, the educator is trapped between the choice of corrupting trial data by changing the implementation to meet the changing needs of students and the environment, or plowing ahead only to limit the good he/she can do for students to the lowest common measurable denominator.

In the software world, we address this dilemma through an iterative development model. That is, we assume that when we are thinking about what users might need or how they will use our product, we will get some things wrong. So we code up some simple end-to-end functionality, throw it out for people to use, and then improve it iteratively based on feedback from our users. This feedback may be explicit, in the form of questions and requests, or implicit, based on our observations of how the software is used. It may well be automated, in the way Google instruments the applications we use and modifies them based on how we engage.

In the education world, there is also a shift away from rigid implementations to more scalable adaptive approaches. Alan Bain writes in "The Self-Organizing School" about how the metaphor of emergence mediates the tensions between top-down control and bottom-up chaos. Rather than designing and dictating the everyday workflow of educators and students, the self-organizing school identifies a small set of simple rules. These rules, in combination with multiple feedback loops, drive and iterate the work of teachers, students, administrators and others involved in teaching and learning. As with the emergent behaviors of ant hills and flocks of birds, the simple rules drive elegant, complex system-level behaviors that adapt to changing circumstances.

This model of education reform depends on real-time, effective feedback loops of information at a scale that is possible only with the support of technology. But the technology platforms to support a self-organizing school haven't been developed -- as with most educational use of technology they are likely to be pulled together on an ad-hoc basis with minimal support, making them clunky to use and difficult to modify. As a result, rather than enabling and supporting adaptation, they are just as likely to carve existing processes into digital concrete and become a force resisting change.

How do you get to a technology platform that supports scalable education reform? Perhaps the best option is to grow it. Plant it in the fertile soil of existing open source education software and open education resources. Seed it with some simple elements: digital content creation or assessment distribution or maybe collaboration spaces or online courses. Feed it with a few data flows: perhaps computer-graded quiz results to students, teachers and parents; homework assignments and recorded lectures in one direction, completed projects in the other; automated attendance data to teachers and administrators. Immerse it in an environment built on feedback loops that are nourished by the data that is generated on the platform. Adapt and evolve it in response to decisions and needs that are uncovered by those feedback loops.
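
As an illustration of the simplest of those data flows, here is a hedged Python sketch of a computer-graded quiz whose results are routed to student, teacher, and parent feedback loops. The names and routing targets are invented for the example; this is not a real platform API.

    ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

    def grade(submission):
        """Score a submission as the fraction of questions answered correctly."""
        correct = sum(1 for q, a in submission.items() if ANSWER_KEY.get(q) == a)
        return correct / len(ANSWER_KEY)

    def route_result(student, score, recipients=("student", "teacher", "parent")):
        for who in recipients:
            # In a real platform this would feed a dashboard, email, or report.
            print(f"to {who}: {student} scored {score:.0%}")

    route_result("alex", grade({"q1": "b", "q2": "c", "q3": "a"}))   # 67%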

In symbiosis, the platform and the practices it supports mature and reach a sort of dynamic equilibrium of continual, steady, incremental growth. As it matures iteratively, the technology platform becomes ready for transplantation to other environments.

Traditional education reform fails to scale because top-down designs don't survive the reality of the day-to-day classroom. Emergent designs adapt to real circumstances but depend on extensive data collection driving feedback loops at every level. Not only is this not well supported by existing technology implementations, but the functional requirements of those implementations are not yet well understood. Through a process of co-evolution, those requirements can be surfaced and technology platforms developed that can then enable education reform to scale.

March 04 2010

Cell phones in the classroom

Guest blogger Marie Bjerede is Vice President of Wireless Education Technology at Qualcomm, Inc., where she focuses on addressing the technical, economic, social, and systemic challenges to enabling every student to gain the advantages afforded those who have 24/7 mobile broadband access.

In most schools, cell phones are checked at the door -- or at best powered off during school hours in a tacit "don't ask, don't tell" understanding between students and administrators. This widespread technology ban is a response to real concerns: if kids have unfettered instant access to the Internet at school, how do we keep them safe, how do we keep out inappropriate content, how do we prevent real-time cyberbullying, how do we even keep their attention in class when competing with messaging, gaming, and surfing?

At the same time, though, there is a growing sense among education thought leaders and policy leaders that not only are cell phones here to stay but there seems to be interesting potential to use these small, connected computers that so many students already have. I've been insanely fortunate over the past year to work closely with Wireless Reach (Qualcomm's strategic social initiative) and real innovators in education who are finding that cell phones in classrooms don't have to be a danger or a distraction but, in fact, can help kids learn in some surprising ways.

During the 2007-2008 school year, Wireless Reach began funding Project K-Nect, a pilot project in rural North Carolina where high school students received supplemental algebra problem sets on smartphones (the phones were provided by the project). The outcomes are promising -- classes using the smartphones have consistently achieved significantly higher proficiency rates on their end of course exams.

Now, the population is small (on the order of 150 kids) and the make-up is essentially what researchers call a "convenience sample." It was selected from a population of kids that: largely qualified for free and reduced lunch; didn't have home Internet; and had low math proficiency. It was not balanced with a formally designed control group. There was self-selection on the part of the participating teachers -- they are extremely motivated -- but the results are consistent and startling. Overall, proficiency rates increased by 30 percent. In the best case, one class using the devices had 50 percent more kids finishing the year proficient than a class learning the same material from the same teacher during the same school year, but without the cell phones.

So what's so different about delivering problem sets on a cell phone instead of a textbook? The first obvious answer is that the cell phone version is multi-media. The Project K-Nect problem sets begin with a Flash video visually demonstrating the problem -- you could theorize that this context prepares the student to understand the subsequent text-based problem better. You could also theorize that watching a Flash animation is more engaging (or just plain fun) and so more likely to keep students' attention.

Another difference is that digital content is personalized. In this case, that just means that different students get the same problem (how long will it take a space ship to catch up with a space probe?) but with different numbers plugged in (the velocity might be given as 40,000 mph for one student and 37,500 mph for another). The result is that students can't simply compare answers -- they need to compare solutions. "How did you get that?" replaces "What did you get?"
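
The mechanics of that kind of personalization are easy to sketch. The Python snippet below is illustrative only, not Project K-Nect's actual content system: one problem template rendered with different numbers per student, so answers differ while the solution method stays shared.

    import random

    TEMPLATE = ("A space probe travels at {probe:,} mph. A ship giving chase "
                "travels at {ship:,} mph. How long until the ship closes a "
                "{gap:,}-mile gap?")

    def personalized_problem(student_id):
        rng = random.Random(student_id)            # deterministic per student
        probe = rng.choice([30000, 32500, 35000])
        ship = probe + rng.choice([2500, 5000, 7500])
        gap = rng.choice([100000, 150000])
        answer_hours = gap / (ship - probe)
        return TEMPLATE.format(probe=probe, ship=ship, gap=gap), answer_hours

    for student in ["ana", "ben"]:
        problem, answer = personalized_problem(student)
        print(student, "->", problem, f"(answer: {answer:.1f} hours)")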

A third difference is that, unlike the traditional practice where each student works on textbook problems in isolation, the learning environment in Project K-Nect is participative. Students are asked to record their solutions on a shared blog and are encouraged to both post and comment. Over time, a learning community has emerged that crosses classrooms and schools and adds the kind of human interaction that an isolated, individual drill (be it textbook or digital) lacks and that a single teacher is unlikely to have the bandwidth to provide to each student.

A final observation is that having a digitally mediated component to the learning environment can be surprisingly inclusive. As teachers in Project K-Nect began to experiment with using the blogs and instant messaging for discussing math in the classroom, an unexpected (to us) dynamic emerged. It turns out that many kids who don't like speaking up in class are completely comfortable speaking up online. Students who don't like to raise their hands use the devices to ask questions or participate in collaborative problem solving. There appears to be something democratizing about having a "back channel" as part of the learning environment.

So far all these distinctions are not unique to cell phones but common to any personal computing solution. A WiFi-equipped netbook at every desk could readily provide the same kind of differentiation from a lecture-and-textbook based traditional classroom. But taking the next step from computer labs or laptops at school to a personal, connected device changes the game. Beyond just computing in the classroom, cell phones give the students in Project K-Nect access to the Internet and their learning communities 24 hours a day and 7 days a week, whether they are at school, at home, on the bus, at after-school activities, or in the case of one chronically ill student, at the hospital.

Back when I was in school, I remember math learning went something like this:

  • Sit in a lecture and take notes furiously -- verbatim, if possible
  • The night before homework is due, try to reverse engineer how to solve problems from the now cryptic notes
  • Find examples that look like the problem at hand
  • Plug in numbers from the given problem
  • Hope

Because the students in Project K-Nect have 24/7 mobile broadband, that dynamic has changed for them. When a student sits down to work on problems and gets stuck, she can post a question or just a general plea for help to the shared blog. Soon, several classmates will reply with help and encouragement. Students who might otherwise give up can get just-in-time support to help them be successful while the students who are providing the help get the reinforcement and deeper understanding that comes from teaching.

Teachers from the pilot also tell me that their instruction has changed since they started using cell phones in class. I had a chance to see one teacher give her students a simple bingo game to play on the phone that involved solving a number of algebra problems. She told me that her kids had far more patience for, and interest in, working problems as quickly and accurately as possible when it was part of a digital game rather than performing the same drill using worksheets.

I've seen another teacher use Poll Everywhere software with the students to check on their understanding during a lecture. The teacher posed a math problem, the students texted their replies to the Poll Everywhere site, and a pie chart showing the distribution of answers was instantly projected at the front of the class, giving the teacher a chance to clear up any misconceptions before moving on.
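
That back-channel pattern is simple at its core. Here is a small Python illustration of the idea (not the Poll Everywhere service itself): tally the texted answers and show the class-wide distribution so misconceptions stand out at a glance.

    from collections import Counter

    responses = ["a", "c", "a", "b", "a", "c", "a"]   # one texted answer per student
    tally = Counter(responses)
    total = len(responses)

    for choice in sorted(tally):
        count = tally[choice]
        print(f"{choice}: {'#' * count:10s} {count / total:.0%}")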

Much of the teaching has also shifted to problem-based learning. I was fascinated to see an example of this on one visit. The students worked in groups to develop a public service announcement describing the dangers of compound interest and credit card debt. They then made a video of their commercial using their cell phones and posted it to the shared blog. Not only did they learn by discussing and debating as a team how best to communicate compound interest, but they then had the resulting video to refer to when it came time to review for the test. In fact, they had everyone's videos at their fingertips via their cell phone browsers. If one team's explanation didn't kindle the "aha" moment, another one just might. Once again, the connected learning community had a significant and unanticipated impact on these students.

As for the issues of safety and appropriate use of the Internet, each student in the pilot has signed an acceptable use policy outlining their responsibilities as cell phone users at school. SOTI's MobiControl software, which allows the teachers to interact with each student's cell phone, also allows them to monitor use and apply standard classroom discipline techniques for inappropriate behavior in the virtual world -- just as they manage behavior in physical hallways and on campus grounds. Not surprisingly, after some initial testing of the boundaries, a culture of responsible use quickly evolved among the students.

Finally, what about messaging, gaming, and surfing in class? In the Project K-Nect classrooms, students don't use these to play virtual hooky, but they do use them regularly for learning. In the classrooms I've had a chance to see, the students are far too busy participating to tune out. Of all the expected and unexpected outcomes of this project, I find the way that cell phones have facilitated the social aspects of learning to be one of the most intriguing.
