Friday 27 July 2012

The Horseless Carriage Becomes the Driverless Car


Toronto, Ontario. The annual Association for the Advancement of Artificial Intelligence conference ended yesterday. For me, Thursday’s talk by Sebastian Thrun (Stanford University and Google) was a highlight: Google’s self-driving car project. The bottom line: this technology isn’t science fiction any more. More than a century ago, the introduction of the horseless carriage dramatically changed the world. The next step in this evolution, the driverless car, promises to be no less impactful.

Sebastian is passionate about building robotic systems for everyday use. For a decade now, he has been concentrating on self-driving cars. Seeing a car pull up beside you with no one in the driver’s seat would unnerve most people, but for Sebastian it’s a daily occurrence. In 2005, his team won the U.S. Department of Defense (DARPA) Grand Challenge, with a computer-controlled car successfully traveling 212 kilometers on Mojave Desert roads. His team came second in the 2007 Urban Challenge, where the car had to navigate through a mock-up downtown area. Since 2010 he has been working with Google on realizing his dream of turning this technology into something that will change the world.

In 2006 I saw him give a presentation on his work. It was interesting, but the road (so to speak) from where he was to where he wanted to be was long, and the research problems to be solved were hard. Six years later, the only word I can use to describe where he is now is “stunning.” The advances that have been made are truly impressive, suggesting that the technology is almost ready for prime time. Sebastian says it’s at least a decade away from being widely deployed. More on this later.

The Google car is being extensively used in the San Francisco and Silicon Valley area. To date it has 320,000 kilometers of accident-free driving. Can you make the same claim? Sebastian showed numerous impressive videos of the car doing its thing, such as driving down San Francisco’s (in)famous Lombard Street, negotiating downtown traffic, and easily traversing highways. What made this even more impressive was that demos showed the car performing well in a variety of difficult situations, including day and night (day turns out to be harder because of the sun), in the presence of pedestrians, and even through a construction zone (lanes shifted). In the latter case, although the car uses GPS maps, it has the ability to improvise when it comes across signs that force it to deviate from its planned route. Impressive!

The technology has also been added to a handful of smaller vehicles (golf carts). He showed a video in which a person uses their phone to request a ride. The call is routed to an available vehicle that, upon receiving the request, uses the phone’s GPS coordinates to drive itself to the caller. Imagine how this could change your life. You could be chauffeured anywhere, sans the chauffeur.

Sebastian revealed some interesting details on the car’s performance. It tends to drive slower than other vehicles on the road (not a surprise given safety concerns), prefers an interior lane (the vision system works better if there’s left and right feedback), does less braking and less acceleration than humans, and maintains a safer distance between cars (which helps reduce the chance of an accident).

What remains to be done? Lots, mostly special cases. For example, the research team has not addressed the problem of snow and ice. They admit that their work has been California-centric. Another example he cited was a policeman on the road directing traffic. This situation is challenging since the software needs to distinguish a policeman from a pedestrian, and understand that the hand gestures have meaning. They also have problems with sudden surprises, such as an animal running across the road. He did not mention a variety of other possible situations, such as getting a flat tire, hitting a pothole at high speed, or being crowded by another vehicle (does the car honk its horn?). Every one of them has to be addressed and then thoroughly tested.

Sebastian believes that it will take at least a decade before we will see widespread use of driverless cars on the road. Part of the reason is the many uncommon circumstances that need to be addressed. However, the bigger hurdles have nothing to do with technology: political, legal, and psychological matters all stand in the way. As well, insurance companies will have to weigh in.

The implications of this technology, if/when it becomes commercially viable, are transformative. They include:
  • improved quality of life (the one-hour daily commute becomes usable time);
  • fewer accidents (data strongly supports this case);
  • increased freedom for people with mobility-related disabilities; and
  • better traffic throughput (less need for increased road infrastructure).

A high-reliability self-driving car will dramatically change the world as we know it. I have seen the future: it’s exciting, it’s coming much sooner than I expected, and it will have enormous societal benefits.

It’s not often that I come away truly excited about technology. A single research talk has made this a memorable day for me. I will not soon forget the excitement I felt being in the audience for Sebastian Thrun’s wonderful talk. 

Monday 23 July 2012

“Big Breakthroughs Happen When What Is Suddenly Possible Meets What Is Desperately Necessary” – Thomas L. Friedman


Toronto, Ontario. I am attending the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI). This year roughly 1,000 attendees have converged on Toronto to hear about and see demonstrations of the latest advances in building “intelligent” computing systems.

This morning I attended Andrew Ng’s talk on “The Online Revolution: Education at Scale.” For those of you who may not have been following the shake-up that’s happening in higher education, Andrew is at the heart of the revolution. In the fall of 2011, he taught his Stanford University course on Machine Learning to 400 students in class, simultaneously with 100,000 students online. That course (and the Artificial Intelligence course taught by Sebastian Thrun and Peter Norvig) ignited a firestorm, generating massive international media attention on MOOCs – massive open online courses. The result was that Andrew (with colleague Daphne Koller) founded Coursera and Thrun started Udacity, both companies having the goal of bringing superior educational courses to the world via online technology.

Here is a brief summary of the key points in Andrew’s talk. Warning: this is a much longer post than usual. There is lots to talk about!

Motivation
Andrew argues that world-wide there is a shortage of opportunities for getting access to high-quality higher education. There are many reasons for this, including financial obstacles and limited enrolments. Offering courses online for free removes both barriers. His online Machine Learning course reached an audience 250 times larger than his traditional in-class audience. He did not say how many successfully completed the course; I understand from other sources that it was around 5,000. Still, getting that many people to pass an advanced, highly technical course is stunning.

Of course, there is a difference: a certificate for passing an online course is not the same as academic credit towards a Stanford degree. However, in terms of learning outcomes, the point is moot.

Of interest is that he received appreciative feedback from people around the world, such as from a 39-year-old single mother in India who had never dared to dream of taking a Stanford University course. When I went to talk with Andrew after his lecture today, I had to wait in line. Most of the 20 people ahead of me were international students who had enrolled in his online course. They wanted to thank him in person for their excellent learning experience.

In less than a year, Coursera has had 800,000 registrants from 190 countries for a total of 2 million course enrollments in the 111 online courses offered (science, humanities, business, etc.). No word on how many people passed the courses. Regardless, by any standard these are impressive numbers.

Andrew noted that many people did not finish a course because “their life got busy”; they could not sustain the three-month intensive experience. Coursera is considering offering courses at “half speed,” to spread the workload over a longer period.

Secrets of Success (1): Video-based Instruction
I have looked at courses offered by Udacity (specially filmed) and Coursera (professor lectures), and I have been more impressed with Udacity’s production values. However, Andrew argued that the lecture approach is critical to success. Instructors can create lectures in their home or office; all they need is a quiet space with a computer and webcam. This avoids the expensive production infrastructure that, presumably, Udacity invests in. The argument is that this is the easiest way to quickly scale up the number of courses available since, essentially, it enables anyone to prepare an online course.

Coursera doesn’t offer the traditional hour-long lectures. Instead the material is broken into 10-minute “bite-sized” chunks, allowing students to more easily absorb the material.

Students are presented with optional pre-requisite material (for those who need to refresh their background skills) and optional advanced material (for the keeners). This allows Coursera to say they have moved away from the one-size-fits-all model offered by most online courses.

Secrets of Success (2): Assessment
Andrew argues that the online world allows for novel assessment opportunities. He did not claim this, but I inferred that he believed they were superior to traditional assessment models. He raised several important points:
  • videos can have test questions interspersed, allowing the student to pause (for as long as they want) and test whether they understand the material;
  • Coursera uses extensive software-based tools to validate student answers;
  • students can attempt a problem as many times as needed until they get it right;
  • additional test questions can be automatically generated (a toy sketch follows this list);
  • the online system can be adaptive so that when a student makes a mistake, they are pointed to the relevant instructional material; and
  • a peer grading system is used.
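
To make the automatic generation and validation points concrete, here is a toy sketch. This is my own illustration in Python, not Coursera’s software; the question template, tolerance, and function names are all assumptions.

```python
import random

def generate_question(rng: random.Random):
    """Generate one parameterized arithmetic question (illustrative template only)."""
    w, lr, grad = rng.randint(1, 9), rng.choice([0.1, 0.01]), rng.randint(1, 9)
    prompt = (f"One gradient-descent step: w = {w}, learning rate = {lr}, "
              f"gradient = {grad}. What is the updated w?")
    answer = w - lr * grad
    return prompt, answer

def validate(submitted: float, expected: float, tol: float = 1e-3) -> bool:
    """Software-based validation: accept any answer within a small tolerance."""
    return abs(submitted - expected) <= tol

if __name__ == "__main__":
    rng = random.Random(42)                 # each student could get a different seed
    prompt, expected = generate_question(rng)
    print(prompt)
    print(validate(0.0, expected))          # a wrong attempt; the student simply tries again
    print(validate(expected, expected))     # True
```

Because the parameters change on every call, each student can be handed a fresh variant of the same question, and a wrong attempt just triggers another try.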

The last point in that list (peer grading) is critical to their model. The education literature, as well as Coursera’s own research, says that peer grading can be highly correlated with grading done by the instructor. By combining peer grading with crowdsourcing, Coursera can scale up to accommodate (hundreds of) thousands of students.

Students must first attend a grading boot camp. They demonstrate their proficiency by assigning marks to assignments that have already been graded by the instructor. If their results closely match those of the instructor, then they are allowed to grade other students’ work. For each course assignment, every student is expected to grade five submissions and, in return, gets feedback from five peers. Coursera has data to say this produces high-quality results.
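
As a rough illustration of that workflow, the calibration check and the aggregation of the five peer marks might look something like this. This is my own sketch in Python; the tolerance threshold and the use of the median to combine marks are assumptions, not details Andrew gave.

```python
from statistics import median

def calibrated(student_marks, instructor_marks, max_gap=1.0):
    """Boot-camp check: a student may peer-grade only if their marks on
    pre-graded assignments stay close to the instructor's marks."""
    return all(abs(s - i) <= max_gap for s, i in zip(student_marks, instructor_marks))

def peer_grade(peer_marks):
    """Combine the five peer marks for one submission; the median resists outliers."""
    return median(peer_marks)

# A grader whose marks closely match the instructor's is admitted ...
print(calibrated([8, 9, 6], instructor_marks=[8, 9, 7]))   # True -> may grade others
# ... and five peer marks for one assignment collapse into a single grade.
print(peer_grade([7, 8, 8, 9, 10]))                        # 8
```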

Secrets of Success (3): Community
The global nature of the course audience means that students have 24x7 access to course assistance. A support community quickly builds, with students helping each other. For the Machine Learning course, the median time for a student question to be answered by someone was 22 minutes – much better than anything I could ever do in any course that I have taught.

Perhaps surprisingly, some of the online study groups translated into face-to-face study groups, including two in China, three in India, and one in London. A study group has recently been set up in Palo Alto and 1,000 people have signed up!

Secrets of Success (4): Statistics and Analytics
I have read several papers in the education literature that involve experiments with human subjects. They usually involve small samples (as few as five subjects, and rarely more than 100). As a scientist who is used to working with large data sets, I find these papers unsatisfying. Contrast that with what Coursera is doing. Because of the large enrolments, they have the opportunity to do some amazing analysis. For example, to test a hypothesis they will use data gathered from 20,000 students, with a further 20,000 as a control group. They collect anonymized data on every student mouse click, key stroke, time spent reading a web page, number of times questions were answered incorrectly, how often a video is watched, and so on. They mine this data to better understand the pedagogy of their courses – what works and what doesn’t work. Andrew claimed that what Amazon did for e-commerce, Coursera will do for e-education.

He gave one interesting example of how data mining can work. On an assignment, the system identified that 2,000 students had submitted the same wrong answer. When someone manually examined the incorrect solutions, it became clear that they stemmed from a trivial misunderstanding. The online system was then modified to detect when the mistake is made and point the student to a web page that hints at what they are doing wrong. Coursera is working on automating this process.
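
Here is a minimal sketch of that idea: group identical wrong answers to surface a shared misunderstanding, then map the known mistake to a targeted hint. This is my own Python illustration; the data, hint text, and URL are hypothetical.

```python
from collections import Counter

def common_wrong_answers(submissions, correct, top_n=3):
    """Group identical wrong answers so a shared misunderstanding stands out."""
    wrong = Counter(ans for ans in submissions if ans != correct)
    return wrong.most_common(top_n)

def feedback(answer, correct, hints):
    """Point a student who makes a known mistake to the relevant hint page."""
    if answer == correct:
        return "Correct!"
    return hints.get(answer, "Incorrect - please review the lecture and try again.")

# Hypothetical data: 2,000 students giving the same wrong answer would surface here.
submissions = ["0.5", "0.05", "0.5", "0.5", "5.0"]
print(common_wrong_answers(submissions, correct="0.05"))   # [('0.5', 3), ('5.0', 1)]
hints = {"0.5": "Hint: did you forget to multiply by the learning rate? See /hints/gradient-step"}
print(feedback("0.5", "0.05", hints))
```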

Conclusions
Andrew raised an interesting characterization of how the online world differs from the in-class experience. In a traditional course, time is held constant (you need to complete something by a specific date); the amount you learn is variable. In a Coursera course, the amount you learn is a constant (you can try as many times as you like until you get it right); the time you spend on the course is variable. He believes (and may have data to support his claim) that the amount learned per student in his online course was higher than for his in-class course.

Whether you agree or disagree with the move towards online education is irrelevant; it’s here and it’s not going away. How often does university teaching attract extensive international media attention? These massive open online courses may be the disruptive technology that will shake the traditional educational system to its very foundation. It is too early to know the full implications of what is happening, but it is obvious that we are only feeling the tremors of change right now; the full seismic impact is yet to come.

Let me conclude with a quote from the New York Times: “This is the tsunami,” said Richard A. DeMillo, the director of the Center for 21st Century Universities at Georgia Tech. “It’s all so new that everyone’s feeling their way around, but the potential upside for this experiment is so big that it’s hard for me to imagine any large research university that wouldn’t want to be involved.”

Thursday 19 July 2012

We Have The World’s Fastest Computer Program For Solving Rubik’s Cube. Who Cares?


Niagara Falls, Ontario.  The fourth annual Symposium on Combinatorial Search (SoCS) started this evening. Fifty-five of the top people in this research area will spend two days intensely discussing the latest results. I hope to come away with at least one good idea from this event.

Let me tell you a bit about my research and then use that to motivate the point I want to make in this blog posting. I am best known for my work in applying artificial intelligence technology to computer games and puzzles (one-player games). Consider the well-known Rubik’s Cube. This puzzle is daunting to solve for a human, but what about a computer? Combinatorial search, the subject of the SoCS conference, includes the search techniques used to sift through the myriad of move sequences that one can try to solve the puzzle. Humans solve Rubik’s Cube non-optimally, using many unnecessary moves to orient the Cube into a familiar position. As computer scientists, we do not want just any answer; we want the optimal answer – solve it in the minimum number of moves.

Technology that Joe Culberson and I developed in the mid-1990s (pattern databases) is used by many puzzle-solving programs. For Rubik’s Cube, Rich Korf (UCLA) used our work to build the first practical solver. Today, a team of Israeli (Ariel Felner) and University of Alberta (Robert Holte and me) researchers has the fastest program for finding an optimal solution to this challenging puzzle. Fastest. In the world.
From an article on Rubik’s Cube and my work on Checkers: www.sciencenewsforkids.org/2007/09/play-for-science-2

We’re number 1! We’re number 1!
<Wait for applause>
<Deadly silence>
<Sheepish person bravely speaks up>
“Excuse me, sir. Who cares that a computer can solve Rubik’s Cube?”
<Pause while I compose myself>

Solving Rubik’s Cube, in and of itself, isn’t earth-shattering. The world is not a better place because computers have superhuman skills here. It's not the application that's important; it's the techniques used to solve the problem. Where can this technology be used? Real-world problems such as:
  • Path finding: finding the shortest route between two locations (a toy sketch appears after this list). This is used in GPS systems, computer games, and even on Mars (the Mars Rover). It would be pretty cool if I found out I had created “out of this world” technology.
  • Scheduling: producing a schedule that satisfies constraints. The technology can be used for arranging a trip (minimizing travel distance), airline schedules (guaranteeing that the planes will be in the right location at the right time), and classroom bookings (accommodating all the classes at appropriate times without conflicts).
  • DNA sequence alignment: measuring the similarity of two strands of DNA. The measure is important to researchers as they attempt to decipher the human genome.
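
To ground the path-finding bullet, here is a minimal A* shortest-path sketch in Python. It is my own toy example on a hypothetical grid map, not code from any of the systems above; the simple Manhattan-distance heuristic stands in for the far stronger precomputed, pattern-database-style heuristics we use in puzzle solvers.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* search on a 4-connected grid (0 = open, 1 = blocked).
    With an admissible heuristic, the first path returned is optimal."""
    def h(cell):
        # Manhattan distance: a stand-in for stronger precomputed heuristics.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]      # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None                                     # no route exists

# Toy map: find a shortest route from the top-left corner around the obstacle.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

Essentially the same search skeleton, with the move generator and heuristic swapped out (and a memory-frugal variant called IDA*), is what sits underneath the optimal Rubik’s Cube solvers mentioned above.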

Working with puzzles is an example of curiosity-based research. You investigate a challenging problem to devise better ways of solving it – perhaps getting an answer faster, or getting a better answer. If all your new technology can do is solve that one problem, then it’s probably not interesting (unless the problem being solved is important). More often than not, the application domain is a placeholder for a class of problems. In my case, I use games and puzzles as my experimental test-bed because a) they are fun to work with and b) they map to real-world situations.

Over the past decade, research funding in Canada has become increasingly targeted towards projects that have direct industrial applications. The reasoning is obvious: potential economic impact. But this is a short-term view. What about research that produces results for which there is no obvious “usefulness”? Consider the recent discovery of the Higgs boson. There are no commercial products that will likely come out of the discovery itself. However, there can be side effects. For example, building the billion-dollar infrastructure needed to “see” the Higgs may have real impact; surely creating the technology needed to find a particle so small that you can pack 10²⁵ of them into a kilogram will translate into products.

Research does not have to have commercial value to be valuable. One can never predict how ideas will eventually be used. For example, in my area of games, an innocuous Ph.D. thesis of fewer than 30 pages, published in the early 1950s, has enormous commercial impact today. Three decades later, John Nash’s thesis work on equilibria in games (showing that in some games a state can be reached where neither player has an incentive to change their strategy) became essential for diverse applications such as auctions and military strategy. Nash won the 1994 Nobel Prize in Economics and was the subject of the Academy-Award-winning film A Beautiful Mind.

As a society we must continue to fund – even grow – our investments in curiosity-driven research. A faster Rubik’s Cube solver may not mean much today, but who knows about tomorrow?

Sunday 15 July 2012

The Myth of the Four-Month Holiday

Warning! This posting is almost a rant. I won’t call it a rant because I edited out all the juicy stuff (sorry). I find it therapeutic to write what I really want to say (my form of venting), but then reflect on it for a day or so and edit it into something more politically correct. The result is less entertaining to the reader, but it allows me to stay employed.

A few weeks ago I heard someone jokingly remark about the “four month holiday” that academics get. The reasoning goes something like this. Classes are finished by the first of May and don’t start up again until the first of September. Just like K-12 teachers, professors get the summer off -- four months in this case. And they get paid for it. Great job (wink, wink)!

<delete offensive remarks>
<take a deep breath>
<assume a calm demeanour>

Over my 28 years of being a professor, I’ve heard similar comments often enough that I need to say something in a public forum. At the University of Alberta, a typical faculty member spends 40% of their time in support of teaching, 40% doing research, and 20% performing service (e.g., university committee work). September to April is the main teaching period for most professors, intermixed with some research and service. Come May 1, the teaching is done. Now it’s time to concentrate on the research.

Before I entered administration, I would spend my summers as follows:
  • Keeping current: reading the latest scientific papers in my areas of interests;
  • Doing research: thinking about ideas, fleshing them out, writing computer programs to demonstrate that they work, performing experiments to document the impact, and then iterating over the whole process to come up with something better;
  • Writing: documenting the work and submitting it to the critical review of peers, which is the cornerstone of scientific discourse;
  • Supervision: working with graduate students (which often numbered around 10), undergraduates (maybe one or two each summer), and hired staff (in my case, usually two or three research assistants and/or programmers);
  • Attending conferences: in computing science, these meetings are important for getting the hot-off-the-press research results and interacting with colleagues (more on this below);
  • Scientific service: this includes serving on scientific committees and refereeing papers that are under submission for publication;
  • Teaching: preparing for the coming year’s courses; and
  • Other duties. 
The end result was that I was busy all summer long. The difference with administration is that the administrative work never stops, even during the summer, so some of the above tasks have to be curtailed. For example, I am only supervising two students right now -- my lowest level since 1985.

Academics are passionate about what they do. They are highly motivated and, often, obsessive. Being a professor is an open-ended job; you are the boss, you decide what has to be done. You can get by with a minimum amount of work, or you can immerse yourself. The annual reports of some professors are incredible in terms of what they accomplish in 12 months, whether it be extraordinary quality and quantity of research results, supervision of an impressive number of students, major commitments to the scientific community, and so on. To me, it looks like they work 24 hours a day, seven days a week. Holiday? It’s an unknown word in some of their vocabularies.

Many academics do not take all the holidays that they are entitled to, and even if they do, they still work. For example, when on holidays I still try to keep on top of my email. That can be an hour or more each day, something that is an annoyance to my family. In 2007 I took three weeks off for a family holiday in Australia. I promised not to read email during that time, and I was mostly successful. However, upon returning home, I found over 2,000 messages in my inbox. It took almost two weeks to wade through this mess -- all done during my evening and weekend time, of course.

Regarding conferences, I have occasionally heard a derisive remark that these are “holiday junkets”. In many disciplines, conferences are an important part of an academic’s life. They are a chance to hear the latest cutting-edge research (essential if you want to be publishing cutting-edge material yourself), engage in discussions with your research peers (research is becoming increasingly collaborative), and recruit graduate students (every academic wants to work with the very best graduate students). Most conferences are tiring experiences, with typically eight hours of formal meetings per day, and possibly many more informal meetings over breakfast, lunch and dinner. I doubt that many (any?) academics regard a conference as a holiday (although some, like me, often tack on a few extra days of holidays afterwards if the venue is in an interesting location -- at our own expense of course). 

Four month holiday? Yeah, right.

Wednesday 4 July 2012

The Not-So-Standard Model of Physics

The news echoed around the world: “A Higgs Boson Has Been Discovered.” It was nice to see science on the front page of most media outlets. (How often does this happen? Not enough!) Never mind that most people haven’t the foggiest idea what a boson is, let alone the elusive Higgs boson. For this blog posting, all you need to know is that it is an elementary particle – one of the building blocks of matter.


CERN, the European Organization for Nuclear Research based in Geneva, is the focal point for the Large Hadron Collider (LHC). The LHC is essentially a 27-kilometer circular tunnel running under Switzerland and mostly France (look at Swiss real estate prices and you’ll know the reason why). Particles in the tunnel are accelerated to near the speed of light and then – bang (but not the big bang) – they collide with each other. The resulting impact breaks them apart into smaller sub-particles. The LHC was designed to see if a hypothesized particle, the Higgs boson, could be detected. This discovery is important to physicists since it would help explain how mass is acquired and possibly validate the so-called Standard Model of Physics. Because of its importance to understanding our universe, the Higgs has been called the “God particle”. Physicists are apparently pretty good at creating media-friendly catchphrases.


When I realized that my 2011 summer holiday was going to take me within a short hop of Geneva, I asked my family if they would be willing to take a detour. To my surprise, they enthusiastically approved! Reading about the LHC is one thing; seeing it in person is quite another.


If you look closer at the LHC facility and the quest to detect the Higgs, then you see something amazing. Imagine:
  • the audacity of creating a vision over 20 years ago to build the facility needed to show the existence of the Higgs,
  • the effort required to convince thousands of scientists to work towards realizing the LHC vision,
  • the sales job required to convince dozens of governments to invest billions of dollars in a grand physics experiment,
  • the perseverance required to bring the project to reality,
  • the challenge of finding a particle that is so tiny you can pack roughly 10²⁵ of them into a kilogram (ten million billion billion),
  • the challenge of creating new technology to meet the extraordinary precision needed to “see” the Higgs, and
  • the challenge of getting all of this to work. 
These accomplishments are impressive! But that is not what made the lasting impression on me. At CERN, I saw hundreds of scientists from dozens of countries, all working together in a spirit of cooperation. Politics were irrelevant. Religion was irrelevant. They all shared a passion for scientific discovery. I have seen projects with similar characteristics, but they are all on a micro scale compared to the LHC effort.

It is amazing what humanity can do when we put aside our differences and work towards a common goal. What has been accomplished at CERN is an example that the rest of the world should admire and emulate. In other words, the physics example is a Not-So-Standard Model for the world.

Congratulations to everyone associated with this incredible project!
On holidays at CERN. Photograph taken by the physics star Roger Moore, not the movie star Roger Moore.

Sunday 1 July 2012

"If I Ever Get Into Administration, Just Shoot Me!"

Yes, I said that, first in the late 1980s and several times thereafter (but not since 2005). My wife, Stephanie, continually reminds me of that. As a professor who was having a lot of fun doing teaching and research, it was inconceivable to me that I would ever give this up to “push paper”.

I don’t really know why I took on the position of Chair of the Department of Computing Science in 2005, but things became clearer in 2008 when I agreed to become the Vice Provost and Associate Vice President for Information Technology (longest title at the University of Alberta; size matters!). Today in 2012 I understand why being Dean of the Faculty of Science appealed to me:
  • Have impact. At the risk of being controversial, most research has little impact. Being an academic leader – as Chair, Vice Provost or Dean – can have enormous impact, especially if you’re prepared to create a vision and carry it out.
  • Help others. I know this sounds corny, but it's true: as a Chair I discovered that some of my actions helped others to succeed, and this gave me great satisfaction.
  • Opportunity to learn. My first seven years in administration have been amazing learning experiences. I love to learn, and the opportunity to discover more about biological sciences, chemistry, earth and atmospheric sciences, mathematics and statistics, physics, psychology, and computing science is irresistible.
  • Being challenged. As a researcher, I build high-performance (superhuman) game-playing programs. Administration is a much more complex game (e.g., the rules keep changing). Whereas having a computer win or lose at checkers affects few people, administrative decisions can impact people's lives. The stakes are high – you've got to get it right.
  • Feed the ego. Some academics jump at the chance to become a Department Chair as a way of stoking their ego (Academics? Ego? Go figure). While I am not immune to the lure of ego, this time around it was an insignificant factor (that's my story and I'm sticking to it).
Whatever the reasons, the die is now cast. As of today I begin my appointment as Dean. I’m sure I made the right decision since I’m excited about my new job.