Podcasts

I think podcasts are about to become a big thing. Not that they aren’t already very popular, but I believe they’re about to break through to a much bigger (overall) audience. My friend Joe Miller and I have launched our own show, Oral Argument, in part because I believe strongly in the medium. If you’ve never listened to podcasts, have only listened while sitting in front of a computer, or have only used iTunes and syncing, I’m going to tell you how much easier it now is to listen to them and why you should.

Here’s what’s compelling to me: (1) podcasts are like on-demand radio that caters to your particular interests, and (2) the best of them continue the trend of doing away with that kind of show business artifice represented by laugh tracks, deep-voiced morning zoo hosts, and banal theme music and stingers. Just conversation among people you’d like to hang out with and about things you find interesting. It’s becoming easier to produce these, easier to disseminate them, and easier to listen.

This revolution is happening for many reasons, but that last point — easier to listen — is critical. It used to be, back in the monopolistic dark ages, that regular people just didn’t expect their computers to work very well. They certainly didn’t expect to be able to install applications without a hitch. If you wanted to install a printer, it would usually involve a call to or a visit from the relative who was a “computer whiz.” Those days are over. People expect their computers to work. They expect to be able to install apps. They expect those apps to work well. When things go wrong, they blame the people who write the code, not their own technical ineptitude.

If you want to listen to podcasts, install a good podcast app on your phone. I like Instacast, Downcast, and Castro. The brand new Network looks easy to use. Give Apple’s Podcasts app a pass for now. Marco Arment’s new app, Overcast, is highly anticipated and will hopefully be the best of the lot, but it’s not out quite yet. As John Gruber has pointed out, the podcast app genre is the latest playground for designers and app makers. They’re insanely useful and young enough that radically new designs and functionality are possible.

No matter which one you use, it’s much easier to get into podcasts than you might think. You launch the app, search for shows (by name, by genre, or by popularity), preview and subscribe to them, and that’s it. Once you subscribe, the app will download episodes as they’re released. In your car or on your walks, the latest episodes of your shows will just be there, ready for you to listen. All you need to do is launch the app, go to the search field, type in Oral Argument, and hit subscribe, and your phone will always have our latest episode so that all you have to do is touch the play button.

I woke up this morning to find, waiting for me in Instacast, a new episode of Oral Argument, The Flop House, The Incomparable, The Talk Show, and Accidental Tech Podcast. Others I subscribe to include Philosophy Bites, Radiolab, and Judge John Hodgman. They all have websites, but I never visit them. I heard about them, listened to an episode in the app, and kept the subscriptions. While I don't always have time to listen to all these, I do fit in a good deal of listening while walking the dog, driving, doing dishes, and other such times.

Now, I don’t know whether our little show will ever attract even a moderately sized audience. But it doesn’t have to. We have fun doing it and talking to our guests. It doesn’t cost much to host it, and it’s now very simple for those who do find value in it to keep up with it. That’s what feels like the future: lots of people producing things that lots of other people (perhaps in small groups) enjoy. It’s good to be alive now.

Dark Sky

About a year and a half ago, I was ready to leave the law school for the day, and so I packed up, glanced outside, and noticed a light drizzle. I ride my bike to and from work, about two miles from my home. I have no real problem being rained on, but if I could wait a few minutes and ride in mist or drizzle, I’d prefer it to riding in what might become a downpour. Glancing at the weather, you get something like this, which gives you a rough probability for rain for the “afternoon” or “evening.” Yes, weather.com does give forecasts in fifteen-minute increments, but I’ve never been a fan — having always been partial to the National Weather Service site and, especially in winter storm season, the forecast discussions.

I wanted an app that would quickly tell me what the chances were for rain, at different intensities, over the next half hour or so. An app that would quickly tell me whether to go home right now or wait it out a bit. I had this idea that you could scrape the NWS radar images, use an algorithm to detect the edges of the colored shapes that are the storms, and extrapolate to predict where the colored shapes would be over the next hour (maybe getting a little fancy by pulling in other data). While weather prediction is very, very hard and especially difficult to get right at particular places, perhaps it wouldn’t be so difficult to get reasonably accurate (for my purposes) forecasts for only an hour in the future.
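For what it’s worth, the core of that back-of-the-envelope idea fits in a few lines. Here’s a toy sketch in Python (the tiny grids, the brute-force shift search, and every name in it are mine and purely illustrative; this is not how Dark Sky actually works):

```python
# Toy nowcasting: estimate how a storm "blob" moved between two radar
# frames, then shift the latest frame forward to predict the near future.

def best_shift(prev, curr, max_shift=3):
    """Find the (dy, dx) offset that best aligns prev with curr,
    by brute-force cross-correlation over small offsets."""
    rows, cols = len(curr), len(curr[0])
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(rows):
                for x in range(cols):
                    sy, sx = y - dy, x - dx
                    if 0 <= sy < rows and 0 <= sx < cols:
                        score += prev[sy][sx] * curr[y][x]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def extrapolate(frame, shift, steps=1):
    """Apply the estimated motion `steps` more times to predict
    where the precipitation will be."""
    dy, dx = shift
    rows, cols = len(frame), len(frame[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy * steps, x + dx * steps
            if 0 <= ny < rows and 0 <= nx < cols:
                out[ny][nx] = frame[y][x]
    return out

# A small storm cell drifting one cell to the right per frame.
prev = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
curr = [[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
shift = best_shift(prev, curr)     # → (0, 1): moving right
future = extrapolate(curr, shift)  # storm predicted at column 3
```

Real nowcasting systems use optical flow and far richer data, but the shape of the idea is the same: track the blobs and push them forward.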

Needless to say, I never made the app. But Adam Grossman and Jack Turner (no relation) had a similar idea and an absolutely terrific concept for implementing it. They raised money on Kickstarter and have now shipped Dark Sky. It’s a fantastic application. Far better than the one I’d had in my head and just beautifully designed to do one kind of thing excellently.

On the iPhone, it comprises essentially two views. In one, you see text that tells you what the temperature and state of precipitation are right now and, below it, what will happen in the next hour. This view also shows a very clever graph. On the x-axis is the time, from now to an hour from now. On the y-axis is the level of precipitation, ranging continuously from none to low to medium to heavy. The yellow curve representing this wiggles to show uncertainty. The more it wiggles, the less confident you should be. Of course, this portrayal elides two distinct concerns: confidence that it will rain at all and the heaviness of precipitation. But for the purposes for which you’ll use the thing, collapsing that into one nice graph is somehow perfect.
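One way to read that design: the drawn curve is the expected precipitation plus a jitter whose amplitude tracks the forecast’s uncertainty. Here’s a hypothetical sketch of that encoding (my own reconstruction of the idea, not the app’s actual rendering code):

```python
import math

def wiggly_curve(expected, uncertainty, samples_per_step=10):
    """Turn per-interval (expected precipitation, uncertainty) pairs into
    a drawable curve whose wiggle amplitude grows with uncertainty.
    Both inputs are equal-length lists of values in [0, 1]."""
    points = []
    n = len(expected)
    for i in range(n * samples_per_step):
        t = i / samples_per_step          # fractional index into the inputs
        j = min(int(t), n - 1)
        # High uncertainty means a bigger oscillation around the estimate.
        jitter = uncertainty[j] * 0.5 * math.sin(t * 2 * math.pi)
        points.append(max(0.0, expected[j] + jitter))
    return points

# Confident light rain now; very uncertain heavier rain later.
expected    = [0.2, 0.2, 0.5, 0.5]
uncertainty = [0.0, 0.0, 0.4, 0.4]
curve = wiggly_curve(expected, uncertainty)
# The first half of the curve is flat; the second half oscillates
# around 0.5, which is exactly the "less confident" wiggle.
```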

Just today, I was wondering whether it would rain while I was out for a run. I wanted to know whether it was likely to pour so hard that I needed a sandwich bag to encase my iPhone and whether I should encourage my father-in-law to go out for a walk. The app told me it would continue to rain lightly for a few minutes, not rain for about thirty, and then rain moderately to heavily after that for the rest of the hour. So I grabbed the sandwich bag, encouraged my father-in-law to walk into town (with an umbrella and having thought I could pick him up after the run if need be), and headed out. For the first few minutes, it drizzled lightly. It stopped for the next thirty minutes, and I was drenched for the last fifteen minutes. The app nailed it. Smells like the future.

Clear: An App Object

When I began this blog with a post about the future of applications, I had in mind something like Clear, a new todo app for the iPhone. It, like Flipboard or Twitter for iPad but to an even greater degree, moves away from presenting an interface composed of standard computer-y metaphors. People who use computers all the time understand cursors, menus, dialog boxes, minimizing, windows, and the like. The designer, through these tools, is trying to say, “Use this program like you’ve used other programs that you’ve gradually learned to use or been taught how to use.” Apps of this style scream to the user, “I’m an interface. Just learn which parts of me to select or click in order to request I do something from the list of things I’m able to do.”

Clear is more like a physical object — but not at all in the faux-thing (or skeuomorphic) sense of Apple’s calendar app, which is made to look but not completely act like a calendar, or the leather stitching in Find My Friends. No, Clear has an intuitive, internal logic and physics. It works more like a fully realized thing. You learn how to use it more like you’d learn how to use, say, an old cash register or a popcorn popper. In Clear, you pull, pinch, and push, and the app responds as if it’s a thing in the world being pulled, pinched, and pushed — but responding in slightly magical yet coherent ways. You play with it for a bit, and then you just get how it works. Imagine that.

Clear’s not all the way there, but it’s pushing the boundaries toward App as Object. I’m convinced that the future of applications lies in the design of coherent, complete, functioning things.

[Note: Posting’s slow and going to continue to be slow for the next couple of weeks as I’m deep in article drafting and revising mode. Will post about that when I reemerge.]

Wikipedia

It’s difficult, but I’m trying to avoid using the blog as a running commentary on my favorite 5by5 podcasts. I’ll indulge this time, though, because John Siracusa’s rant on what’s wrong with Wikipedia illustrates a broader problem I’ll write about in an upcoming post on the freedom of speech. For now, I want to add something that I think Siracusa intuited but did not say. As always, it’s about the institution, not the rules per se.

The gist of the rant, which begins at about the hour mark of the podcast, is that Wikipedia’s criteria for inclusion probe a fact’s verifiability, not its truth value. If you know a fact, you are not permitted simply to assert the fact in a Wikipedia entry. But if a fact is published in a “reliable source,” then you can write that fact in a Wikipedia entry. Siracusa points out the various ways that this approach and the other criteria, like notability, which are similarly biased towards appeals to authority rather than truth, undermine truth seeking.

What Siracusa did not say, but which I think he intuits, is why these rules are the wrong fit, institutionally, for Wikipedia. He argues, and I agree, that Wikipedia’s rules seem calculated to appeal to exactly the kinds of people, older school teachers and librarians, who now discount Wikipedia and forbid reliance on it even as they allow other tertiary sources like encyclopedias. Whether or not that was the intent, the rules do seem to replicate those of the encyclopedias that Wikipedia has, in fact, made obsolete.

The trouble is that verifiability criteria are a solution to the institutional problems of encyclopedia editorial boards. These problems are not those of the free-market-style collective that builds Wikipedia. Encyclopedia editorial boards were, of necessity, limited to a small-ish group of people. The challenge was to ensure that such boards were good agents of the readers, meaning dedicated to accuracy and free of undue bias. By restricting inclusion to information verifiable elsewhere, readers had a means to hold encyclopedia producers accountable. After all, the biggest danger in a disconnected world in which information comes to us from a small number of gatekeepers is that those gatekeepers will manipulate the information for their own selfish ends. Verifiability is a strategy to deliver the truth given the particular institutional structures that produced encyclopedias.

But that’s not Wikipedia’s problem, at least not to the same extent. Self-interested and misleading assertions need not stand unchallenged. The “marketplace of ideas” has a chance to work on the Internet to ferret out falsehoods that a cabal of editors might have been able to sneak through decades ago. Wikipedia’s difficulty is to govern the commons, to regulate an open market of speakers to produce a high quality result. For this institution, verifiability is perhaps the wrong strategy to deliver the truth.

This is not an argument that Wikipedia should lift all its restrictions and let the market work out what articles are included and what their contents are. Like other markets, speech markets can fail. But that’s the subject of a forthcoming post.

Why Separate Knob?!

On January 10, 2012, an iPhone sounded the famous “Marimba” alarm. You know the one. Unfortunately, this iPhone belonged to a man seated in the front row of the New York Philharmonic, and the orchestra was in the middle of Mahler’s Ninth Symphony. The conductor, exasperated, let his arms fall to his side, silencing the music. For an uncomfortable period thereafter, the only sound in the concert hall was Marimba. Pure poetry. If you missed it, you can read Daniel Wakin’s report in the New York Times or, through the magic of the interneteratti, enjoy a simulation.

Considerable debate among technology bloggers I follow has erupted over this incident because of one fact. The concertgoer’s iPhone was “silenced” with the “mute switch.” Naturally, this poses a question. Should alarms sound even when a phone is set to mute? Marimbagate (yes, I said it) surely points out the downside to alarms sounding when the phone is muted, and your initial reaction might be, like that of Andy Ihnatko, mute means mute. When the switch is set to silence, the phone should make no noise under any circumstances.

This problem is more general than phones or even technology. It’s inherent in the design of complex systems with many heterogeneous users, whether they be legal systems, computer networks, or ubiquitous, mass-market devices. What do you do when any choice of system behavior will at times deliver unexpected results but meeting expectations is the goal? This particular controversy illustrates how a device in the head is different from a device in the hand and how a seductively clear and simple rule may turn out, in the hand, to be the less desirable one.

In the abstract, Dan Benjamin’s argument is compelling: “Physical settings should always trump and override software settings. If you’ve flipped a switch, you’ve told the iPhone something very important, just like when you flip a switch in the real world.” Further, he argues, the behavior should mirror that of real switches, like light switches, in that the switch should completely disable the system you’re trying to turn off. Mute means mute, as he titles his post. “When I ask the iPhone to be quiet, I’d really like for it to be quiet and stay quiet until I ask it to make noise again, and I think most people expect the mute switch to mute everything.”

As Ihnatko puts it:

I should slide the switch to “Mute,” and then the phone goes SILENT. If I miss an appointment because I did that, it’s completely on me. If my phone disrupts a performance despite the fact that I took clear and deliberate action to prevent that from happening…that’s the result of sloppy design. Or arrogant design, which is harder to forgive.

…

[T]he right answer seems clear. The iPhone must never let a user down the way it let down that man at the philharmonic.

But the iPhone, like many similar devices, is designed to let such a user down in order not to let other users down. As Ihnatko acknowledges, the simplest solution will inevitably result in users’ not waking up on time, missing flights, and otherwise having their expectations foiled in situations where they really didn’t want mute to mean mute. As Marco Arment summarizes, the iPhone mute switch mutes all sounds except: (1) sounds in the Movies or Music apps when you play a movie or song; (2) third-party apps that explicitly choose to ignore the switch, but only if the app is in the foreground, the one you’re currently running (you can hit the sleep switch and the sound will play, but if you exit the app by hitting the Home button, the sound will not play); and (3) alarms and timers set in the Clock app (but not Calendar items).

In each of these three cases, you, the user, are explicitly telling the device to make noise. The design problem is to figure out whether that instruction or the instruction to remain mute should be ignored. The simplest answer, and the most conceptually appealing, is to respect the switch, turning all sound off. In any system, a simple answer that delivers desirable results is preferable to a more complex answer. But sometimes conceptually simple solutions have unintended consequences. And the solution that is more complex conceptually (in the head) is simpler and better in practice (in the hand).
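Arment’s three exceptions amount to a small decision rule. Here is my paraphrase of the described behavior as a function (the names, structure, and `source` categories are mine, not Apple’s actual logic):

```python
def should_play_sound(muted, source, app_in_foreground=False,
                      ignores_mute_switch=False):
    """Decide whether the phone makes noise under the mute-switch rules
    described above. `source` is one of: "media" (Music/Movies playback),
    "clock_alarm", "calendar_alert", "notification", "third_party".
    A paraphrase of the summarized behavior, not Apple's code."""
    if not muted:
        return True
    if source == "media":          # you explicitly pressed play
        return True
    if source == "clock_alarm":    # Clock alarms and timers override mute
        return True
    if source == "third_party":    # only if the app opts out AND is frontmost
        return ignores_mute_switch and app_in_foreground
    return False                   # calendar alerts, notifications: silenced

# The Philharmonic scenario: a muted phone, and a Clock alarm fires anyway.
should_play_sound(muted=True, source="clock_alarm")     # → True
should_play_sound(muted=True, source="calendar_alert")  # → False
```

The point of writing it out is that the rule is not “mute everything”; it is “mute everything except the things the user explicitly asked to make noise.”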

So here’s the downside to mute meaning mute. Suppose you use your iPhone as an alarm clock, as I do. Suppose you also either generally mute your phone, as I do, or at least do so at night to prevent notifications from sounding at 2 a.m. Your alarm would not sound if the mute switch were on. To silence my phone but allow the alarm to sound would require me to unmute the phone and put it in airplane mode — so that no other notifications would come in. The simpler solution is, for me, more complex.

In the head, the device should obey the conceptually simple rule. In the hand, most users expect the device to follow a more complex rule: mute everything except the things I expect to make noise. The iPhone, in my view, obeys the more complex rule that makes for the simplest user interaction. I want my phone to wake me up in the morning and vibrate instead of ringing for phone calls, texts, emails, and the like. The design choice is the simplest rule that matches general user expectation, even if that expectation is not the simplest conceptually.

Let’s think about what has to happen to create a Marimbagate. First, you must have an iPhone, go to the Clock app, and set an alarm. Second, you must be somewhere where you do not want the alarm to sound at the time you set it to sound. Third, you must be unaware that alarms will sound despite mute and also not have turned the phone off. Fourth, you must, despite having set the alarm, not know how to silence it. Marimbagate happened only because the user had an iPhone he didn’t know how to use, one that had been given to him with alarms set (for odd hours) by someone else.

How often will that happen compared to the frequency with which people want to use their phone as an alarm clock but still otherwise be silent? I’d lay a large sum that the latter is far, far more frequent. This is John Gruber’s conclusion as well:

You can’t design around every single edge case, and a new iPhone user who makes the reasonable but mistaken assumption that the mute switch silences everything, with an alarm set that he wasn’t aware of, and who is sitting in the front row of the New York Philharmonic when the accidental alarm goes off, is a pretty good example of an edge case.

Whereas if the mute switch silenced everything, there’d be thousands of people oversleeping every single day because they went to bed the night before unaware that the phone was still in silent mode.

I’d go further than Gruber. I bet that most new iPhone users, if asked what the mute rule was, would respond with “mute means mute.” They would be responding to a question about the right conceptual model. But if you measure their expectations from the way they actually use the device, then you’d see they expect the device to behave pretty much the way it does. The simplest rule, so clear and easy in the head, doesn’t take account of how most people would live with it.

This distinction, between thinking abstractly about rules and considering their consequences, is apparent in the law as well. In the abstract, many people prefer simple solutions like border fences to keep out all illegal immigrants and stiff penalties and deportation for those who get through, but faced with an actual person caught up in such a system, people’s views tend to change. Sentencing a real person to prison or death is far, far different from determining what the right punishment would be for an abstract criminal. Hard-core libertarians are famous for conceptually simple solutions that fail to take account of how life is really lived. In the head, law is easy, but in the hand, we naturally, and more simply, latch on to a more complex calculus for what the rules should be.

Back to the iPhone: well, how about settings? Let users, when they set an alarm, further specify whether that alarm should sound when the phone is set to mute. Yes, settings, the solution to every problem on which users might differ, right? More choice is better! Not necessarily. Yet again, the conceptually simple solution to heterogeneity, choice, fails to take account of what is practically simple. Yes, the introduction of this one setting might not be a big deal. But the mode of thinking that sees every heterogeneity in use as a mandate to introduce a choice, well, there would be no end to it. That way lies disaster, like this and this. An interface littered with choice is one dripping in sadness.

Ihnatko believes that if there are such settings, the default should be that the mute switch silences explicitly set alarms. It seems to me that settings, regardless of default (and I think Ihnatko’s choice of default would exacerbate this), would lead to am/pm and “separate knob” mistakes. You start with the simple, conceptual model, and then to accommodate how people actually use the device, you add settings and choices. But now you have a device that matches no one’s conceptual or practical model but is instead something to be configured. In the immortal words of Jean-Paul, “Why separate knob!? Why separate knob!?” In an instant, Seinfeld gets to the heart of it.

This Thing I Made

Textbooks suck. They’re heavy, difficult to update, expensive, in a fixed order, rarely (excepting some graduate and professional school materials) written by leaders in the field, and too often ridiculous compromises reached by less than competent state committees. A few years ago, I built HydraText.org, which made it possible to solve these problems by giving teachers the power to more easily build and share their own textbooks. Next week, Apple just might introduce software that solves them for everyone else.

Textbooks Are Playlists

Think about what a textbook is and how you’d build one. There really isn’t much difference between making a textbook and making a playlist in iTunes. With music, you put your songs in order. Once you’ve done that, you can burn the playlist to a CD (I said “can,” not “would” — it’s not 2001 anymore), listen right on your computer, or sync up the playlist with your phone. So you build it by choosing and ordering songs, and then you output it in various ways.

Same thing with textbooks. You make a table of contents, and under each heading, you put contents. Traditionally, textbooks could only be bought as complete works and in a single format. The teacher or professor would choose a book. The students would then pay whatever it cost and lug it around. The prof then had a choice: simply teach the book in order (just play the playlist the publisher shipped) or try to adapt the book to the class he or she actually wanted to teach. Most of my favorite teachers did the latter and would also supplement the book with other materials, some from other books or journals and some of their own notes or other writing.

The actual course book, then, would be another playlist, one laid out in the syllabus, which specifies how the units of content are ordered. First, read this, which can be found here. Then read that, which can be found there. The physical textbook is, in this model, a sourcebook, one among a library of materials from which the virtual course book draws.

There must be a better way. What we want are beautiful, high-quality textbooks, customized to the course the teacher wants to deliver, at low or no cost to students, and available in a range of convenient formats. Getting this right requires a different information-sharing architecture than existed when I first decided to build HydraText.

Many to Many

The best example of shared production of written content on the internet has probably been Wikipedia. The whole idea of a wiki is that a great many people can collaborate to produce a single thing, an encyclopedia for example. Some tasks lend themselves to parallel effort like this, and others don’t. For textbooks, there is Wikibooks, which has essentially the same architecture as Wikipedia. Many people cooperate to produce textbooks.

But that’s not good enough. A teacher wants to produce the perfect textbook for him or her while at the same time taking advantage of others’ work producing similar materials. We need software that permits many people to collaborate to produce not a single, definitive thing but many customized things.

Here’s the HydraText solution. Every teacher has an account. And each subject has its own space (or Hydra, as I call it). You can upload or enter individual units of content, which I call Articles. These can be law cases straight from Google, your own text (ideally in John Gruber’s Markdown format, though it’ll handle Word documents and other formats as well), PDFs, or even items from the web you’ve saved in Instapaper. You can build a Textbook by creating a table of contents and then adding Articles to each heading. You then hit a button, and out comes a web, ePub, and PDF version of your textbook.

But the real potential lies in sharing. You see, no one else can change your book, like they could if it were a wiki. But they can copy it and make it their own: rearrange the chapters, add more content, or delete content. They can take a chapter of your book, add a chapter of someone else’s, and add some of their own content. And they can build their own book using Articles you and others have added. This is the origin of the name HydraText: hydras are tiny freshwater animals that can grow into whole new animals if cut into pieces.
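If it helps, the data model behind that sharing story is roughly this (a hypothetical sketch in Python; the real HydraText schema and naming surely differ, and the example content is made up):

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """A single unit of content: a case, a note, a saved web item."""
    title: str
    body: str

@dataclass
class Textbook:
    """A table of contents mapping headings to ordered Articles,
    essentially a playlist of content."""
    title: str
    owner: str
    chapters: dict = field(default_factory=dict)  # heading -> [Article]

    def add(self, heading, article):
        self.chapters.setdefault(heading, []).append(article)

    def fork(self, new_owner):
        """Copy the whole book so another teacher can rearrange, add,
        or delete without touching the original (the hydra move)."""
        copy = Textbook(self.title, new_owner)
        copy.chapters = {h: list(arts) for h, arts in self.chapters.items()}
        return copy

book = Textbook("Torts", owner="alice")
book.add("Negligence", Article("A leading case", "…"))

theirs = book.fork(new_owner="bob")
theirs.add("Negligence", Article("My own note", "…"))
# book still has one Article under "Negligence"; theirs has two.
```

The key design point is that forking copies the playlist, not the Articles themselves, so everyone’s books can share the same underlying units of content.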

An iTunes Education Store

My main goal with HydraText was to permit people to cooperate on producing customized texts. The result is an iTunes-style process for building a playlist from content you provide or that already exists in the Hydra. To extend the analogy, HydraText is meant to provide an iTunes Store filled with content in your subject area.

This content has the potential to be much better than what the textbook industry now produces. As it is, to author a casebook is a monumental undertaking. You likely need co-authors and research assistants. A Nobel Laureate, for example, is unlikely ever to take up the task of authoring a Biology 101 textbook but might enjoy writing a terrific section dealing with the heartland of his or her research. Just think of all the supplements teachers at all levels have prepared for their classes and that remain only in file folders, unused by others. By reducing the unit of meaningful contribution from an entire book to a single Article (however short), the HydraText model holds the potential of global use of better learning materials than have ever been produced.

Incidentally, although I haven’t added this feature, there is no barrier to letting users set a price for their content. And so the analogy to the iTunes Store gets even closer. Most materials shared would probably be free, like podcasts, but great content might be worth paying for.

Here’s an example of a textbook, a website from which students can browse the book directly or download a PDF or ebook. This particular book lacks audio, video, or images, all of which are possible, but it gives you an idea. I haven’t used a commercial textbook in years. In the past, my students generally wanted printed copies, and so I’d arrange a group rate with Kinko’s (now FedEx Office), at maybe $10 for an 800-page book. This year, I’m just beginning to see more students prefer digital-only versions. However they consume it, the choice is theirs. They can read it on their phones, in a three-ring binder, on a tablet, or in a web browser.

What’s Apple Going to Do?

Back in January of 2010, about a week before the iPad was announced but when it was clear a tablet was in the offing, I emailed Steve Jobs to tell him about HydraText. I had no idea whether he’d even see it (he did), but I wanted him to know just how important the tablet could be for breaking the textbook logjam. I didn’t hear back from him, but, obviously unrelated to anything in my email, we now know that Jobs had targeted the textbook industry as the next to be revolutionized. According to Isaacson’s Steve Jobs biography (another topic, but if you’re at all interested in Apple or Jobs, you owe it to yourself to listen to this), he apparently wanted “to hire great textbook writers to create digital versions, and make them a feature of the iPad. In addition, he held meetings with major publishers, such as Pearson Education, about partnering with Apple.”

On Wednesday of this week, Apple issued an invitation to the media to gather at the Guggenheim Museum in New York on Thursday, January 19, for “an education announcement.” According to the New York Times, the “event will showcase a new push by Apple into the digital textbook business, but will not feature any new devices.”

I think we can assume, at least, that Apple is planning to make textbooks easy to acquire and to consume on the iPad. It’s a reasonable guess that Apple’s solution will use the ePub 3 standard, which would allow richer ebooks with more interactive features. But what architecture will Apple pursue? Will they treat textbooks like they do music, where Apple has made deals with the major publishers and features their books on something like the Store, where they can be easily downloaded? Would there also be something like podcasts on the Store, where teachers can, perhaps using Apple tools, produce and place their own materials and where their students can easily find them? Or could there be something more, a place for more than just books, some Apple version of MIT’s OpenCourseWare, where whole courses live and where the book is just an integrated piece?

My gut, and that’s all it is, tells me Apple will release new reader software, maybe standalone or maybe a new version of iBooks, on the iPad that allows for deeply interactive textbooks, which it will provide on iTunes free or very cheaply. I’m less certain about book creation tools. The Jobs model, from the limited quotes available and from what we know of his preferences and approach in other industries, would seem more likely to be making available beautifully made, professionally produced materials, rather than the tools for open-source-style collaboration.

But perhaps HydraText will become obsolete because Apple provides better production and collaboration tools. That will be so if Apple sees the essential value of customization and production in education. Perhaps, though, HydraText will become even more useful if Apple only provides a better reading platform and supports the richer books HydraText could be used to produce. Quite honestly, whichever they do will be fine by me. I’m a teacher first, and what I really want is the best possible experience for my students.

I’m excited, though, because we’re on the cusp of changing everything. My son now carries to school a backpack filled with tens of pounds of static books and a ridiculous, boxy Thinkpad in an even more ridiculous carrying case. All this is about to be disrupted.

I remember being in law school and having a class discussion about technology prediction. Electronic books and videophones were held up as examples of technology that people always predicted would soon arrive but which never did. For ebooks, the claim was that there were fundamental reasons for their inferiority to paper books. I argued that this was only a matter of display technology and that the advantages of electronic processing, storage, and display would become overwhelming with time. I have been wrong about a great many things in my life, but that was not one of them.

Five Xboxes

In today’s New York Times, Amy Chozick reports on the “whiter collar” approach to cable guys that cable companies are taking. Two things. First, check this out:

Quirino Madia, a 46-year-old supervisor for Time Warner Cable, recently set up a system for Selene Tovar, 35, a stay-at-home mother of three.

She needed the Internet service at her three-story home in New York’s Chelsea neighborhood to be fast enough to power the family’s six televisions, five Xboxes, several PlayStations and multiple iPads and laptops. Even a new scale — a Hanukkah present from her husband — required a fast connection so it could send daily weigh-ins to an iPhone app.

I don’t even know what to say. So I’ll just leave that right there.

Second, the article refers throughout to “high tech” installations that are complex because of the sheer number and variety of devices that will be sharing the connection. This is, of course, nonsense. When you “install” a router, it makes no difference whether it talks to five Xboxes or a single laptop. (Yes, I get that some of these providers may be setting up each device, and maybe I’m out of touch with how difficult it is for some people to connect an Xbox to a wireless network.)

The one job that we want our cable-based Internet to do is to provide a working Ethernet cable to our wireless router. If the cable company wants to sell wireless routers on the side, fine. Nothing they do, however, should be more complex on account of my having two dozen iPads in the house rather than a single, beige Packard Bell in the basement that plays the latest CD-ROM games, runs Napster, and surfs GeoCities home pages on the World Wide Web.

But, you see, the cable companies want to delight us with high tech magic, not just provide a dumb pipe:

“We think the consumer wants a state-of-the-art experience,” Brian L. Roberts, Comcast’s chairman and chief executive, said as he showed off the company’s forthcoming, partly cloud-based cable box with the internal code name of Xcalibur.

Sure, I get it. The cable companies don’t want to be the water company. They want to be in the content business, and they’re afraid if they don’t move quickly that they’ll be the next RIM, while Google, Apple, and others take the higher-margin content-delivery business. This is all an artifact of a time when cable was content and when it was not possible to get your data feed from one company and all your content from another. Now, however, I don’t need my internet provider to do anything other than provide fast internet service.

With any luck, technology will come around that creates a real market for commodity internet providers. Until that happens, I think we should let these companies choose. Either you get to be a local monopoly providing data service OR you get to sell content. It looks like we’re about to see what happens when large corporations that have local monopolies, a history of poor service and high prices, and a perfect track record of producing the world’s worst user interfaces get into the cloud-based “state-of-the-art experience” business.

Apps as Objects

What is it that makes some mobile apps feel so compelling? Why is the skeuomorphic design of some of Apple's recent apps so awful? Why do desktop apps feel dated? I came to my own answer to these questions while listening to Dan Benjamin and John Gruber complain on The Talk Show about the regrettable new version of the official Twitter app for iPhone.

Ever since I can remember, applications have presented interfaces to me. Even the best of them put graphics on a screen that clearly mediated the relationship between the user and the processing of data managed according to a set of rules. (The worst of them were exercises in figuring out how to manipulate a crummy interface to do the processing you wanted done. A number of versions of Microsoft Word, for example, required hunts through menus and dialogs to determine the magical series of clicks that would trigger the right formatting or appearance of text. You knew what you wanted, and you knew it could be done, but not how to tell Word to do it.) Great apps could try to make you forget, through inspired design and new metaphors, but in the end you knew an interface was there: that there was mediation between the processing you wanted the app to do and the actions required to trigger it.

Touch computing makes progress on this front, as it removes at least one layer of artifice. If the interface doesn't lag, then selecting and tapping feel like direct manipulation of data, not like actions requesting processing, followed by a graphical response. A drag of the mouse and a click of its button are abstract: obviously actions you intend the computer to observe and respond to. Touch rids us of this abstraction, but the way forward requires more.

I believe that the best apps will be designed as complete objects. They will have an internal physical consistency, a logic of manipulation, that will make using them feel – not just look – like using a *thing* rather than putting in a series of requests. When you pick up a new object in the world, you look at it, turn it over, fiddle with it. You try to figure out what it can do, how it interacts with other things, and you gather this from its shape and how it responds. Think about picking up a book, a Swiss Army knife, a jewelry box, or anything else you happen to have around you. All these are designed, to be sure, but you learn how to use them and know their limits by the physics that apply to everything else in the world. How they are put together dictates the rules of your interaction with them, and you just know those rules because physics are everywhere and the object is right there, exposed in front of you.

I keep coming back to this when I think about what makes the Twitter for iPad app so compelling, even though it has significant flaws. That app, with its sliding drawers, hints at a semblance of being a complete thing unto itself. You feel, when the app is at its best, that you are playing with an object that obeys known physical rules, not requesting some data processing to be executed and reported back to you.

Now, Twitter for iPad is not perfect. Tapping a tweet to see a conversation feels like a request, not a physical revealing of a conversation from the germ of the tapped tweet. Doing a reverse pinch to expand a tweet into a profile, figuring out whether to hit the tweet itself or the picture of the Twitter user, and some of the sliding don't feel particularly intuitive or connected to an easy-to-understand set of rules. But the potential is obvious. With more attention to detail, using Twitter could be like using a real thing, like a magical set of cards where the magic has simple, clear rules and bounds.

Although Twitter for iPad feels a little like sliding cards on a desk, it doesn't try to replicate that experience, and it certainly doesn't look like a desk covered with paper. Flipboard may be an even better example. It doesn't act like any real book, and yet turning its "pages" is addictive because it feels like direct manipulation, not input and output. That probably gets at it best: it's the feel of using these apps, not the look of the animations in itself, that makes them seem so right.

Indeed, the best apps will likely not take many graphical cues from the real-world objects they logically resemble. Criticisms of skeuomorphic designs in Lion, like Address Book and Calendar, and in iOS, like iBooks and Find My Friends, are rampant and on target. Although I think Steve Jobs was right to focus relentlessly on making users feel like they were using a real object rather than a computer, these Apple apps make clear why mimicking the appearance of a real object is often a lousy way to do that. A book-reading app will never be a book, and we shouldn't enlist the user in pretending otherwise. It's like putting wood paneling on a TV or making a car look more like a buggy. No thing will ever be great masquerading as something else. Such apps can only disappoint when their physics and rules diverge from the real thing. If it looks just like a calendar and I can't tear off pages, circle dates, flip back and forth, and the like, it will feel like I'm using a poor facsimile of a calendar. I'd rather use a new kind of object altogether.

The object-oriented app will have a consistent internal physics. It will have a simple, bounded set of interaction rules. And most importantly, using it will feel like manipulating a thing. We will use the app to make things happen directly, not as a remote control for sending messages in order to receive other messages.

I don't know what these new apps will be, but I do know that we've glimpsed their birth.