Sep 10, 2010
 

Photograph of Stéphane Mallarmé's Un Coup de Dés, Public Domain

E-readers have tried to make reading as smooth, natural and comfortable as possible so that the device fades away and immerses you in the imaginative experience of reading. This is a worthy goal, but it also may be a profound mistake.

This is what worries Wired’s Jonah Lehrer about the future of reading. He notes that when “the act of reading seems effortless and easy … [w]e don’t have to think about the words on the page.” If every act of reading becomes divorced from thinking, then the worst fears of “bookservatives” have come true, and we could have an anti-intellectual dystopia ahead of us.

Lehrer cites research by neuroscientist Stanislas Dehaene showing that reading works along two pathways in the brain. When we’re reading familiar words laid out in familiar sequences within familiar contexts, our brain just mainlines the data; we can read whole chunks at a time without consciously processing their component parts.

When we read something like James Joyce’s Finnegans Wake, on the other hand — long chunks of linguistically playful, conceptually dense, sparsely punctuated text — our brain can’t handle the information the same way. It falls back on the same pathways we used when we first learned how to read, processing a word, a phoneme or even a letter at a time. Our brain snaps upright to attention; as Lehrer says, “[a]ll the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.”

I think Lehrer makes a few mistakes here. They’re subtle, but decisive. I also think, however, that he’s on to something. I’ll try to lay out both.

First, the mistakes. I think Lehrer overestimates how much the material form of the text — literally, the support — contributes to the activation of the different reading pathways in the brain. This actually deeply pains me to write down, because I firmly believe that the material forms in which we read profoundly affect how we read. As William Morris says, “you can’t have art without resistance in the material.”

But that’s not what Dehaene’s talking about. It’s when we don’t understand the words or syntax in a book that we switch to our unfamiliar-text-processing mode. Smudged ink, rough paper, the interjection of images, even bad light — or, alternatively, gilded pages, lush leather bindings, a gorgeous library — are not relevant here. We work through all of that. It’s the language that makes this part of the brain stop and think, generally not the page or screen.

Second, it’s always important to remember that there are lots of different kinds of reading, and there’s no particular reason to privilege one over another. When we’re scanning the news or the weather (and sometimes, even reading a blog), we don’t want to be provoked by literary unfamiliarity. We want to use that informational superhighway that our brain evolved and that we have put to such good use processing text.

Reading is, as the philosophers say, a family-resemblance concept; we use the same word to describe different acts that don’t easily fall under a single definition. It’s all textual processing, but when we’re walking down a city street, watching the credits of a television show, analyzing a map, or burying our head in James Joyce, we’re doing very different things. And in most cases, we need all the cognitive leverage we can get.

Now, here’s where I think Lehrer is right:  Overwhelmingly, e-books and e-readers have emphasized — and maybe over-emphasized — easy reading of prose fiction. All of the rhetoric is about the pure transparency of the reading act, where the device just disappears. Well, with some kinds of reading, we don’t always want the device to disappear. Sometimes we need to use texts to do tough intellectual work. And when we do this, we usually have to stop and think about their materiality.

We care which page a quote appears on, because we need to reference it later. We need to look up words in other languages, not just English. We need displays that can preserve the careful spatial layouts of a modernist poet, rather than smashing them together as indistinguishable, left-justified text. We need to recognize that using language as a graphic art requires more than a choice of three fonts in a half-dozen sizes. Some text is interchangeable, but some of it is through-designed. And for good reason.

This is where we’ve been let down by our reading machines — in the representation of language. It isn’t the low-glare screens, or the crummy imitative page-turn animations. They’ve knocked those out of the park.

In fact, we’ve already faced this problem once. In the late nineteenth and early twentieth century, book production went into overdrive, while newspapers and advertising were inventing new ways to use words to jostle urban passers-by out of their stupor.

Writers wanted to find a way to borrow the visual vitality of what was thought of as ephemeral writing and put it in the service of the conceptual richness and range of subject matter that had been achieved in the nineteenth-century novel.

That’s where we get literary and artistic modernism — not only Joyce, but Mallarmé, Stein, Apollinaire, Picasso, Duchamp, Dada, Futurism — the whole thing. New lines for a new mind, and new eyes with which to see them.

That’s what e-books need today. Give us the language that uses the machines, and it doesn’t matter if they try to get out of the way.



Sep 10, 2010
 

Apple on Thursday published a set of rules about the types of content that aren’t allowed in the iOS App Store, answering questions that have been bugging software developers and customers for years while introducing some new ambiguities.

Still, it’s an important step. Now that the guidelines are public, Apple’s mobile customers can know what they can and can’t get on an iOS device versus, say, an Android phone. Third-party programmers will also have a clearer sense of whether to invest in developing an app; before, they were subject to rejection without knowing what they weren’t allowed to do. Even so, some developers think parts of the guidelines could be clearer.

“By no means is what they put out today perfect,” said Justin Williams, developer of Second Gear software, who quit iPhone development last year. “There are some vague areas. But compared to where we were yesterday, it’s a big improvement.”

Apple CEO Steve Jobs has described the App Store as a “curated platform” that is regulated to ensure a high-quality, secure experience for customers. The iPhone, iPad and iPod Touch get third-party applications through the App Store, and Apple must approve any software before it can be sold there. Unless you hack your iOS device, the App Store is the only way to get additional native software.

The regulated App Store model deviates from the traditional experience of owning a PC, where customers can typically purchase and install any software that’s compatible with their computers. Critics have argued that by curating the iOS platform, Apple tightly controls the mobile devices that customers own as well as the developers who create software for them.

Additionally, because Apple hadn’t published the guidelines of its iOS app review policy, programmers were left guessing as to what they were allowed to create, potentially putting a bottleneck on their innovation. Publishing the list of app review guidelines — a step that Wired.com called for Apple to take in a previous editorial — addresses this potential problem of self-censorship.

“Hopefully it will give developers increased confidence when starting projects,” said Jamie Montgomerie, developer of the Eucalyptus book-reading app, which was approved by Apple after its controversial rejection. “I suspect there are a lot of interesting apps that were never made because people were scared of the approval process.”

Apple’s seven-page list of guidelines (.pdf) splits reasons for app rejections into 11 categories. Reasons for rejection range from technical to editorial offenses: Apps that crash will be rejected, for example, and apps that defame people in a mean-spirited way are rejected, with the exception of political satirists and humorists.

“We hope they will help you steer clear of issues as you develop your app, so that it speeds through the approval process when you submit it,” Apple said in a statement Thursday about the app guidelines.

The publication of the guidelines is a major step toward transparency for a company as opaque as Apple. Since the App Store opened in 2008, critics have scrutinized its undisclosed editorial guidelines, which resulted in seemingly arbitrary rejections of a wide variety of applications.

For example, Apple in 2009 rejected an app called Me So Holy, which enabled iPhone users to edit their self-portraits to look like Jesus Christ. However, Apple that year approved Baby Shaker, a game that involved shaking a baby to death. Apple later pulled Baby Shaker, admitting its approval was a mistake.

Because of its unclear app approval system, some developers gave up on making content for the App Store because they couldn’t be sure that an app would be a wise investment of their time and money. Second Gear developer Williams said he quit iPhone development last year because Apple didn’t disclose its policies.

“One of the big reasons I got frustrated was I didn’t like the black box review system, which is basically you’re submitting your apps to the review process and you have no idea what the review process is,” Williams said. “I think [Apple publishing guidelines] is a good step towards being more up front and honest about what the criteria is.”

However, Williams noted that there is still room for improvement, as several parts of the guidelines remain unclear. For example, one clause says apps will be rejected if they duplicate the functionality of other apps, “particularly if there are too many of them.” Williams said it was unclear how many is “too many,” and such vagueness could discourage developers from competing with other apps in the App Store.

It also remains to be seen whether Apple’s App Store will now allow Adobe to join the iOS scene. In addition to publishing guidelines, Apple said in a press release that it was “relaxing all restrictions on the development tools used to create iOS apps, so long as the resulting apps do not download any code.” This change was not detailed in Apple’s guidelines, but some are speculating that Adobe’s iPhone Packager, a tool to automatically convert Flash software into native iPhone apps, will be allowed — whereas before, third-party app-creation tools were banned. Wired.com’s Epicenter will have more to report soon on that aspect of Apple’s App Store revisions.

Brian X. Chen is author of an upcoming book about the always-connected mobile future titled Always On, due for publication Spring 2011. To keep up with his coverage in real time, follow @bxchen or @gadgetlab on Twitter.


Photo: Jon Snyder/Wired.com


Sep 10, 2010
 


Apple has opened up the App Store review process, dropping its harsh restrictions on the tools developers are allowed to use and at the same time actually publishing the App Store Review Guidelines — a previously secret set of rules that governed whether or not your app would be approved.

Apple did not specifically mention Adobe — though investors drove shares of the company up 12 percent on the news — but the changes seem to mean that you can use Flash to develop your apps, then compile them to work on the iPhone and iPad with a tool called Adobe Packager. This could be a boon to publishers, including Condé Nast, owner of Wired, which use Adobe’s Creative Suite to make print magazines and would now be able to easily convert them into digital versions instead of re-creating them from scratch in the handful of coding languages Apple had previously allowed.

To be clear, that doesn’t mean Flash is coming to iOS as a plugin: You still won’t be able to view Flash content on your iPhone, iPad or iPod Touch. This change in Apple’s policy just means developers can use third-party tools such as Flash to create apps sold through the App Store.

And transparent guidelines will go a long way to making iOS a better place for developers. Previously, you wouldn’t know if you had broken a rule until your app was rejected. And if your app had taken months and months and tens of thousands of dollars to develop then you were pretty much screwed.

This uncertainty has kept a lot of professional and talented developers out of the store and caused the rise of quick-to-write fart applications. In fact, the point I have heard over and over is that developers don’t mind what the rules are, as long as they know what they are.

The second part of Apple’s relaxation of restrictions is even less expected. Here’s the relevant point from the press release:

We are relaxing all restrictions on the development tools used to create iOS apps, as long as the resulting apps do not download any code. This should give developers the flexibility they want, while preserving the security we need.

This is a direct reversal of Apple’s previous ban on third-party development tools. Why? Games. Many games use non-Apple, non-iOS code to make them work: the Unreal Engine behind the stunning Epic Citadel shown off at last week’s Apple event, for example, would fall foul of Apple’s previous rules. The “do not download any code” part of this is important. Apple will let you use non-iOS runtimes within your apps as long as it can inspect them first. Anything downloaded after installation will bring out the ban-hammer.

It’s a completely unexpected reversal, and one which will eventually lead to much more complex and refined apps in the iTunes Store. And everyone should be pleased about that.

Statement by Apple on App Store Review Guidelines [Apple]


Follow us for real-time tech news: Charlie Sorrel and Gadget Lab on Twitter.

Photo: Jon Snyder/Wired.com


Sep 10, 2010
 

ARM Cortex A-15 MPCore image via ARM

Almost all high-profile mobile devices use a version of ARM’s microprocessor. Samsung, Texas Instruments and Qualcomm compete to get their chips into different devices, and Apple now makes its own, but all of them license the underlying tech from ARM. Now ARM has announced its next-generation Cortex chip, the A-15, and it’s a doozy.

The new chip was announced at a press conference last night in San Francisco. Eric Schorn, ARM’s vice president of marketing, said, “Today is the biggest thing that has happened to ARM, period.” The chips, which will support up to four processing cores, should appear in consumer devices sometime in 2012.

The big breakthrough for the Cortex A-15 is virtualization. For instance, Samsung’s new Orion chip, which is based on ARM’s Cortex A-9, can send different video images to multiple screens. The A-15 can actually support different operating systems or virtual appliances on those screens. So when VMware Fusion finally hits your iPad, it might really have something to work with.

Hardware virtualization has traditionally been the hallmark of chips designed to power servers, which frequently have to support different environments; with this chip, ARM is bringing a little bit of the server’s versatility to the smartphone, and (it hopes), some of the power-conserving elements of smartphone chips to servers.

Finally, there are the markets everywhere in between: tablets, laptops and home media servers, among others. Om Malik calls the A-15 “a tiny chip with superpowers.” That might not be far off.



Sep 10, 2010
 

This ugly monster is either the most ridiculously niche iPad accessory yet, or it’s a photographer’s best friend. Actually, it could be both. The HyperDrive iPad Hard Drive is an external USB storage box for your tablet, holding up to 750GB of movies and photos and serving them up to the iPad via the Camera Connection Kit.

The iPad is a wonderful device for viewing photos and movies. I have the Camera Connection Kit, and it’s a great way to check, edit and send photos when on a trip away. The problem is that even a 64GB iPad will fill up pretty quickly, especially if you’re shooting a lot of RAW files.

The iPad can in fact read files from any USB drive that is formatted the right way. It needs to use the FAT32 file system (the same one all camera memory cards use), and the files need to be in a folder called DCIM. The problem is that there is a limit on the size of the drives that can be used: anything over 32GB won’t work.
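For the curious, here’s a minimal Python sketch of what “formatted the right way” amounts to. The mount point is hypothetical, and verifying the FAT32 filesystem itself is platform-specific, so that part is only noted in a comment:

```python
import os
import shutil

MAX_BYTES = 32 * 1024**3  # the ~32GB ceiling described above

def ipad_readable(mount_point):
    """Return True if the drive at mount_point looks iPad-readable:
    small enough, with photos in a top-level DCIM folder.
    (Checking that the filesystem is FAT32 is platform-specific
    and omitted here.)"""
    total, _, _ = shutil.disk_usage(mount_point)
    if total > MAX_BYTES:
        return False  # drives over 32GB won't be recognized
    return os.path.isdir(os.path.join(mount_point, "DCIM"))

print(ipad_readable("/Volumes/USBDRIVE"))  # hypothetical mount point
```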

The HyperDrive gets around this by only offering photos in 32GB virtual drives that the iPad can see. You load the images onto the drive itself via two card-reader slots (any card will fit) and can browse the file-structure on the built-in screen via an interface even uglier than the unit itself.
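HyperDrive hasn’t said how its firmware manages this, but the trick is easy to sketch: batch the files so that each batch fits under the 32GB ceiling, then expose one batch at a time as a virtual drive. A rough Python illustration, assuming a simple first-fit grouping:

```python
import os

CHUNK_LIMIT = 32 * 1024**3  # largest volume the iPad will mount

def split_into_virtual_drives(photo_paths):
    """Greedily group files into batches that each fit inside
    one 32GB 'virtual drive' (first-fit, preserving order)."""
    drives, current, used = [], [], 0
    for path in photo_paths:
        size = os.path.getsize(path)
        if current and used + size > CHUNK_LIMIT:
            drives.append(current)  # this batch is full; start another
            current, used = [], 0
        current.append(path)
        used += size
    if current:
        drives.append(current)
    return drives
```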

If you need something like this, then you’ll already have skipped to the link below and be ordering one. Otherwise you’ll likely be slightly bemused as to what possible point this could have. If you are in the latter group, let me give you another chuckle: the bare-box comes in at $250. Add in a 750GB hard-drive and you’re looking at $600. Ouch.

HyperMac iPad Hard Drive [HyperShop via Digital Story]


Follow us for real-time tech news: Charlie Sorrel and Gadget Lab on Twitter.


Sep 10, 2010
 

MSI Wind U160; image via MSI.

Three years ago, Bill Gates looked like a dummy for carrying around a tablet. Steve Jobs was ragging on netbooks and tablets when he was rolling out the MacBook Air. Now, eight months post-iPad, everybody’s pushing out tablets, and netbooks are looking very 2007. But any death notices anyone puts out for the netbook are premature.

Let’s check the numbers. One of the big research reports thrown around is from Forrester Research, which predicts that tablets will outsell netbooks by 2012, pass netbooks in total usage by 2014, and hold a 23 percent share of all PCs (a category that for Forrester includes everything from a tablet on up) by 2015. By then, Forrester predicts, netbooks will have only 17 percent of the PC market, just behind desktops with 18 percent.

Wait a minute — 17 percent of all computers in 2015 will be netbooks? About as many netbooks as desktops? And the whole personal computing pie is going to continue to grow? Maybe this is silly, but — isn’t that still really, really good?

The tablet has mindshare, but not yet market share. Netbooks are already starting to strap on the powerful new dual-core mobile processors that will give them full computing parity with notebooks. And the two innovations of netbooks, small screens and small hard drives, have already come uncoupled — you have lightweight, large-screen/low-storage devices like the MacBook Air or Samsung N150 and compact, high-powered netbooks like the 250GB MSI Wind U160. They’re all getting better at managing battery life, too, which remains the real bane of all portable computers, netbook and tablet alike.

Part of the problem has been the unrealistic expectations manufacturers and analysts had for netbooks three years ago. It was foolish to think that everybody and their cousin would buy a netbook, and that other lightweight form factors like the tablet (which, people forget, had already been kicking around for a while) weren’t going to jump up and take a chunk. If you look at projected numbers five years out and assume that all of the form factors are going to look and function the same way they do now, that’s foolish too.

At CNET, Erica Ogg asks “So, Who’s Still Buying Netbooks?” Tech/culture blogger Joanne McNeil had already written a terrific post answering the question, “Why I Got a Netbook Instead of an iPad.” McNeil bought a $300 off-the-shelf Asus, took it to Asia for the summer, and loved it.

First, there’s the cost difference: “the price difference wasn’t simply $200. The iPad required accessories — the case, the bluetooth keyboard, the SD adapter — the total price would hoover just under what I spent the year before on my new laptop.” Then there’s that keyboard, which some people hate and others need:

As a non-dude with narrow fingers, the keyboard feels right to me [Maybe the Macbook's wide keyboard, like the name iPad and their translucent staircases (Skirts! Steve Jobs! Women wear skirts!) is another example of Apple's failed outreach to women in market research.]

The computer industry — and maybe even more so, the marketers who work for it and the media who cover it — is always looking for products that scale: something that can be put as-is into everyone’s hands. Netbooks don’t have to be that thing any more. They can be quirky, eccentric — just right for one user and for her alone.



Sep 10, 2010
 

Unless you have a box of Pentax lenses lying around, there’s little reason to buy a Pentax SLR: Nikon, Canon and even Sony are the places where innovation and competition are forging the best cameras around. On the other hand, those boring companies don’t make their SLRs in anything other than practical black. With Pentax’s new K-r, you have a choice of red, black or white.

Inside the candy-coating, Pentax has added a 12MP sensor (with ISO up to 25,600 in extended mode) which can capture photos at a speedy 6 frames per second, as well as 720p video. Round the back is a 921,000-dot LCD screen, and inside the pentamirror viewfinder (not the brighter pentaprism found on its older brother, the K-7) the active focus point is now illuminated. See? It’s pretty dull stuff.

Going on the spec sheet alone, this camera could be judged as competent, in the same way an accountant might be described as “competent” (a desirable, if unexciting, trait). If you’re expecting me to say that the camera makes up for this with personality, you’ll be disappointed. The only quirk is that it can use AA batteries as well as the supplied lithium-ion cell. Again, useful, but it’s no in-camera HDR or iTTL flash-control system.

If you can stay awake long enough to make it to the store, the K-r will cost $800 with the included 18-55mm kit lens, which puts it smack between the high-end K-7 and the toylike K-x. Available October. Zzzzz.

Pentax K-r press release [DP Review]


Follow us for real-time tech news: Charlie Sorrel and Gadget Lab on Twitter.


Sep 10, 2010
 

Promotional Image From Google TV

Google unveiled its new Instant search feature, which autoloads search results as you type. I’m skeptical about claims that it will save fifty kajillion man-hours once you add up all the milliseconds saved. Its real use cases are still on the way: local, mobile, and video search.

Part of the inherent silliness of doing a Google Instant search on the wide-open web is the sheer size and heterogeneity of the data sets you’re working with. Google has no idea whether you’re looking for a quote, a movie title, a blog, a government site, or a string of text you remember sticking into a doc file months ago. So it spits out a similarly wild range of results.

Now let’s suppose we narrow that data set. Suppose I’m not looking at every string of text on the web, but for movie or television titles on the new Google TV.

Now, when I begin to enter text, Google will have a much better idea of what I’m looking for. In fact, it might actually be able to give me what I’m looking for even when I don’t know what that is.

The key to the next generation of TV is likely to be search, and the biggest drag on search is going to be text entry. This isn’t your laptop; people are going to be banging out text on remotes and mini-keyboards in bad light. Anything a company can do to minimize the number of keystrokes and make that process as painless as possible is going to be a tremendous usability boon to its customers.

If Google TV is really going to be the “one screen to rule them all,” it has to solve that problem.

Suppose I’m looking for a movie I saw years ago. I can’t remember anything about it except it was an action movie and that I think the word “China” was in the title. IMDB.com might be able to tell me the title and the year, but I’d have to click on each one, then click again to find the plot synopsis, just to discover that it wasn’t the movie I was looking for.

Instead I might type “C-h-i” into a future Google product — let’s call it Instant Movie Search — quickly discard all the variations on “Chicago,” and get to “China.” I know I don’t want “Chinatown” or “The China Syndrome.” In the sidebar, I see that I can narrow it by “Action/Adventure.” Perfect. And there it is: “Big Trouble in Little China.” It shows me a movie poster thumbnail, a short synopsis and a cast list — even before I click on the title! Then I can go ahead and queue it up.

Instead of drilling down and back out through dozens of pages, I’ve typed five characters and clicked one menu link. Not only did I find what I was looking for, I knew that it was what I was looking for with a high degree of confidence before ever clicking on the link — indeed, before I ever glanced at the title. As soon as I saw that poster of Kurt Russell and Kim Cattrall out of the corner of my eye, I knew that was the movie I wanted.
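None of this requires exotic technology. Here’s a toy Python sketch of that hypothetical Instant Movie Search flow: substring matching on each keystroke, narrowed by a genre facet. The catalog entries are illustrative, and nothing here is a real Google API.

```python
# A toy catalog; a real service would index millions of titles.
CATALOG = [
    {"title": "Chicago", "genre": "Musical"},
    {"title": "Chinatown", "genre": "Drama"},
    {"title": "The China Syndrome", "genre": "Drama"},
    {"title": "Big Trouble in Little China", "genre": "Action/Adventure"},
]

def instant_search(typed, genre=None):
    """Return candidate titles for the characters typed so far,
    optionally narrowed by a genre facet."""
    typed = typed.lower()
    return [
        movie["title"] for movie in CATALOG
        if typed in movie["title"].lower()
        and (genre is None or movie["genre"] == genre)
    ]

print(instant_search("Chi"))  # live candidates on every keystroke
print(instant_search("Chi", genre="Action/Adventure"))
# -> ['Big Trouble in Little China']
```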

Gmail already does this with contacts, and it’s a big time saver. Now extend that concept to a half-dozen other forms of local search: Google Books, Google Scholar, Froogle, Desktop, News, Reader, Apps. Imagine it in all of Google’s local search sites, popping up thumbnails and textual descriptions.

We already have an analogous mode of search in the analog tech world — flipping through channels on television or scanning the dial on the radio. Simple up-down TV channel flipping, though, can’t make finer distinctions the closer it gets to your target, and analog radio tuners can’t deliver the same precision. Both, though, have the virtue that they can present what you’re looking for while you’re in the process of looking for it. Search engines couldn’t do that before. Now they can.

Right now, Google Instant is just a game, the alphanumeric equivalent of the Google buckyball logo from the other day. The real innovations in discovery are still on their way.


