Tuesday, April 8, 2014

POST-THOUGHTS: I got an insightful email response to this post saying that while this is a shock, these issues are almost inevitable with VR. I’m inclined to agree. The reason I wrote this post in the first place was the confusion and frustration of seeing so many people upset about this acquisition for what I perceived to be the wrong reasons. The focus of the conversation is self-centered and short-sighted. The ‘larger issues’ I discussed are inevitable, yes, but they have been brought to the absolute front burner now that the most promising VR company has been acquired by the world’s largest data-mining operation. VR ethics is a conversation that has not even begun to happen, but with this news it has become clear that we need to begin writing its constitution immediately.

Saturday, March 29, 2014

The wrong and right reasons to be upset about Oculus
Friday, March 28, 2014

6. Medication Management

“Smart” medication bottles remotely monitor patients’ adherence to prescribed medication and automatically alert them. According to a 2012 report in the Annals of Internal Medicine, lack of medication adherence is the estimated cause of 125,000 deaths in the United States alone. It was also cited as the reason for at least 10% of hospitalizations, at a cost of between $100 billion and $289 billion annually to the US healthcare system. This connection between wireless pill bottles and their servers will help patients, physicians, and the pharmaceutical industry ensure medication compliance, preventing drug abuse, further illness, and death.

from: http://blog.getrobin.com/2014/01/internet-of-things-examples/

Potentially really complicated, but it would be great to see an automated medication dispenser that took control of the whole process of taking medication.

(Complicated because, among other things, pharmacies would have to associate instructions with the patient, and IoT drug dispensers would have to access an encrypted API with drug instructions.)
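To make that concrete, here’s a minimal sketch of what the dispenser-side client for such an API might look like. Everything in it is hypothetical: the endpoint, the field names, and the bearer-token scheme are invented for illustration, since no such pharmacy API actually exists.

    # Hypothetical dispenser-side client -- the endpoint, fields, and
    # auth scheme below are invented for illustration only.
    import requests

    API_BASE = "https://pharmacy.example.com/v1"     # hypothetical pharmacy API
    DEVICE_TOKEN = "credential-issued-when-pairing"  # placeholder secret

    def fetch_dosing_schedule(rx_number: str) -> dict:
        """Fetch machine-readable instructions for one prescription over TLS."""
        resp = requests.get(
            f"{API_BASE}/prescriptions/{rx_number}",
            headers={"Authorization": f"Bearer {DEVICE_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        # e.g. {"drug": "lisinopril", "dose_mg": 20, "interval_hours": 24}
        return resp.json()

    def dose_is_due(schedule: dict, hours_since_last_dose: float) -> bool:
        """Dispenser-side check: has the prescribed interval elapsed?"""
        return hours_since_last_dose >= schedule["interval_hours"]

The genuinely hard parts are exactly the ones elided here: pairing the device to a patient’s identity in the first place, and keeping that credential out of the wrong hands.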

Wednesday, March 19, 2014

Janina Woods: UX for Virtual Reality (Games)

Only 19 views? Come on, people!

Saturday, January 18, 2014
User experience is often much less a design problem than it is an organizational problem. As much as we just want to do our work without obstruction, we can only be truly effective if we also make a compelling argument to people in other parts of the organization. These structured prioritization methods make that step reasonably painless by helping you produce written and visual records of your thought process.

Rian van der Merwe, for A List Apart
Tuesday, January 14, 2014

Monday, January 6, 2014

"Mobile to the Future" - Talk @ Google by Luke Wroblewski

This is a great talk—a must-watch for anyone working in mobile UX who intuitively gets the “Mobile First” approach but wants to see better reasons and examples.

In this video he makes the case that mobile devices are a new medium—in the vein of printed text and TV—rather than just a “smaller computer.” The bulk of this talk is him walking through login, purchase, and checkout processes—very standard, maybe banal processes—and demonstrating how taking a ‘mobile first’ approach can increase usability without reducing security.

I do think Luke makes things sound simpler than they are at times. Because he has used these standard components of the consumer mobile experience, he’s attacking a large percentage of usability problems. But I don’t think these findings directly translate to other, more app- or company-specific interactions. Yes, the person designing the UX takes into account the same or similar variables—that mobile is personal, geospatial, smaller, etc.—but it seems like the designs he suggests leverage users’ knowledge of how these forms should work. At times the improvements look more like Progressive Reduction than improved user experience.

That said, I think this talk got my brain working in a more “mobile first” way, and I hope to see Luke talk in person sometime. Totally great talk, totally worth the hour of watching.

Sunday, December 29, 2013

Spreadsheets - “data. in bed.” aka “quantifying sex”

Bizarre and sad, but separately.

Bizarre in that recording this sort of data is inherently social (in that we all must assume that the data is being collected and recorded elsewhere). The extent to which these “private” moments are shared is terrifying, not on an individual level, but en masse. Can you imagine your friends thinking your love life is not working because you’re not registering an above-average coitus frequency in your first 3 months of dating? Yeah, me neither…but I can imagine that for some people…

Sad, though, because of the message of the video, which is basically “An extremely hot girl with a nerdy boyfriend needs to use this app to get his attention.” Not only does this sound generally incorrect, but it also kinda paints her as needy and him as negligent. Is this app going to “solve the problems” of this relationship? Probably not as well as talking would.

There’s worse to say about the ad, but on to the app… The app itself sets up “goals” like frequency, duration, number of thrusts, and loudness (measured in decibels). This seems like another misdirection—there’s nothing wrong with recording this information, but “loudness” doesn’t translate to pleasure or meaning. “But you were so loud, you must have enjoyed it!”

I don’t ultimately disdain the product itself—it seems like it could find a niche audience of singles/couples/groups who find the data fascinating and kinky to manipulate—but the ad gives me a pretty poor impression of the people making this product. Ultimately, what bothers me the most is the idea that the quantification of sex could “improve” it. Turning “improvement” into an end recklessly and irrevocably distorts the experience.

(From spreadsheetsapp.com via sparks and honey’s “future of relationships” slideshow.)

Wednesday, December 18, 2013

It makes me excited/uncomfortable that this thesis presentation from ITP is probably identical to what I would have produced had I gone, although I probably would have approached the question with a slightly stronger liberal-arts bent and slightly weaker technical implementation.

In his presentation, Robbie goes through a series of new interaction methods, including using the Leap Motion to mock up the interaction from the Iron Man movies (skipping Minority Report’s similar interface), building a sort of Google Glass-like HUD prototype with a depth sensor, and creating a Leap-controlled holographic reproduction of a globe, which kinda reminds me of holograms I saw at the Oregon Museum of Science and Industry as a child. The sheer number of experiments he builds and conducts is impressive, and you really get a sense that he’s worked to become very familiar with the “future interfaces” space.

Answering Tom Igoe’s question, “Which of these do you find most comfortable as part of your everyday life?”, Robbie responds that the Google Glass interface seemed best to him. (It should be noted, his addition of a “take a picture” hand gesture to make his depth-camera-powered HUD record an image is a great idea.)

I got a sense from Robbie that he really enjoyed this work, put a lot into it, and got a lot out of it…but that he was a little disappointed with the results, a little less optimistic about the impact of these new interaction methods. Or maybe I’m reading into it—in the past year, I’ve become a little less excited by these novel interaction tools myself.

The Leap, Oculus Rift, Kinect (and other PrimeSense/Apple devices), Google Glass, Myo…it’s still not entirely clear how these products will get good enough to outshine other standard interfaces*, including input devices like mice, trackpads, and keyboards; output methods like flat displays and phone vibrations; and dual i/o things like touchscreens and voice-activated services/products like Google Now and Siri. It’s not clear that gestural interfaces will ever enable the degree of precision offered by more typical inputs, not just because the “technology isn’t there yet,” but because tools like mice allow us to anchor our arms to the desk, decouple commands (clicks) from movement (arm motion), and provide clear tactile feedback. (Edit: I should note, I understand the Oculus will probably hugely impact gaming. Some think it will change film-making, but I think that idea will go more the way of 3D television, at least in the short term. In this essay, I’m thinking about the use of these interaction tools outside of entertainment.)

When these new interaction tools do outshine standard interfaces, it’s in narrow contexts: companies like Oblong Industries make products for the government, where the level of education required to use them reduces the pressure to innovate on interaction quality. Or their products are made for the business sector—“like magic,” but for particular uses in a particular setting at a high price. These input devices/methods also seem best suited to collaboratively manipulating visualizations of large data sets rather than helping me make my coffee in the morning or contact a friend.

But going back to our initial devices—if these are the first steps, if the Rift is the low-fi version of its successor, if the Leap is the prototype of a reliable movement sensor, if the Kinect’s framerate and pixel density are 1/10th of what they will be, where will these interface types take us? In one way, Google Glass seems the most obvious success because it performs tasks that we already know, consider important, and do frequently—checking the weather, taking video calls, text messaging, taking photos. In fact, in the Glass demo video, the Google Glass basically operates like a smartphone on your head. But this is also its downfall—its understood functions already belong to the less wonky smartphone; its “to-be-discovered” features have yet to reveal themselves. We already know the tasks it performs are useful, we know the mechanics of the tasks can be satisfied by this interface, but the potential to be more than a “smartphone on your head” is murky. It’s not clear how the Glass or these other products will help us complete important, frequent activities better than the current options, or work themselves into our lives some other way. This isn’t to say that they won’t—personal computers were assumed impossible and unnecessary, smartphones were thought excessive…but still, there is no screaming need making it obvious how these products will become common.

(Tangential thought—these products, which provide a novel, specific input method, are almost the opposite of products coming from the Internet of Things movement, where objects provide output without forcing any input. This isn’t to say they’re direct complements—most IoT products, including the Nest, the Hue, and quantified-self products like the Nike Fuel and Fitbit, run “in the background.” They pride themselves on requiring the smallest amount of input possible. That said, their pairing at times seems the most futuristic. Imagine this Kinect-powered lighting setup paired with Hue lights! See the sketch below.)
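For fun, here’s a minimal sketch of what that pairing could look like. The get_hand_height() function is a stand-in for real Kinect skeleton tracking (the actual Kinect SDK works quite differently), and the bridge IP, app key, and light ID are placeholders; the HTTP call follows the shape of the Hue bridge’s local REST API.

    # Sketch: drive a Hue bulb's brightness with hand height.
    # get_hand_height() stands in for real Kinect skeleton tracking;
    # the HTTP call follows the Hue bridge's local REST API.
    import math
    import time
    import requests

    BRIDGE_IP = "192.168.1.2"       # placeholder: your Hue bridge's address
    APP_KEY = "registered-app-key"  # placeholder: issued by the bridge at pairing
    LIGHT_ID = 1

    def get_hand_height() -> float:
        """Stand-in for Kinect joint tracking: oscillates 0.0-1.0 so
        the sketch runs without any depth-camera hardware attached."""
        return (math.sin(time.time()) + 1) / 2

    def set_brightness(level: float) -> None:
        """Map 0.0-1.0 onto the Hue 'bri' range (1-254) and push it."""
        requests.put(
            f"http://{BRIDGE_IP}/api/{APP_KEY}/lights/{LIGHT_ID}/state",
            json={"on": True, "bri": max(1, int(level * 254))},
            timeout=2,
        )

    while True:
        set_brightness(get_hand_height())
        time.sleep(0.1)  # ~10 updates/sec is plenty for a light

Raise your hand and the room gets brighter; the interesting design question is whether that ever beats a wall switch.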

One product that I think could make a real impact is Meta’s “Space Glasses.” I still have yet to see a convincing proof of concept for their device, but a true augmented-reality interface could offer passive data presentation while taking advantage of a gesture vocabulary. The benefits seem real to me, even if their current teaser video avoids realistic use cases. While the device lacks the haptic feedback required to successfully simulate the act of sculpting or designing a 3D product with your hands, it could extend one’s work space, offer passive data about current situations, and situate information using 3D modeling; these features could all be added on top of Google Glass’s capabilities. Granted, the glasses from Meta are not being built as a mobile device and require an additional computer to power them, but the lighter/smaller/faster/cheaper of Moore’s Law could help bridge this gap, even if Moore’s Law peters out soon.

Smart watches are another product type on the horizon, but I can’t see them having significant impact outside of a pre-existing technology infrastructure. Apple may be the one to make smart watches desirable objects by incorporating them into an existing Apple-powered product system, but I can’t see this development as more than moving the smartphone to the user’s wrist rather than the head (like Glass). I bet in a couple years I’ll regret saying this, though, as I haven’t spent as much time thinking about it. (Edit: Tog writes about the future of iWatches. Great read.)

All of this said, Chris Dixon recently reminded me of Amara’s Law—that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” I want to take a stab at why that is the case with these technologies.

The main reason it’s difficult to grasp the potential of recently introduced input technologies like the Leap and the Kinect is that we are primarily viewing them outside of their future, integrated contexts, which may include multiple products and multiple instances of these technologies. (Edit: I’m not so sure anymore. Maybe, like with 3D TV, it has more to do with the ecosystem around these tools…back to “on the fence” about this.)

First, depth sensing and gesture recognition are features, not products. As opposed to products themselves (software suites, for example), it’s not entirely clear what to do with them outside their current confines. The majority of Kinect end-users are buying them as part of the Xbox package—only experimenters and tinkerers are buying them standalone. We’re beginning to see further integration, as the Leap is already being built into HP computers, but the add-on value is still vague, especially given that existing models of computing are designed around more standard input methods. But these two are still the most obvious cases. Can you imagine the Oculus Rift as a productivity tool? What will the Myo control in the future? How will Apple’s iWatch work with iBeacons and IoT products to augment your everyday experience? What will it look like all together?

There are a few examples. “Tech of the future” videos tend to offer a reflection of how companies imagine themselves in the future rather than how the future will really appear. That said, they offer a view of integrated technology environments and explore ways in which technologies work together to accomplish users’ goals. For a slightly different approach, the dark, fictional video “Sight” provides a less sterile version of these presentations.

Another example: Oliver Kreylos has built software that uses the Kinect, Razer Hydra controllers (similar to the Wii controllers), and the Oculus Rift. The demo video presents the technology in a way similar to early Oblong and SixthSense demonstrations, but, like Oblong’s current enterprise system Mezzanine, Kreylos uses controllers that offer a larger number of input methods (“buttons”) without reducing precision the way hand gestures do. Kreylos’s example is still primarily a visual data-manipulation tool, but it’s a “personal” enough solution that you can actually imagine someone using it at their desk.

Steven Sinofsky at learningbyshipping.com thinks that 2014 will be the “culmination of the past 15 years of development of the consumer internet.” He’s focusing, however, on more standard devices (including “phablets”), storage methods (the cloud), and consumer behavior. I think the novel interaction technologies I’ve discussed still have 3 or 4 years before we begin to see them in full bloom.

I’d like to spend more time in this space—the “nearish future of the recently possible.” The movement toward an Internet of Things, which I suggested as an aside earlier, will, I think, play a major role in determining the possibilities of new interaction technology. There will be advances in both the consumer and enterprise worlds as we find ways that novel interactions can help us better understand and manipulate information, add intuitive methods of navigating data, and simplify and extend actions. Make things easier.

* That said, the nuance of gesture has already enabled people to do more than they can with a mouse and keyboard in specific situations. Kinectar is a great example.

Saturday, November 2, 2013